CN111179350B - Hip joint image processing system

Info

Publication number
CN111179350B
Authority
CN
China
Prior art keywords
image
prosthesis
hip
image processing
bone
Prior art date
Legal status
Active
Application number
CN202010090208.2A
Other languages
Chinese (zh)
Other versions
CN111179350A (en)
Inventor
张逸凌
刘星宇
Current Assignee
Changmugu Medical Technology Qingdao Co ltd
Zhang Yiling
Longwood Valley Medtech Co Ltd
Original Assignee
Changmugu Medical Technology Qingdao Co ltd
Longwood Valley Medtech Co Ltd
Priority date
Filing date
Publication date
Application filed by Changmugu Medical Technology Qingdao Co ltd, Longwood Valley Medtech Co Ltd filed Critical Changmugu Medical Technology Qingdao Co ltd
Priority to CN202010090208.2A
Publication of CN111179350A
Application granted
Publication of CN111179350B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06N 20/00 Machine learning
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/11 Region-based segmentation
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T 2207/20081 Training; Learning
    • G06T 2207/30204 Marker

Abstract

The invention discloses a hip joint image processing system, comprising: an image processing module adapted to perform an image processing method for a hip joint to generate an annotated bone image with annotation data; a prosthesis placement module adapted to determine a prosthesis to be placed and the position of the prosthesis; and a functional module adapted to obtain the determined prosthesis placement plan. The image processing module is adapted to: acquire a hip joint image including a hip joint; input the hip joint image into a hip joint segmentation model component determined from the surgical information and acquire a three-dimensional bone model image including only bone; and generate an annotated bone image with annotation data by annotating the positions of key points in the three-dimensional bone model image. With the method and device, highly accurate three-dimensional measurement data can be provided for prosthesis placement before total hip replacement surgery.

Description

Hip joint image processing system
Technical Field
The invention relates to the technical field of image processing, in particular to a hip joint image processing system.
Background
With the rapid development of digital medicine, digital technology plays an increasingly important role in surgery. Digital surgical planning overcomes the visual limitations of surgeons, making data measurement and diagnosis more accurate and operations more precise and efficient.
For orthopedic surgery (e.g., hip replacement surgery), conventional methods plan the operation using a combination of X-ray films and prosthesis templates. In the traditional planning method, a prosthesis template is overlaid on an X-ray film for comparison. Because the scales of the X-ray film and the template are not uniform, the size of the prosthesis actually used in the operation cannot be accurately predicted from the planning result, and the traditional method can neither display three-dimensional parameters nor evaluate the spatial position of the prosthesis. As a result, preoperative planning with the traditional method is time-consuming for the operator, and the preoperative plan often fails to match the size, model, and spatial position of the prosthesis actually used. In short, the traditional planning method takes a long time, produces inaccurate results, cannot display information such as three-dimensional parameters and spatial position angles, cannot effectively reduce the difficulty of the operation, and sometimes even provides clinicians with wrong information, increasing the surgical risk.
In view of this, an accurate and efficient three-dimensional preoperative planning system is needed, one that can accurately provide anatomical parameter information from hip joint image data, automatically and efficiently calculate key anatomical parameters, match them against a database of artificial joint prostheses, and recommend the most suitable surgical plan, so as to better assist specialist physicians in formulating a surgical plan.
Disclosure of Invention
To this end, the present invention provides a deep-learning-based hip joint image processing method and a computing device, in an attempt to solve, or at least alleviate, at least one of the problems presented above.
According to an aspect of the present invention, there is provided a deep-learning-based hip joint image processing method, adapted to be executed in a computing device and comprising the steps of: acquiring hip joint information corresponding to a hip joint to be operated on, wherein the hip joint information comprises a hip joint DICOM image of the hip joint and surgical information related to the operation; inputting the hip joint DICOM image into a hip joint segmentation model component determined from the surgical information to obtain a three-dimensional bone model image comprising bones; and generating an annotated bone image with annotation data by annotating the positions of key points in the three-dimensional bone model image.
Optionally, in the method according to the invention, the hip image is a medical image stored in DICOM format.
Optionally, in the method according to the invention, the surgical information comprises the position of the hip joint in the human body, the disease affecting the joint, and the category to which the prosthesis to be placed belongs.
Optionally, in the method according to the present invention, the step of inputting the hip image to a hip segmentation model component determined from the surgical information to obtain a three-dimensional bone model image including a bone comprises: determining a hip joint segmentation model component corresponding to the class by using the class to which the prosthesis to be placed belongs in the operation information; respectively inputting the hip joint images into the hip joint segmentation model component, and acquiring each two-dimensional image only comprising bones, wherein each two-dimensional image comprises images of mutually segmented pelvis, left femur and right femur; and overlapping the two-dimensional images on the space to generate a three-dimensional bone model image only comprising bones.
Optionally, in the method according to the present invention, the hip segmentation model component is obtained by performing machine learning based on correspondence between a plurality of training hip images and a plurality of training bones extracted from the plurality of training hip images, respectively.
Optionally, in the method according to the present invention, the step of spatially superimposing the respective two-dimensional images to generate a three-dimensional bone model image including only bone comprises: setting the left-right direction of the pelvis in each two-dimensional image as the x-axis direction, the CT scanning direction as the z-axis direction, and the direction perpendicular to the plane formed by the x-axis and the z-axis as the y-axis direction, wherein the left-right direction of the pelvis refers to the horizontal direction from the left side of the pelvis to the right side of the pelvis; and superimposing the two-dimensional images spatially along the z-axis direction to generate a three-dimensional bone model image including only bone.
Optionally, in the method according to the present invention, the step of generating an annotated bone image with annotation data comprises: identifying the positions of predetermined key points in the three-dimensional bone model image; and annotating the positions of the key points in the three-dimensional bone model image to generate the annotated bone image with annotation data.
Optionally, in a method according to the invention, the key points comprise one or more of an anterior superior iliac spine, a pubic symphysis, a lesser trochanter, a femoral head center of gravity, a medullary cavity axis, and an acetabular axis.
Optionally, in the method according to the present invention, the step of generating an annotated bone image with annotation data is followed by: and correcting each part in the labeled skeleton image to generate a corrected labeled skeleton image.
Optionally, in the method according to the invention, the corrective treatment comprises a corrective treatment for the pelvis and a corrective treatment for the bilateral femurs.
Optionally, in the method according to the present invention, for correction of the pelvis, the step of performing correction processing on each part in the annotated bone image to generate a corrected annotated bone image comprises: determining three key points, namely the bilateral anterior superior iliac spines and the pubic symphysis; and making the APP plane (anterior pelvic plane) formed by the three key points perpendicular to the y-axis, so as to generate a corrected annotated bone image as the annotated bone image.
Optionally, in the method according to the present invention, for correction of both femurs, the step of performing correction processing on each part in the annotated bone image to generate a corrected annotated bone image comprises: determining the femoral head center of gravity and the medullary cavity axis in the annotated bone image; forming a correction plane from the femoral head center of gravity and the medullary cavity axis; and rotating the correction plane until it is parallel to the plane formed by the x-axis and z-axis directions, to generate a corrected annotated bone image as the annotated bone image.
Optionally, in the method according to the present invention, the step of generating an annotated bone image with annotation data is followed by: and determining the prosthesis to be placed and the placement information of the prosthesis according to the marked bone image.
Optionally, in the method according to the invention, the step of determining the prosthesis to be placed and the placement information of the prosthesis from the annotated bone image comprises: determining the size, model, and placement information of the acetabular cup prosthesis from the annotated bone image, wherein the placement information comprises the angle at which the acetabular cup is placed and its position in three-dimensional space; and determining the femoral rotation center, the femoral medullary cavity axis, and the femoral medullary cavity size from the annotations in the annotated bone image, and determining the size and model of the femoral stem prosthesis and its three-dimensional placement from these, wherein the placement comprises the varus and valgus angles of the placed prosthesis and the placement position.
Optionally, in the method according to the present invention, the step of determining the model and placement information of the acetabular cup prosthesis from the annotated bone image comprises: identifying the lunate surface of the acetabulum in the annotated bone image; and determining the rotation center, size, and model of the acetabular cup prosthesis and its three-dimensional placement information by using a sphere fitted to the lunate surface.
Optionally, in the method according to the invention, the step of determining the model and placement position of the femoral stem prosthesis according to said dimensions comprises: determining a preliminary position of the femoral stem prosthesis from the acetabular rotation center and the positions of the bilateral femoral lesser trochanters; and determining the size, model, and placement position of the femoral stem prosthesis by fitting it to the femoral medullary cavity axis and the femoral medullary cavity size at a plurality of different levels.
Optionally, in the method according to the present invention, the step of determining a prosthesis to be placed and placement information of the prosthesis from the annotated bone image further comprises: and calculating the femoral stem prosthesis and the acetabular cup prosthesis according to evaluation factors to generate evaluation information.
Optionally, in the method according to the invention, the evaluation factor comprises one or more of: acetabular cup coverage, whether the two lower limbs are equal in length, and lateral offset.
According to yet another aspect of the present invention, there is provided a computing device comprising: one or more processors; and a memory; one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods described above.
According to a further aspect of the invention there is provided a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the methods described above.
According to the scheme of the invention, accurate hip joint data can be provided before prosthesis placement is carried out. The model size of the prosthesis and the position where the prosthesis is placed can be determined by generating a three-dimensional stereo image that includes only bone information. Furthermore, the bones in each image can be identified more accurately by using a trained computer model, and an accurate three-dimensional image can be generated from them. In orthopaedics in particular, the scheme of the invention can help a doctor locate the actual positions of the acetabular cup prosthesis and the femoral stem prosthesis with maximum precision, help the operator evaluate the placement position of the prosthesis from all viewing angles in three dimensions, and accurately plan the model and position of the prosthesis before the operation. After the prosthesis is placed, the pelvis or femur can be rotated through 360 degrees to evaluate the placement angle of the prosthesis, and surgical parameters after placement, such as acetabular cup coverage, leg length, and offset, can also be evaluated. This shortens the learning curve of young clinicians and reduces the incidence of complications such as prosthesis dislocation, prosthesis loosening, and pain.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 shows a schematic diagram of a configuration of a computing device 100 according to one embodiment of the invention;
FIG. 2 shows a schematic flow diagram of an image processing method 200 for a hip joint according to one embodiment of the invention;
FIG. 3 shows a diagram of a hip replacement according to one embodiment of the present invention;
fig. 4 shows a schematic diagram of an image processing system 400 for a hip joint according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 is a block diagram of an example computing device 100. In a basic configuration 102, computing device 100 typically includes system memory 106 and one or more processors 104. A memory bus 108 may be used for communication between the processor 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processor, including but not limited to: a microprocessor (μ P), a microcontroller (μ C), a Digital Signal Processor (DSP), or any combination thereof. The processor 104 may include one or more levels of cache, such as a level one cache 110 and a level two cache 112, a processor core 114, and registers 116. The example processor core 114 may include an Arithmetic Logic Unit (ALU), a Floating Point Unit (FPU), a digital signal processing core (DSP core), or any combination thereof. The example memory controller 118 may be used with the processor 104, or in some implementations the memory controller 118 may be an internal part of the processor 104.
Depending on the desired configuration, system memory 106 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. System memory 106 may include an operating system 120, one or more applications 122, and program data 124. In some embodiments, application 122 may be arranged to operate with program data 124 on an operating system. In some embodiments, computing device 100 is configured to perform image processing method 200 for the hip joint, with program data 124 including instructions for performing the methods described above.
Computing device 100 may also include an interface bus 140 that facilitates communication from various interface devices (e.g., output devices 142, peripheral interfaces 144, and communication devices 146) to the basic configuration 102 via the bus/interface controller 130. The example output device 142 includes a graphics processing unit 148 and an audio processing unit 150. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 152. Example peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to facilitate communication with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, image input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 158. An example communication device 146 may include a network controller 160, which may be arranged to facilitate communications with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, or program modules in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media. A "modulated data signal" is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or dedicated wired connection, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR), or other wireless media. The term computer readable media as used herein may include both storage media and communication media. In some embodiments, one or more programs are stored in a computer readable medium, the one or more programs including instructions for performing certain methods (e.g., method 200).
Computing device 100 may be implemented as part of a small-form factor portable (or mobile) electronic device such as a cellular telephone, a digital camera, a Personal Digital Assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application specific device, or a hybrid device that include any of the above functions. Of course, the computing device 100 may also be implemented as a personal computer including both desktop and notebook computer configurations, or as a server having the above-described configuration. The embodiments of the present invention are not limited thereto.
Fig. 2 shows a flow diagram of a deep-learning-based image processing method 200 for a hip joint according to an embodiment of the invention. In preoperative planning for hip joint surgery, the image processing method of the present application can be used to determine a suitable prosthesis and its placement. To facilitate understanding of the invention, fig. 3 shows a diagram of a hip replacement.
The hip joint mainly consists of a convex femoral head and a concave acetabulum. The femoral head is spherical and the acetabulum is shaped like a bowl, so the joint they form remains stable under the support of the surrounding ligaments and muscles while still being able to move freely in all directions, like a ball sliding in a bowl. As the joint connecting the lower limb and the pelvis, it is an important joint for absorbing the impact forces generated when people stand, walk, run, and jump.
As shown in fig. 3, hip replacement replaces the damaged acetabulum and femoral head with an artificial cup-shaped prosthesis 301 and a ball-shaped prosthesis 302, respectively. In the present invention, the model and placement position of the prosthesis 301 and the prosthesis 302 can be determined using the image processing method of the present invention, thereby facilitating the subsequent operation.
The flow of the image processing method 200 for a hip joint according to an embodiment of the present invention will be described in detail below with reference to fig. 2.
In step S210, hip joint information corresponding to the hip joint to be operated on is acquired, wherein the hip joint information includes a hip joint image of the hip joint and surgical information related to the surgery.
Specifically, in the medical field, the structure and density of internal tissues and organs of the human body can be represented in image form through the interaction of a medium (such as X-rays, electromagnetic fields, or ultrasonic waves) with the human body, so that a diagnostician can make a judgment based on the information provided by the image.
DICOM is widely used in radiology, cardiovascular imaging, and radiological diagnostics (X-ray, CT, magnetic resonance, ultrasound, etc.), and is increasingly used in ophthalmology, dentistry, and other medical fields. The medical images of all patients are stored in the DICOM file format, which makes it convenient for technicians to analyze medical images in a uniform format.
Applied to the present invention, a physician may scan the hip joint using CT, generate files in DICOM format, and save them. To make the method run faster, the DICOM files of different patients may be stored in separate folders; for example, if there are three patients, the hip joint images corresponding to the three patients are stored in three separate folders.
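As a minimal sketch of this per-patient organization, the following snippet reads one patient's CT series from such a folder and stacks it into a volume. The use of pydicom, the folder layout, and the path names are assumptions for illustration and are not specified in the patent.

```python
# Minimal sketch: read one patient's CT series from a per-patient DICOM folder.
# The pydicom library and the folder layout are assumptions, not part of the patent.
from pathlib import Path

import numpy as np
import pydicom


def load_ct_series(patient_dir: str) -> np.ndarray:
    """Load every DICOM slice in a folder and return a (z, y, x) volume."""
    slices = [pydicom.dcmread(p) for p in Path(patient_dir).glob("*.dcm")]
    # Order slices along the scan (z) axis using the DICOM slice-position tag.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    return np.stack([s.pixel_array.astype(np.int16) for s in slices], axis=0)


volume = load_ct_series("patients/patient_001")  # hypothetical path
print(volume.shape)                              # e.g. (number_of_slices, 512, 512)
```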
In addition, surgical information relating to the surgery may also be obtained. The surgical information includes the position of the hip joint in the body, the disease affecting the joint, and the category to which the prosthesis to be placed belongs.
As is well known, every person has hip joints on both the left and right sides, i.e., a joint connecting the thigh and the pelvis on each side of the body. Therefore, when performing preoperative planning, the side of the hip joint to be operated on, e.g., left or right, can first be determined.
Furthermore, the patient's hip joint disease can be determined; disease categories include femoral head necrosis, femoral neck fracture, congenital hip dysplasia, osteoarthritis, ankylosing spondylitis, and the like. After the patient's disease category is identified, prosthetic products associated with that category, such as acetabular cups, liners, ball heads, and femoral stems, can be provided.
Subsequently, in step S220, the hip joint image is input into the hip joint segmentation model component determined from the surgical information, and a three-dimensional bone model image including only bone is acquired.
Specifically, the hip images are input to the hip segmentation model component, and two-dimensional images including only bones are acquired, wherein the hip segmentation model component is obtained by machine learning from correspondence between a plurality of training hip images and a plurality of training bones extracted from the plurality of training hip images. In one embodiment, the training bones are the pelvis, the left femur, and the right femur. In this way, the two-dimensional images acquired by the hip segmentation model component, which include only the bone, include an image of the pelvis, an image of the left femur, and an image of the right femur, which are segmented from each other.
In implementation, a plurality of training hip joint images can be obtained and the training bones extracted from them by manual annotation; a hip joint segmentation model component with trainable parameters is then constructed. The component is trained using the correspondence between the training hip joint images and the training bones extracted from them, and the parameters are adjusted until the component meets a preset requirement. The hip joint segmentation model component may be a model component built with existing machine learning algorithms. In this way, information that is unnecessary for the operation, such as the CT bed and soft tissue, can be removed.
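The patent leaves the choice of learning algorithm open, so the following sketch of such a training loop assumes PyTorch and a U-Net-style slice-wise segmentation network purely for illustration; the loss, optimizer, and label layout are likewise assumptions.

```python
# Sketch of training a slice-wise bone-segmentation component, assuming PyTorch and a
# U-Net-style network; the patent only requires "existing machine learning algorithms".
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

NUM_CLASSES = 4  # background, pelvis, left femur, right femur


def train(model: nn.Module, loader: DataLoader, epochs: int = 20) -> nn.Module:
    """Fit the model on (CT slice, manually annotated mask) pairs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()           # per-pixel class labels
    model.train()
    for _ in range(epochs):
        for ct_slice, mask in loader:           # mask: (N, H, W) integer labels
            optimizer.zero_grad()
            logits = model(ct_slice)            # (N, NUM_CLASSES, H, W)
            loss = criterion(logits, mask)
            loss.backward()
            optimizer.step()                    # adjust the trainable parameters
    return model
```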
Subsequently, the respective two-dimensional images are spatially superimposed to generate a three-dimensional bone model image including only bone. This comprises: setting the left-right direction of the pelvis in each two-dimensional image as the x-axis direction, the CT scanning direction as the z-axis direction, and the direction perpendicular to the plane formed by the x-axis and the z-axis as the y-axis direction; and superimposing the two-dimensional images spatially along the z-axis direction to generate the three-dimensional bone model image.
In practice, a plurality of two-dimensional images containing only bone may be acquired after the same hip joint is imaged from different angles. Some of these two-dimensional images may therefore lie in the x-z plane and some in the x-y plane, and by superimposing them a three-dimensional bone model image containing only bone can be generated.
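A minimal sketch of the stacking step, assuming the per-slice segmentation masks are available as NumPy arrays already ordered along the scan direction:

```python
# Stack per-slice bone masks along the CT scan (z) axis to obtain a bone-only volume.
# Axis names follow the x/y/z convention defined above; inputs are assumed to be
# same-sized 2D arrays already ordered along the scan direction.
import numpy as np


def build_bone_volume(bone_masks: list[np.ndarray]) -> np.ndarray:
    """bone_masks: per-slice 2D arrays (y, x) containing only bone labels."""
    return np.stack(bone_masks, axis=0)  # resulting shape: (z, y, x)
```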
Subsequently, in step S230, an annotated bone image with annotation data is generated by annotating the locations of the keypoints in the three-dimensional bone model image. The method specifically comprises the following steps: locations of predetermined keypoints in the three-dimensional bone model image are identified, wherein the keypoints may include one or more of an anterior superior iliac spine, a pubic symphysis, a lesser trochanter, a femoral head center of gravity, a medullary cavity axis, and an acetabular axis.
The positions of the key points in the three-dimensional bone model image are then measured and annotated to generate the annotated bone image with annotation data, where the measured data include the acetabulum size, the offset, and the like.
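One way such annotations and measurements might be represented is sketched below: key points stored as 3D coordinates, with the femoral offset taken as the perpendicular distance from the femoral head center to the medullary cavity axis. The data structure, coordinate values, and names are illustrative assumptions.

```python
# Sketch: key points as 3D coordinates (mm) plus one derived measurement, the femoral
# offset, computed as the perpendicular distance from the femoral head centre to the
# medullary-cavity axis. All values and names are hypothetical.
import numpy as np

keypoints = {
    "femoral_head_center": np.array([48.0, 62.0, 110.0]),
    "canal_axis_point":    np.array([60.0, 70.0,  40.0]),   # a point on the axis
    "canal_axis_dir":      np.array([0.05, 0.02,  1.00]),   # axis direction
}


def femoral_offset(head_center, axis_point, axis_dir) -> float:
    d = axis_dir / np.linalg.norm(axis_dir)
    v = head_center - axis_point
    return float(np.linalg.norm(v - np.dot(v, d) * d))  # perpendicular distance


print(femoral_offset(keypoints["femoral_head_center"],
                     keypoints["canal_axis_point"],
                     keypoints["canal_axis_dir"]))
```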
Optionally, the step of generating an annotated bone image with annotation data is followed by: performing correction processing on each part in the annotated bone image to generate a corrected annotated bone image. In implementations, the corrective treatment may include a corrective treatment for the pelvis and a corrective treatment for the bilateral femurs.
Corrective treatment for the pelvis includes: 1) obtaining three points, the bilateral anterior superior iliac spines and the pubic symphysis, and determining the pelvic tilt angle from the APP plane (anterior pelvic plane) formed by these three points and its angle with the y-axis; and 2) making the line connecting the bilateral anterior superior iliac spines parallel to the x-axis, thereby realizing the corrective treatment of the pelvis.
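The tilt computation described above can be sketched as follows; the choice of a rotation about the x-axis to remove the tilt is a simplifying assumption for illustration.

```python
# Sketch: pelvic tilt from the APP (anterior pelvic plane) defined by the bilateral
# anterior superior iliac spines (ASIS) and the pubic symphysis, in the x/y/z
# convention used above. The x-axis rotation is a simplifying assumption.
import numpy as np


def pelvic_tilt_deg(asis_left, asis_right, pubic_symphysis) -> float:
    n = np.cross(asis_right - asis_left, pubic_symphysis - asis_left)
    n /= np.linalg.norm(n)                      # APP plane normal
    y = np.array([0.0, 1.0, 0.0])
    cos_a = abs(float(np.dot(n, y)))
    # 0 degrees when the APP is perpendicular to the y-axis (corrected position).
    return float(np.degrees(np.arccos(np.clip(cos_a, 0.0, 1.0))))


def rotation_about_x(angle_deg: float) -> np.ndarray:
    a = np.radians(angle_deg)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a),  np.cos(a)]])
```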
Corrective treatment for the femur includes: determining the femoral head center of gravity and the medullary cavity axis in the annotated bone image; forming a correction plane from the femoral head center of gravity and the medullary cavity axis; rotating the correction plane until it is parallel to the plane formed by the x-axis and z-axis directions; and at the same time making the line connecting the lowest points of the left and right posterior femoral condyles parallel to the plane formed by the x-axis and z-axis directions, thereby correcting the femur.
After the correction processing, corrected annotated bone images (covering both the pelvis correction and the femur correction) can be generated as the annotated bone image, i.e., corrected key point positions are obtained. A suitable prosthesis can then be matched from the artificial hip prosthesis database model according to the key point positions, realizing three-dimensional preoperative planning of total hip replacement.
Optionally, the method further comprises: determining the prosthesis to be placed and the placement information of the prosthesis from the annotated bone image. First, the size, model, and placement information of the acetabular cup prosthesis are determined from the annotated bone image, where the placement information includes the angles at which the acetabular cup is placed (such as the acetabular cup anteversion angle, abduction angle, placement depth, and rotation center position) and its placement position in three-dimensional space. This can be obtained by: identifying the lunate surface of the acetabulum in the annotated bone image; and determining the rotation center, size, and model of the acetabular cup prosthesis and its three-dimensional placement information using a sphere fitted to the lunate surface.
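A least-squares sphere fit of this kind can be sketched as below; the sampling of lunate-surface points and the mapping from the fitted radius to a cup size in the prosthesis database are assumptions not detailed in the patent.

```python
# Sketch: least-squares sphere fit to points sampled on the acetabular lunate surface.
# The fitted centre can serve as the hip rotation centre, and the diameter can be
# matched against a cup-size table (how that table looks is an assumption).
import numpy as np


def fit_sphere(points: np.ndarray):
    """points: (N, 3) surface coordinates. Returns (centre, radius)."""
    # Solve |p - c|^2 = r^2 in linear form: 2 p.c + (r^2 - |c|^2) = |p|^2.
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points ** 2, axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = x[:3]
    radius = float(np.sqrt(x[3] + centre @ centre))
    return centre, radius
```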
In practice, an acetabular cup prosthesis of the most suitable size is automatically placed in the most reasonable position by a machine learning algorithm; in addition, clinicians can adjust, observe, and evaluate the size and position angles of the acetabular cup prosthesis according to their own experience and habits. In common practice, the acetabular cup is typically placed at 40 degrees of abduction and 20 degrees of anteversion.
After the acetabular cup prosthesis has been determined, the next step may be to handle the femoral stem prosthesis. Specifically, the femoral rotation center, the femoral medullary cavity axis, and the femoral medullary cavity size are determined from the annotations in the annotated bone image (the main annotation data being the identified key points). From these, the size and model of the femoral stem prosthesis and its three-dimensional placement are determined, where the placement comprises a placement angle and a placement position, and the placement angle of the femoral stem prosthesis may include varus and valgus angles.
When determining the model and placement position of the femoral stem prosthesis from the size, a preliminary position of the femoral stem prosthesis can first be determined from the femoral rotation center, the femoral axis, and the positions of the bilateral femoral lesser trochanters. The preliminary position is then adjusted using the placement position of the acetabular cup prosthesis to determine the specific model and placement position of the femoral stem prosthesis. Specifically, with the femoral stem position back-calculated from the acetabular position, four points are determined at each of a plurality of different levels of the medullary cavity (for example, 3 levels, giving 12 points in total), and the prosthesis model and placement position whose points coincide most closely with the femoral cortical bone are taken as optimal. In short, the model and placement position of the femoral stem are determined by fitting the femoral stem to the femoral medullary cavity size at three different levels.
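The fit criterion described above might be scored as in the following sketch, where each candidate stem contributes four cross-section points per level on three levels and is ranked by how closely those points approach the cortical-bone contour; the data structures and the candidate database are assumptions.

```python
# Sketch of the 12-point fit described above: four points per level on three canal
# levels, scored by distance to the cortical-bone contour at each level. The candidate
# stem database and point sampling are assumptions for illustration.
import numpy as np


def stem_fit_score(stem_points_by_level, cortical_contours_by_level) -> float:
    """Lower score = the stem cross-sections sit closer to the cortical bone."""
    total = 0.0
    for level_pts, contour in zip(stem_points_by_level, cortical_contours_by_level):
        for p in level_pts:                                   # 4 points per level
            total += float(np.min(np.linalg.norm(contour - p, axis=1)))
    return total


def choose_stem(candidates, cortical_contours_by_level):
    """candidates: {model_name: list of three levels, each four 2D points}."""
    return min(candidates,
               key=lambda m: stem_fit_score(candidates[m], cortical_contours_by_level))
```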
In addition, the step of determining the prosthesis to be placed and the placement information of the prosthesis from the annotated bone image further comprises: calculating evaluation information for the femoral stem prosthesis and the acetabular cup prosthesis according to evaluation factors, where the evaluation factors include one or more of the following: acetabular coverage, whether the two lower limbs are of equal length, and the offset on each side.
Acetabular coverage is determined from the area of intersection between the acetabulum and the surface of the acetabular cup prosthesis. Whether the lower limbs are of equal length can be determined from the positions of the lesser trochanters on both sides, and the bilateral offsets can be compared to judge the offset on each side.
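Two of these evaluation factors can be sketched directly from the annotated landmarks, assuming the coordinate convention defined earlier; the landmark names are illustrative.

```python
# Sketch: leg-length and offset comparison from annotated landmarks. z is the
# cranio-caudal (CT scan) axis in the convention defined earlier; names are
# illustrative, not taken from the patent.
import numpy as np


def leg_length_difference_mm(lesser_troch_left, lesser_troch_right) -> float:
    """Difference in lesser-trochanter height along z; near 0 means equal leg length."""
    return float(abs(lesser_troch_left[2] - lesser_troch_right[2]))


def offset_difference_mm(offset_planned_side: float, offset_contralateral: float) -> float:
    """Compare the femoral offset of the planned side against the opposite side."""
    return abs(offset_planned_side - offset_contralateral)
```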
In summary, according to the scheme of the invention, highly accurate hip joint data can be acquired. In this process, machine learning is used to accurately identify bone information and to remove interference data irrelevant to hip joint surgery, and the resulting images can be used to form a three-dimensional image, so that a doctor can assess the condition more intuitively. Furthermore, the generated images can be corrected in different respects, avoiding deviations caused by various factors. On this basis, a suitable prosthesis placement plan can be determined and continuously adjusted along the way, making the preoperative planning more reliable. Finally, the data that the patient or medical personnel wish to output can be output.
According to the solution of the invention, the prosthesis and its placement position can be determined by generating a three-dimensional stereo image that contains only bone information. Furthermore, the bones in each image can be identified more accurately by using a trained computer model, and accurate three-dimensional images can be generated from them. In orthopaedics in particular, the solution of the invention helps doctors locate the actual positions of the acetabular cup prosthesis and the femoral stem prosthesis with maximum precision, helps surgeons evaluate the placement position of the prosthesis from all viewing angles in three dimensions, and enables accurate preoperative planning of the prosthesis model and position. After the prosthesis is placed, the pelvis or femur can be rotated through 360 degrees to evaluate the placement angle, and surgical parameters after placement, such as leg length and offset, can also be evaluated. This shortens the learning curve of young clinicians and reduces the incidence of complications such as prosthesis dislocation, prosthesis loosening, and pain.
In implementation, an image processing system 400 for the hip joint as shown in fig. 4 may also be provided. As shown in FIG. 4, the system 400 may include an image processing module 410, a prosthesis placement module 420, and a function module 430.
The image processing module 410 is configured to perform the image processing method 200 for the hip joint according to one embodiment of the present invention. That is, the image processing module 410 may perform steps S210 to S230 on the hip joint image to generate an annotated bone image with annotation data.
This image may then be input to the prosthesis placement module 420, which determines the prosthesis to be placed and the position of the prosthesis. How the prosthesis to be placed and its position are determined has been described in detail above and will not be repeated here.
The determined prosthesis placement plan may then be input to the function module 430. The function modules 430 may include an osteotomy module, a simulated motion module, an X-ray generation module, a data generation module, a ball head lining model adjustment module, a bone transparency adjustment module, and a parameter measurement module.
The osteotomy module performs the osteotomy after the prosthesis is installed. The simulated motion module runs after the osteotomy and can be used to simulate motion after the prosthesis is installed. The X-ray generation module, the bone transparency adjustment module, and the parameter measurement module can be used throughout the work of the prosthesis placement module, while the data generation module runs after the prosthesis is placed; these modules allow real-time, dynamic adjustment and control during prosthesis placement. The ball head liner model adjustment module can be used after the femoral stem is placed. It should be noted that the sub-modules covered by the function module are not limited to those above; any module that can be used to make functional adjustments to the placement of the prosthesis can be applied here.
In summary, according to the scheme of the invention, the image processing module can be used to acquire highly accurate hip joint data. In this process, machine learning is used to accurately identify bone information and to remove interference data irrelevant to hip joint surgery, and the resulting images can be used to form three-dimensional images, so that a doctor can assess the condition more intuitively. Furthermore, the generated images can be corrected in different respects, avoiding deviations caused by various factors. On this basis, the prosthesis placement module can be used to determine a suitable prosthesis placement plan, which can be continuously adjusted along the way, making the preoperative planning more reliable. Finally, the function module can be used to output the data that the patient or medical personnel wish to output.
According to the solution of the invention, the prosthesis and its placement position can be determined by generating a three-dimensional stereo image that contains only bone information. Furthermore, the bones in each image can be identified more accurately by using a trained computer model, and accurate three-dimensional images can be generated from them. In orthopaedics in particular, the solution of the invention helps doctors locate the actual positions of the acetabular cup prosthesis and the femoral stem prosthesis with maximum precision, helps surgeons evaluate the placement position of the prosthesis from all viewing angles in three dimensions, and enables accurate preoperative planning of the prosthesis model and position. After the prosthesis is placed, the pelvis or femur can be rotated through 360 degrees to evaluate the placement angle, and surgical parameters after placement, such as leg length and offset, can also be evaluated. This shortens the learning curve of young clinicians and reduces the incidence of complications such as prosthesis dislocation, prosthesis loosening, and pain.
It should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The invention also discloses:
a9 the image processing method according to a1, wherein the step of generating an annotated bone image with annotation data further comprises: and correcting each part in the labeled skeleton image to generate a corrected labeled skeleton image.
A10. The image processing method according to A9, wherein the correction processing includes correction processing for the pelvis and correction processing for both femurs.
A11. The image processing method according to A10, wherein, for correction processing of the pelvis, the step of performing correction processing on each part in the annotated bone image to generate a corrected annotated bone image comprises: determining three key points, namely the bilateral anterior superior iliac spines and the pubic symphysis; determining the pelvic tilt angle from the APP plane formed by the three key points and the angle between the APP plane and the y-axis; and making the line connecting the bilateral anterior superior iliac spines parallel to the x-axis, thereby generating a corrected pelvis image as the corrected annotated bone image.
A12. The image processing method according to A11, wherein, for correction processing of both femurs, the step of performing correction processing on each part in the annotated bone image to generate a corrected annotated bone image comprises: determining the femoral head center of gravity and the medullary cavity axis in the annotated bone image; forming a correction plane from the femoral head center of gravity and the medullary cavity axis; rotating the correction plane until it is parallel to the plane formed by the x-axis and z-axis directions; and making the line connecting the lowest points of the left and right posterior femoral condyles parallel to the plane formed by the x-axis and z-axis directions, to generate a corrected annotated bone image as the annotated bone image.
A13. The image processing method according to A12, wherein the step of generating an annotated bone image with annotation data is followed by: determining the model size of the prosthesis to be placed and the placement information of the prosthesis from the annotated bone image.
A14. The image processing method according to A13, wherein the step of determining the prosthesis to be placed and the placement information of the prosthesis from the annotated bone image comprises: determining the model and placement information of the acetabular cup prosthesis from the annotated bone image, wherein the placement information comprises a placement angle and a placement position; determining the femoral medullary cavity size from the annotations in the annotated bone image; and determining the model and placement position of the femoral stem prosthesis according to the size, wherein the placement comprises the placement angle and the placement position.
A15. The image processing method according to A14, wherein the step of determining the model and placement information of the acetabular cup prosthesis from the annotated bone image comprises: identifying the lunate surface of the acetabulum in the annotated bone image; and determining the rotation center, prosthesis model, and placement information of the acetabular cup prosthesis using a sphere fitted to the lunate surface.
A16. The image processing method according to A15, wherein the step of determining the model and placement position of the femoral stem prosthesis according to the size comprises: determining a preliminary position of the femoral stem prosthesis from the acetabular rotation center and the positions of the bilateral femoral lesser trochanters; and determining the model and placement position of the femoral stem prosthesis by fitting it to the femoral medullary cavity size at a plurality of different levels.
A17. The image processing method according to A16, wherein the step of determining the model size of the prosthesis to be placed and the placement information of the prosthesis from the annotated bone image further comprises: calculating evaluation information for the femoral stem prosthesis and the acetabular cup prosthesis according to evaluation factors.
A18. The image processing method according to A17, wherein the evaluation factors include one or more of: acetabular cup coverage, whether the two lower limbs are of equal length, and the offset on each side.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to perform the method of the present invention according to instructions in the program code stored in the memory.
By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media store information such as computer readable instructions, data structures, program modules, or other data. Communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of computer readable media.
Furthermore, some of the embodiments described herein are described as a method or a combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of apparatus for implementing the functions performed by those elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.

Claims (12)

1. A hip image processing system comprising:
an image processing module adapted to perform an image processing method for a hip joint to generate an annotated bone image with annotation data;
a prosthesis placement module adapted to determine a prosthesis to be placed and a location of the prosthesis;
a functional module adapted to obtain the determined prosthesis placement plan;
wherein the image processing module is adapted to:
acquiring a hip joint image including a hip joint;
inputting the hip joint image into a hip joint segmentation model component to obtain a three-dimensional bone model image including only bone;
generating an annotated bone image with annotation data by annotating the positions of key points in the three-dimensional bone model image, wherein the key points comprise one or more of anterior superior iliac spine, pubic symphysis, lesser trochanter, femoral head center of gravity, medullary cavity axis and acetabular axis;
performing correction processing on each part in the annotated bone image to generate a corrected annotated bone image, wherein the correction processing includes correction processing for the pelvis and correction processing for both femurs;
for correction processing of the pelvis, the step of performing correction processing on each part in the annotated bone image to generate a corrected annotated bone image comprises the following steps: determining three key points, namely the bilateral anterior superior iliac spines and the pubic symphysis; determining the pelvic tilt angle from the APP plane formed by the three key points and the angle between the APP plane and the y-axis; and making the line connecting the bilateral anterior superior iliac spines parallel to the x-axis to generate a corrected pelvis image as the corrected annotated bone image;
for correction processing of both femurs, the step of performing correction processing on each part in the annotated bone image to generate a corrected annotated bone image comprises the following steps: determining the femoral head center of gravity and the medullary cavity axis in the annotated bone image; forming a correction plane from the femoral head center of gravity and the medullary cavity axis; rotating the correction plane until it is parallel to the plane formed by the x-axis and z-axis directions; and making the line connecting the lowest points of the left and right posterior femoral condyles parallel to the plane formed by the x-axis and z-axis directions, to generate a corrected annotated bone image as the annotated bone image.
2. The hip image processing system according to claim 1, wherein the hip joint image is a medical image stored in DICOM format.
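
As a minimal sketch of the DICOM input of claim 2 (the patent does not name a library; SimpleITK is an assumption here), a DICOM series can be read into a single volume as follows:

import SimpleITK as sitk

def load_dicom_series(dicom_dir):
    # Collect the files of the series and read them as one 3-D image.
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
    image = reader.Execute()                # keeps spacing, origin and orientation
    volume = sitk.GetArrayFromImage(image)  # numpy array indexed (z, y, x)
    return image, volume
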
3. The hip image processing system of claim 2, wherein the image processing module is further adapted to:
inputting the hip joint images into the hip joint segmentation model component respectively, and obtaining two-dimensional images that include only bone, wherein each two-dimensional image contains the pelvis, the left femur and the right femur segmented from one another;
and stacking the two-dimensional images in space to generate the three-dimensional bone model image including only bone.
4. The hip image processing system according to claim 3, wherein the hip joint segmentation model component is obtained by machine learning based on the correspondence between a plurality of training hip joint images and a plurality of training bones respectively extracted from the plurality of training hip joint images.
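
Claim 4 only requires that the segmentation model component be learned from pairs of training hip joint images and the bones extracted from them; a conventional supervised training loop is one way to realize this. The sketch below assumes PyTorch, a 2-D segmentation network such as a U-Net, and a data loader yielding (image, bone-label mask) pairs; none of these choices come from the patent.

import torch
import torch.nn as nn

def train_segmentation_model(model, loader, epochs=10, lr=1e-4, device="cpu"):
    # model : any 2-D segmentation network (e.g. a U-Net); assumed, not specified.
    # loader: yields (hip image batch, bone label batch) training pairs.
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()  # classes: background, pelvis, left femur, right femur
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            logits = model(images)            # (N, num_classes, H, W)
            loss = criterion(logits, labels)  # labels: (N, H, W) integer class map
            loss.backward()
            optimizer.step()
    return model
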
5. The hip image processing system of claim 3, wherein the image processing module is further adapted to:
setting the left-right direction of the pelvis in each two-dimensional image as the x-axis direction, setting the CT scanning direction as the z-axis direction, and setting the direction perpendicular to the plane formed by the x-axis and the z-axis as the y-axis direction;
and stacking the two-dimensional images in space along the z-axis direction to generate the three-dimensional bone model image.
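
Claims 3 and 5 describe running the segmentation slice by slice and stacking the per-slice results along the CT scan (z) direction. A minimal numpy sketch, assuming a hypothetical segment_slice callable that wraps the trained model and returns a label mask per slice (0 = background, 1 = pelvis, 2 = left femur, 3 = right femur):

import numpy as np

def build_bone_model(ct_slices, segment_slice):
    # ct_slices are ordered along the z-axis (the CT scanning direction);
    # within each slice, rows and columns follow the y and x directions of claim 5.
    masks = [segment_slice(s) for s in ct_slices]  # one 2-D label mask per slice
    volume = np.stack(masks, axis=0)               # stack along the z-axis
    return volume                                  # shape: (num_slices, height, width)
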
6. The hip image processing system of claim 1, wherein the image processing module is further adapted to:
identifying the positions of preset key points in the three-dimensional bone model image;
and measuring and annotating the positions of the key points in the three-dimensional bone model image to generate the annotated bone image with annotation data.
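
Purely as an illustration of the "measure and annotate" step of claim 6 (the coordinates and derived measurements below are invented, and the patent does not prescribe this data structure), key points can be kept in a dictionary and simple annotation data derived from them:

import numpy as np

# Hypothetical key-point coordinates (millimetres) in the corrected frame.
key_points = {
    "asis_left":          np.array([-110.0, 35.0, 420.0]),
    "asis_right":         np.array([ 115.0, 33.0, 421.0]),
    "pubic_symphysis":    np.array([   2.0, 60.0, 350.0]),
    "femoral_head_left":  np.array([ -80.0, 20.0, 380.0]),
    "femoral_head_right": np.array([  84.0, 21.0, 379.0]),
}

def annotate(points):
    # Derive a few example measurements to attach to the bone model image.
    dist = lambda a, b: float(np.linalg.norm(points[a] - points[b]))
    return {
        "inter_asis_distance_mm": dist("asis_left", "asis_right"),
        "head_centre_height_difference_mm": float(
            points["femoral_head_left"][2] - points["femoral_head_right"][2]),
    }

annotation_data = annotate(key_points)
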
7. The hip image processing system according to claim 6,
the prosthesis placement module is further adapted to determine a size of a prosthesis model to be placed and placement information of the prosthesis based on the annotated bone image.
8. The hip image processing system of claim 7, wherein the prosthesis placement module is further adapted to:
determining the model and the placement information of the acetabular cup prosthesis according to the annotated bone image, wherein the placement information comprises a placement angle and a placement position;
determining the size of the femoral medullary cavity according to the annotations in the annotated bone image;
and determining the model and the placement information of the femoral stem prosthesis according to the size, wherein the placement information comprises a placement angle and a placement position.
9. The hip image processing system of claim 8, wherein the prosthesis placement module is further adapted to:
identifying the lunate surface of the acetabulum in the annotated bone image;
and determining the rotation center, the prosthesis model and the placement information of the acetabular cup prosthesis using a sphere fitted to the lunate surface.
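
One standard way to realize the sphere fit of claim 9 is a linear least-squares fit to points sampled on the lunate surface; the sphere centre then approximates the rotation centre and the radius guides the acetabular cup size. This is a generic technique offered as a sketch, not the patent's specific procedure.

import numpy as np

def fit_sphere(points):
    # points: (N, 3) array of coordinates sampled on the lunate surface.
    # Solve |p - c|^2 = r^2, rewritten as 2*p.c + (r^2 - |c|^2) = |p|^2,
    # which is linear in the unknowns (cx, cy, cz, r^2 - |c|^2).
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = np.sum(P * P, axis=1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = w[:3]                                   # candidate rotation centre
    radius = float(np.sqrt(w[3] + centre @ centre))  # candidate cup radius
    return centre, radius
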
10. The hip image processing system of claim 9, wherein the prosthesis placement module is further adapted to:
determining a preliminary position of the femoral stem prosthesis from the acetabular center of rotation and the positions of bilateral femoral lesser trochanters;
and determining the model and the placement position of the femoral stem prosthesis by fitting to the size of the femoral medullary cavity at a plurality of different slice levels.
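
Claims 8 and 10 size the femoral stem by comparing it with the medullary cavity at several slice levels. The sketch below measures a crude medio-lateral canal width per slice and picks the largest stem from a hypothetical catalogue that still fits at every level; the catalogue values and the clearance margin are invented for illustration and real templating is considerably more involved.

import numpy as np

# Hypothetical stem catalogue: model name -> stem width (mm) at each measured level.
STEM_CATALOGUE = {
    "stem_size_1": np.array([18.0, 15.0, 12.0, 10.0]),
    "stem_size_2": np.array([20.0, 17.0, 13.5, 11.0]),
    "stem_size_3": np.array([22.0, 19.0, 15.0, 12.5]),
}

def canal_width_mm(canal_mask, pixel_spacing_mm):
    # Medio-lateral extent of the medullary-cavity mask on one slice.
    cols = np.where(canal_mask.any(axis=0))[0]
    return float((cols[-1] - cols[0] + 1) * pixel_spacing_mm) if cols.size else 0.0

def select_stem(canal_widths_mm, clearance_mm=0.5):
    # canal_widths_mm: canal width at the same levels as the catalogue entries.
    canal = np.asarray(canal_widths_mm, dtype=float)
    chosen = None
    for model, stem_widths in STEM_CATALOGUE.items():  # ordered small -> large
        if np.all(stem_widths + clearance_mm <= canal):
            chosen = model                             # keep the largest that fits
    return chosen
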
11. The hip image processing system according to claim 10,
the prosthesis placement module is further adapted to evaluate the femoral stem prosthesis and the acetabular cup prosthesis according to evaluation factors to generate evaluation information.
12. The hip image processing system of claim 11, wherein the evaluation factors include one or more of: acetabular cup coverage, whether the two lower limbs are equal in length, and lateral offset.
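
To make claim 12 concrete, the three evaluation factors could be computed as below; the inputs are assumed to come from the planned prosthesis position and the annotated bone image, and the 5 mm leg-length tolerance is an illustrative threshold, not a value taken from the patent.

def evaluate_plan(cup_covered_area, cup_total_area,
                  leg_length_left_mm, leg_length_right_mm,
                  planned_offset_mm, native_offset_mm):
    # Acetabular cup coverage: fraction of the cup surface covered by bone.
    coverage = cup_covered_area / cup_total_area
    # Leg-length equality: compare the reconstructed lengths of both lower limbs.
    leg_length_difference_mm = leg_length_left_mm - leg_length_right_mm
    # Lateral offset: change relative to the native side.
    offset_change_mm = planned_offset_mm - native_offset_mm
    return {
        "acetabular_cup_coverage": coverage,
        "lower_limbs_equal_length": abs(leg_length_difference_mm) <= 5.0,
        "lateral_offset_change_mm": offset_change_mm,
    }
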
CN202010090208.2A 2020-02-13 2020-02-13 Hip joint image processing system Active CN111179350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010090208.2A CN111179350B (en) 2020-02-13 2020-02-13 Hip joint image processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010090208.2A CN111179350B (en) 2020-02-13 2020-02-13 Hip joint image processing system

Publications (2)

Publication Number Publication Date
CN111179350A CN111179350A (en) 2020-05-19
CN111179350B (en) 2022-04-08

Family

ID=70649674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010090208.2A Active CN111179350B (en) 2020-02-13 2020-02-13 Hip joint image processing system

Country Status (1)

Country Link
CN (1) CN111179350B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111568609B (en) * 2020-05-25 2021-09-28 北京长木谷医疗科技有限公司 Method and device for obtaining coverage rate of acetabular cup prosthesis
CN111652301B (en) * 2020-05-27 2021-03-02 北京长木谷医疗科技有限公司 Femoral lesser trochanter identification method and device based on deep learning and electronic equipment
CN111888059B (en) * 2020-07-06 2021-07-27 北京长木谷医疗科技有限公司 Full hip joint image processing method and device based on deep learning and X-ray
CN112641511B (en) * 2020-12-18 2021-09-10 北京长木谷医疗科技有限公司 Joint replacement surgery navigation system and method
CN112641510B (en) 2020-12-18 2021-08-17 北京长木谷医疗科技有限公司 Joint replacement surgical robot navigation positioning system and method
CN112957126B (en) * 2021-02-10 2022-02-08 北京长木谷医疗科技有限公司 Deep learning-based unicondylar replacement preoperative planning method and related equipment
CN112971981B (en) * 2021-03-02 2022-02-08 北京长木谷医疗科技有限公司 Deep learning-based total hip joint image processing method and equipment
CN113674841A (en) * 2021-08-23 2021-11-19 东成西就教育科技有限公司 Template measuring system for preoperative image
CN113744214B (en) * 2021-08-24 2022-05-13 北京长木谷医疗科技有限公司 Femoral stem placing device based on deep reinforcement learning and electronic equipment
CN113689402B (en) * 2021-08-24 2022-04-12 北京长木谷医疗科技有限公司 Deep learning-based femoral medullary cavity form identification method, device and storage medium
CN113962927B (en) * 2021-09-01 2022-07-12 北京长木谷医疗科技有限公司 Acetabulum cup position adjusting method and device based on reinforcement learning and storage medium
CN113842211B (en) * 2021-09-03 2022-10-21 北京长木谷医疗科技有限公司 Three-dimensional preoperative planning system for knee joint replacement and prosthesis model matching method
CN113926208B (en) * 2021-10-11 2023-08-22 网易(杭州)网络有限公司 Method and device for generating movable doll model, electronic equipment and readable medium
CN114299177B (en) * 2021-12-24 2022-09-09 武汉迈瑞科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114494183B (en) * 2022-01-25 2024-04-02 哈尔滨医科大学附属第一医院 Automatic acetabular radius measurement method and system based on artificial intelligence
CN114419618B (en) * 2022-01-27 2024-02-02 北京长木谷医疗科技股份有限公司 Total hip replacement preoperative planning system based on deep learning
CN114663363B (en) * 2022-03-03 2023-11-17 四川大学 Deep learning-based hip joint medical image processing method and device
CN116597002B (en) * 2023-05-12 2024-01-30 北京长木谷医疗科技股份有限公司 Automatic femoral stem placement method, device and equipment based on deep reinforcement learning

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859213A (en) * 2019-01-28 2019-06-07 艾瑞迈迪科技石家庄有限公司 Bone critical point detection method and device in joint replacement surgery

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7634306B2 (en) * 2002-02-13 2009-12-15 Kinamed, Inc. Non-image, computer assisted navigation system for joint replacement surgery with modular implant system
CN102125472B (en) * 2011-04-08 2013-04-17 上海交通大学医学院附属第九人民医院 Acetabular prosthesis of artificial hip joint with rotational ellipsoid joint interface
US9167989B2 (en) * 2011-09-16 2015-10-27 Mako Surgical Corp. Systems and methods for measuring parameters in joint replacement surgery
US10779751B2 (en) * 2013-01-25 2020-09-22 Medtronic Navigation, Inc. System and process of utilizing image data to place a member
US10869724B2 (en) * 2017-03-09 2020-12-22 Smith & Nephew, Inc. Sagittal rotation determination
CN107296651A (en) * 2017-06-21 2017-10-27 四川大学 It is a kind of to digitize the method that auxiliary determines distal femur Osteotomy
CN108765417B (en) * 2018-06-15 2021-11-05 西安邮电大学 Femur X-ray film generating system and method based on deep learning and digital reconstruction radiographic image
CN109567942B (en) * 2018-10-31 2020-04-14 上海盼研机器人科技有限公司 Craniomaxillofacial surgical robot auxiliary system adopting artificial intelligence technology
CN110648337A (en) * 2019-09-23 2020-01-03 武汉联影医疗科技有限公司 Hip joint segmentation method, hip joint segmentation device, electronic apparatus, and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859213A (en) * 2019-01-28 2019-06-07 艾瑞迈迪科技石家庄有限公司 Bone critical point detection method and device in joint replacement surgery

Also Published As

Publication number Publication date
CN111179350A (en) 2020-05-19

Similar Documents

Publication Publication Date Title
CN111179350B (en) Hip joint image processing system
US11045329B1 (en) Acetabular template component and method of using same during hip arthrosplasty
Cabarcas et al. Accuracy of patient-specific instrumentation in shoulder arthroplasty: a systematic review and meta-analysis
US20230414287A1 (en) Systems and methods for preoperative planning and postoperative analysis of surgical procedures
US9508149B2 (en) Virtual 3D overlay as reduction aid for complex fractures
CN111223146B (en) Processing method and computing device for hip joint image
CN111292363B (en) Joint image processing method and device and computing equipment
CN107343817B (en) Computer-aided design orthopedic osteotomy and orthopedic fixation integrated guide plate and manufacturing method thereof
US20160117817A1 (en) Method of planning, preparing, supporting, monitoring and/or subsequently checking a surgical intervention in the human or animal body, apparatus for carrying out such an intervention and use of the apparatus
US20050059873A1 (en) Pre-operative medical planning system and method for use thereof
US20100030231A1 (en) Surgical system and method
US8644909B2 (en) Radiographic imaging method and apparatus
JP2003144454A (en) Joint operation support information computing method, joint operation support information computing program, and joint operation support information computing system
JP2016532475A (en) Method for optimal visualization of bone morphological regions of interest in X-ray images
US11464569B2 (en) Systems and methods for pre-operative visualization of a joint
CN107106239A (en) Surgery is planned and method
WO2022152128A1 (en) Guide plate design method for total hip replacement and related device
US20220183760A1 (en) Systems and methods for generating a three-dimensional model of a joint from two-dimensional images
CA3184178A1 (en) Intraoperative imaging and virtual modeling methods, systems, and instrumentalities for fracture reduction
Gomes et al. Patient-specific modelling in orthopedics: from image to surgery
Shapi'i et al. An automated size recognition technique for acetabular implant in total hip replacement
Murase Morphology and kinematics studies of the upper extremity and its clinical application in deformity correction
Barratt et al. Self-calibrating ultrasound-to-CT bone registration
US20230263498A1 (en) System and methods for calibration of x-ray images
US11957418B2 (en) Systems and methods for pre-operative visualization of a joint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhang Yiling

Inventor after: Liu Xingyu

Inventor before: Zhang Yiling

Inventor before: Chai Wei

Inventor before: Liu Xingyu

Inventor before: An Yicheng

GR01 Patent grant
CP03 Change of name, title or address

Address after: 1109, SOHO building, Zhongguancun, No. 8, Haidian North 2nd Street, Haidian District, Beijing 100190

Patentee after: Zhang Yiling

Patentee after: Beijing Changmugu Medical Technology Co.,Ltd.

Patentee after: Changmugu medical technology (Qingdao) Co.,Ltd.

Address before: 1109, Zhongguancun SOHO Building, No. 8 Haidian North 2nd Street, Haidian District, Beijing, China, 100190

Patentee before: Zhang Yiling

Patentee before: BEIJING CHANGMUGU MEDICAL TECHNOLOGY Co.,Ltd.

Patentee before: Changmugu medical technology (Qingdao) Co.,Ltd.