US20190365471A1 - Computer-aided prosthesis alignment - Google Patents
Computer-aided prosthesis alignment
- Publication number: US20190365471A1
- Application number: US 16/540,850
- Authority: US (United States)
- Prior art keywords
- prosthesis
- target bone
- representation
- model
- articulation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16B—BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
- G16B5/00—ICT specially adapted for modelling or simulations in systems biology, e.g. gene-regulatory networks, protein interaction networks or metabolic networks
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/102—Modelling of surgical devices, implants or prosthesis
- A61B2034/104—Modelling the effect of the tool, e.g. the effect of an implanted prosthesis or for predicting the effect of ablation or burring
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/25—User interfaces for surgical systems
Definitions
- This document relates generally to computer-aided orthopedic surgery, and more specifically to systems and methods for computer-aided alignment and positioning of a prosthesis onto a bone surface.
- Hip resurfacing arthroplasty is a joint replacement procedure where a portion of the joint tissue, such as an articulation surface, is replaced by a resurfacing prosthesis.
- Joint resurfacing arthroplasty has been found to be a viable alternative to traditional total joint replacement, such as total knee or total hip replacement, in certain patients, particularly younger and active patients with relatively strong bones.
- hip resurfacing arthroplasty involves replacing worn cartilage and damaged bone tissue in the acetabulum with a cup-shaped prosthesis liner.
- the femoral head can be trimmed and reshaped, and a femoral prosthesis cap can be permanently affixed to the trimmed femoral head.
- the prosthesis capped femoral head and the acetabular cup can then be reconnected to restore the function of the hip joint.
- Proper positioning of joint resurfacing components can be crucial to the outcome of prosthesis implantation procedures such as joint resurfacing arthroplasty.
- the acetabular cup in hip resurfacing arthroplasty needs to be properly positioned onto and aligned with the resurfaced acetabulum before the acetabular cup is pressure-fit into the resurfaced acetabulum.
- the femoral cap must be properly aligned to the trimmed host femoral head before the femoral cap can be cemented into the trimmed femoral head.
- Proper alignment and positioning of resurfacing components onto the respective host bones allows the desirable range and flexibility of joint movement.
- Positioning of a prosthesis onto a resurfaced target host bone usually requires a surgeon to mentally map and compare the shape, orientation, and relative positions of the prosthesis components and the target bones.
- Computer-aided tools can be used to assist the surgeon in better aligning the prosthesis components against the target bones.
- These methods and tools can be difficult to operate and may suffer from lack of reliability and certainty.
- some surgical areas can have reduced visualization and access, particularly during minimally invasive surgery and arthroscopic techniques. Identifying the misaligned regions on the prosthesis components and/or on the host target bones can be problematic and imprecise due to the interference of surrounding tissues within the surgical area. Determining and visualizing the correct positions and orientations of the prosthesis with respect to the target bone can be practically difficult. Therefore, the present inventors have recognized that there remains a considerable need for systems and methods that can assist the surgeon in reliably positioning the prosthesis onto the target bone with improved accuracy and consistency.
- a prosthesis positioning and alignment system can include a processor unit and a user interface unit.
- the processor unit can receive a target bone model including a first data set representing a target bone surface, and a prosthesis model including a second data set representing a prosthesis surface.
- the prosthesis is configured to at least partially replace an articulation surface of the target bone.
- the processor unit can generate an articulation interface representation that indicates spatial misalignment between one or more portions of the prosthesis surface and one or more portions of the target bone surface when the prosthesis model is positioned against the target bone model.
- the user interface can include a user input module that receives an indication of a change in position of the target bone model or the prosthesis model.
- the user interface also includes a display module that can display one or more of the target bone model, the prosthesis model, and the articulation interface representation.
- a method embodiment for aligning a prosthesis surface to a target bone surface can comprise the operations of receiving a target bone model which includes a data set representing a target bone surface, a prosthesis model which includes a data set representing a prosthesis surface, and an indication of a position of the target bone model relative to a position of the prosthesis model.
- the method also comprises generating an articulation interface representation, which is indicative of one or more portions of the prosthesis surface being spatially misaligned with one or more portions of the target bone surface when the two models are positioned against each other at certain locations and orientations.
- One or more of the target bone model, the prosthesis model, and the articulation interface representation can be displayed on a display module to provide feedback to a system user such as a surgeon and assist the system user in properly positioning the prosthesis on the target bone.
- a machine-readable storage medium embodiment of the present document can include instructions that, when executed by a machine, cause the machine to receive a target bone model including a first data set representing a target bone surface, a prosthesis model including a second data set representing a prosthesis surface, and an indication of a position of the target bone model relative to a position of the prosthesis model.
- the machine can be caused to generate an articulation interface representation indicative of one or more portions of the prosthesis surface being spatially misaligned with respective one or more portions of the target bone surface when the two models are positioned against each other at certain positions and orientations.
- the instructions can also cause the machine to display one or more of the target bone model, the prosthesis model, and the articulation interface representation on a display module to provide feedback to a system user.
- FIG. 1 is a block diagram that illustrates an example of a prosthesis alignment and positioning system.
- FIG. 2 is a block diagram that illustrates an example of an articulation interface generator in a prosthesis alignment and positioning system.
- FIGS. 3A-D illustrate examples of articulation interface representations between a femoral surface and a prosthesis surface at one position of the prosthesis.
- FIGS. 4A-D illustrate examples of articulation interface representations between a femoral surface and a prosthesis surface at another position of the prosthesis.
- FIG. 5 is a flowchart that illustrates an example of a method for aligning a prosthesis surface to a target bone surface.
- FIG. 6 is a flowchart that illustrates an example of a method for generating an articulation interface representation.
- FIG. 7 is a block diagram that illustrates an example of a computer system within which instructions for causing the computer system to perform prosthesis alignment may be executed.
- Disclosed herein are systems, devices and methods for computer-aided positioning and alignment of a prosthesis component onto a target bone.
- Various embodiments described herein can help improve the efficacy and the reliability in osteoplasty planning, such as in an orthopedic implant surgery.
- the methods and devices described herein can also be applicable to planning surgery of pathological bones under various other conditions.
- FIG. 1 is a block diagram that illustrates an example of a prosthesis alignment system 100 for use in an orthopedic surgery on a target bone.
- the system 100 includes a processor unit 110 and a user interface unit 120 .
- the system 100 can be configured to generate, and present to a system user, a representation of an articulation interface between a prosthesis component and a target bone shaped to host the prosthesis component.
- the system 100 can also provide the system user with information including degree of alignment between the prosthesis and the target bone.
- the processor unit 110 can include a user input receiver 111 , a model receiver module 112 , an articulation interface generator 113 , and an alignment index calculator 114 .
- the user input receiver 111 can receive a target bone model such as a first data set representing a target bone surface.
- the target bone surface can be an articulation surface of the target bone, such as an acetabular surface, a surface of a proximal or distal extremity of a femur, a surface of a proximal or distal extremity of a tibia, or a surface of any other bone in the body.
- the target bone model can include a medical image, a point cloud, a parametric model, or other morphological description of the target bone.
- the medical images can include two-dimensional (2D) or three-dimensional (3D) images.
- the medical images include an X-ray, an ultrasound image, a computed tomography (CT) scan, a magnetic resonance (MR) image, a positron emission tomography (PET) image, a single-photon emission computed tomography (SPECT) image, or an arthrogram.
- the target bone model can include shape data, appearance data, or data representing other morphological characteristics of the target bone surface.
- the shape data may include geometric characteristics of a bone, such as landmarks, surfaces, or boundaries of 3D image objects.
- the appearance data may include both geometric characteristics and intensity information of a bone.
- the user input receiver 111 can be coupled to the user interface unit 120 , such as via a user input module 121 , to receive the target bone model.
- the user input module 121 can receive the target bone model from a patient database.
- the user input module 121 can alternatively be coupled to an imaging system or other image acquisition module within or external to the system 100 .
- the imaging system or the image acquisition module can feed the target bone model (e.g., one or more images or point clouds) to the system 100 via the user input module 121.
- the model receiver module 112 can receive a prosthesis model such as a second data set representing a prosthesis surface.
- the prosthesis is configured to at least partially replace the articulation surface of the target bone when the prosthesis surface is aligned with the articulation surface.
- the prosthesis model can include information such as shape or appearance of the prosthesis surface.
- the prosthesis model can be in the form of a parametric model, a statistical model, a shape-based model, a volumetric model, an elastic model, a geometric spline model, or a finite element model.
- the target bone model received from the user input module 121 can have a data format or modality comparable to that of the prosthesis model received from the model receiver module 112.
- the user input module 121 receives a 3D graphical representation such as a medical image of the surface of a femoral head.
- the model receiver module 112 receives a 3D graphical representation such as a computer-simulated image of the prosthesis surface.
- the articulation interface generator 113, coupled to the model receiver module 112 and the user input receiver 111, is configured to generate an articulation interface representation using both the target bone surface representation and the prosthesis surface representation.
- the articulation interface representation can be indicative of one or more portions of the prosthesis surface being spatially misaligned with one or more portions of the target bone surface when the two models are positioned against each other in specified positions.
- the articulation interface representation can include a color-coded representation, an annotative representation, or other formats of representations or overlays of two or more different representations.
- the articulation interface representation provides feedback to the system user, such as a surgeon, and assists the system user in properly positioning the prosthesis on the target bone. Examples of the articulation interface generator 113 are discussed below, such as with reference to FIG. 2.
- the alignment index calculator 114 can be configured to calculate an alignment index using the articulation interface representation.
- the alignment index can be a measure of an overall disconformity between the target bone surface and the prosthesis surface.
- the alignment index L can be computed as the square root of the sum of squared regional similarity measures among all segments, that is: L = √( Σ_i D(X(i), Y(i))² ).
- the alignment index can also include statistics of disconformity measurements among various regions of the articulation interface representation.
- the alignment index can include maximum, minimum, average, median, range, histogram, or spatial distributions of the disconformities among multiple regions of the target bone surface and the prosthesis surface.
- the alignment index calculator 114 can further determine that a desirable alignment between the prosthesis surface and the target bone surface has been achieved when the alignment index meets a specified criterion.
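The square-root-of-sum-of-squares alignment index and the acceptance check performed by the alignment index calculator 114 can be sketched as follows. This is a minimal illustration only; the function names and the threshold-based acceptance criterion are assumptions, since the patent leaves the specific criterion open.

```python
import math

def alignment_index(regional_similarities):
    """Overall alignment index L: the square root of the sum of squared
    regional similarity measures D(X(i), Y(i)) over all segments.
    `regional_similarities` is a hypothetical list of per-segment values."""
    return math.sqrt(sum(d * d for d in regional_similarities))

def is_aligned(regional_similarities, threshold=1.0):
    # Illustrative acceptance criterion: a desirable alignment is deemed
    # achieved when the alignment index L falls below a specified threshold.
    return alignment_index(regional_similarities) < threshold
```

With per-segment measures of 3.0 and 4.0, the index is 5.0, so a threshold of 1.0 would reject the alignment.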
- the user interface unit 120 can include a user input module 121 and a display module 122 .
- the user input module 121 can receive a target bone model including a representation of the target bone surface and provide the target bone model to the processor unit 110 through the user input receiver 111 .
- the user input module 121 can be communicatively coupled to an external module for generating or storing the target bone model.
- the user input module 121 can include one or more pointing devices, such as a mouse, a trackball, or a touch screen, that are connected to the user interface unit 120 .
- the pointing device enables system users to issue user commands to the system 100 .
- the user input module 121 can receive a user command that selectively alters one or more properties of the target bone model or one or more properties of the prosthesis model. Examples of user commands include translating, rotating, reflecting, scaling, stretching, shrinking, or performing any other manipulations, or combinations of manipulations, over one or both of the target bone model and the prosthesis model.
- the user command can also include a plurality of specified angles of views or projections of the target bone model, the prosthesis model, or the articulation interface representation.
- the user command can further include selection, deselection, change in position, or change in orientation of either or both of the target bone model and the prosthesis model.
- the user input module 121 can receive a change of position of the selected model relative to the other model.
- the user input module 121 can receive a concurrent change of positions of both models with preserved relative position between the two models.
- the processor unit 110 can update the presentations of the target bone model, the prosthesis model, or the articulation interface representation.
- the articulation interface generator 113 can re-generate an articulation interface representation if there is a change of relative positions or orientations between the target bone surface representation and the prosthesis surface representation.
- the articulation interface generator 113 can perform spatial transformation of the articulation interface representation in response to user command of translating or rotating a model, or projecting a model from a different specified angle.
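The kind of spatial transformation applied in response to a user rotate command can be sketched minimally as below. The rotation axis and the point-list representation are illustrative assumptions; the patent does not prescribe an implementation.

```python
import math

def rotate_z(points, angle_rad):
    """Rigid rotation of a model's surface points about the z-axis, as might
    be applied when the user rotates the prosthesis or target bone model."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [[c * x - s * y, s * x + c * y, z] for x, y, z in points]
```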
- the user input module 121 can receive a command of animating an image sequence of the target bone model, the prosthesis model, or the articulation interface representation.
- the command of animation can include parameters controlling the speed (e.g., frame rate), direction, and effects of motion of one or both of the models.
- the animation can show the relative positions or orientations of one model (e.g., the target bone model) with respect to the other model (e.g., the prosthesis model).
- the display module 122, coupled to the user input module 121 and the processor unit 110, can be configured to display the target bone model, the prosthesis model, and the articulation interface representation.
- the display module 122 can generate an articulation interface representation overlaid with one or both of the target bone model and the prosthesis model. Examples of the display module 122 are discussed below, such as with reference to FIGS. 3A-D and FIGS. 4A-D.
- FIG. 2 is a block diagram that illustrates an example of an articulation interface generator 113 .
- the articulation interface generator 113 can include a misalignment calculator 210 and an articulation interface representation generator 220 .
- the misalignment calculator 210, coupled to the user input receiver 111 and the model receiver module 112, can calculate a regional misalignment measure (D) between the target bone surface representation (X) and the prosthesis surface representation (Y) when the prosthesis model is positioned against the target bone model at a specified position.
- the misalignment calculator 210 can include a segmentation module 211 and a regional similarity calculation module 212 .
- the correspondence between segments X(i) and Y(i) can be established based on their similar locations or morphologies on their respective surface representations.
- the segments ⁇ X(i) ⁇ and ⁇ Y(i) ⁇ can be created with pre-determined size and shape irrespective of the anatomical or morphologic structures of the target bone surface or the prosthesis surface.
- the segments can be sized to contain a specified number of pixels of the respective digital images.
- the size of the segments can determine the spatial resolution of the misalignment between the two surface representations X and Y: the smaller the segments, the more segments are created on X and Y, and therefore the finer the spatial resolution of the misalignment between X and Y.
- the segments ⁇ X(i) ⁇ and ⁇ Y(i) ⁇ can be created using information including anatomical or morphologic structures of the target bone surface X or the prosthesis surface Y.
- the segments thus created can have non-uniform sizes or shapes, and the spatial resolution of the misalignment can therefore be non-uniform as well.
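The uniform-segmentation case described above can be illustrated with a small sketch that bins a surface point cloud into grid cells of a fixed size; the grid-over-(x, y) scheme is an assumption for illustration, as the patent does not prescribe a particular segmentation method.

```python
def segment_surface(points, cell_size):
    """Partition a surface point cloud into segments on a uniform 2D grid
    over the (x, y) plane. The cell size controls the spatial resolution of
    the misalignment: smaller cells yield more segments and finer resolution."""
    cells = {}
    for p in points:
        key = (int(p[0] // cell_size), int(p[1] // cell_size))
        cells.setdefault(key, []).append(p)
    return cells
```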
- the regional similarity calculation module 212 can calculate regional similarity measure D(X(i), Y(i)).
- D(X(i), Y(i)) is a quantitative measure of the regional misalignment in shapes, morphologies, or topologies between the segments X(i) and Y(i).
- D(X(i), Y(i)) can be computed using features extracted from the segments X(i) and Y(i). Examples of features can include a location such as coordinates in a coordinate system, an orientation, a curvature, a contour, a shape, an area, a volume, or other geometric or volumetric parameters.
- the features can also include one or more intensity-based parameters.
- the features can be extracted in the space domain, frequency domain, or space-frequency domain.
- the features may include statistical measurements derived from the geometric or intensity-based parameters, such as the mean, median, mode, variance, covariance, and other second or higher order statistics.
- depending on the extracted features, different regional similarity measures D(X(i), Y(i)) can be used.
- the extracted features can be coordinates of X(i) and Y(i) in a common coordinate system.
- the regional similarity measure D(X(i), Y(i)) can be computed as an L1 norm, an L2 norm (Euclidean distance), an infinity norm, or another distance measurement in the normed vector space.
- the regional similarity measure D(X(i), Y(i)) can also be computed as a signed distance between X(i) and Y(i) along a particular direction in the coordinate system.
- the sign of D(X(i),Y(i)) can indicate the relative position of Y(i) with respect to X(i). For example, the distance is positive if Y(i) is above X(i) by at least a specified amount, or negative if Y(i) is below X(i) by at least a specified amount.
- when the extracted features are intensity-based features, the similarity measure D(X(i), Y(i)) can be computed as a correlation coefficient, mutual information, or ratio image uniformity.
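The signed-distance variant of the regional similarity measure can be sketched as follows. Comparing segment centroids along a fixed direction is an assumption made for this illustration; other distance formulations (L1, L2, infinity norm) would substitute directly.

```python
def signed_distance(xi, yi, direction=(0.0, 0.0, 1.0)):
    """Signed distance between segment X(i) and segment Y(i) along a chosen
    direction (the z-axis by default). A positive value indicates the
    prosthesis segment Y(i) lies above the bone segment X(i); a negative
    value indicates it lies below."""
    def centroid(seg):
        n = len(seg)
        return [sum(p[k] for p in seg) / n for k in range(3)]
    cx, cy = centroid(xi), centroid(yi)
    return sum((cy[k] - cx[k]) * direction[k] for k in range(3))
```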
- the articulation interface representation generator 220 can generate an articulation interface representation (S) using the regional similarity measure D(X(i), Y(i)).
- the articulation interface representation (S) can be a computer-generated 2D or 3D image with a size comparable to the size of the target bone surface or the size of the prosthesis surface.
- the articulation interface representation (S) can be either one or a combination of a color-coded representation 221 or an annotative representation 222 .
- a specified color C(i) can be rendered to segment S(i) of the articulation interface representation according to the similarity measure D(X(i), Y(i)), such as when D(X(i), Y(i)) exceeds a specified threshold or falls within a specified range.
- the color-coded representation 221 can include at least a first color C_a and a different second color C_b.
- C_a and C_b can be two colors having different hues, such as green and red.
- the color C_a can be used to denote a misalignment in a first direction between a portion of the prosthesis surface and a corresponding portion of the target bone surface.
- the color C_b can be used to denote a misalignment in a different second direction between a portion of the prosthesis surface and a corresponding portion of the target bone surface.
- C_a is rendered to segment S(i) if D(X(i), Y(i)) is positive, which indicates that the segment of the prosthesis surface Y(i) is above the corresponding segment of the target bone surface X(i) by at least a specified amount; and C_b is rendered to segment S(j) if D(X(j), Y(j)) is negative, which indicates that the segment of the prosthesis surface Y(j) is below the corresponding segment of the target bone surface X(j) by at least a specified amount.
- the color-coded representation 221 can additionally include a third color C_0 different from the first color C_a and the second color C_b, such as a color having a different hue than C_a or C_b.
- C_0 can be rendered to a segment S(k) of the articulation interface representation if the segment of the prosthesis surface Y(k) is within a specified range relative to, and not substantially misaligned with, the segment of the target bone surface X(k). For example, when |D(X(k), Y(k))| is smaller than a specified threshold, the segments X(k) and Y(k) are regarded as substantially aligned to each other, and the color C_0 can be applied to the articulation interface segment S(k).
- the color-coded representation 221 can include a first set of colors {C_a^1, C_a^2, …, C_a^P} and a different second set of colors {C_b^1, C_b^2, …, C_b^Q}.
- the first set {C_a^1, C_a^2, …, C_a^P} can differ from the second set {C_b^1, C_b^2, …, C_b^Q} by some easily identifiable characteristic such as hue.
- the colors within a set can share a common and easily identifiable characteristic, such as having the same hue, but differ from each other in at least one color parameter such as saturation or brightness.
- the first set can include green colors with different saturation or brightness {G_1, G_2, …, G_P};
- the second set can include red colors with different saturation or brightness {R_1, R_2, …, R_Q}.
- the multiple colors in a color set can be used to differentiate various degrees of misalignment in a particular direction.
- each color in the first set, such as C_a^p, indicates Y(i) being misaligned in a first direction with X(i) by a specified amount.
- each color in the second set, such as C_b^q, indicates Y(i) being misaligned in a second direction with X(i) by a specified amount.
- the degree of misalignment can be represented by the magnitude of D(X(i), Y(i)), that is, |D(X(i), Y(i))|.
- the color C_a^p is rendered to segment S(i) if D(X(i), Y(i)) is positive and |D(X(i), Y(i))| is within a first specified range.
- the color C_b^q is rendered to segment S(j) if D(X(j), Y(j)) is negative and |D(X(j), Y(j))| is within a second specified range.
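The banded color selection described in the bullets above can be sketched as a mapping from D(X(i), Y(i)) to a display color. The tolerance, band edges, and color names here are arbitrary illustrative choices, not values from the patent.

```python
def color_for_segment(d, tol=0.5, band_edges=(1.0, 2.0)):
    """Map a regional similarity measure D to a display color: near-zero |D|
    (within `tol`) means substantially aligned; positive D (prosthesis above
    bone) maps to the green set, negative D to the red set, with the shade
    index indicating the magnitude band."""
    mag = abs(d)
    if mag <= tol:
        return "neutral"
    shade = sum(1 for edge in band_edges if mag > edge)  # 0, 1, or 2
    family = "green" if d > 0 else "red"
    return f"{family}-{shade}"
```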
- a specified annotation A(i) can be applied to a segment of the articulation interface representation, S(i), according to the similarity measure D(X(i), Y(i)).
- A(i) can be in one or a combination of different forms including signs, labels, lines, texts, or any other markings.
- a first annotation A a can be applied therein if a portion of the prosthesis surface misaligns in a first direction with a corresponding portion of the target bone surface, or a second different annotation A b can be applied if a portion of the prosthesis surface misaligns in a second direction with a corresponding portion of the target bone surface.
- A_a can be applied to segment S(i) if D(X(i), Y(i)) is positive, which indicates that the prosthesis surface Y(i) is above the corresponding segment of the target bone surface X(i) by at least a specified amount.
- A_b can be applied to segment S(j) if D(X(j), Y(j)) is negative, which indicates that the prosthesis surface Y(j) is below the corresponding segment of the target bone surface X(j).
- the annotative representation 222 can include a first set of annotations {A_a1, A_a2, . . . , A_aP} and a different second set of annotations {A_b1, A_b2, . . . , A_bQ} to differentiate various degrees of misalignment in a particular direction.
- the first set {A_a1, . . . , A_aP} can differ from the second set {A_b1, . . . , A_bQ} by some easily identifiable characteristic such as labels or signs.
- annotations within a set can share a common and easily identifiable characteristic such as having the same labels or signs (e.g., a “+” label), but differ from each other in at least one characteristic such as font, size, or weight of the labels.
- An annotation can be selected from the first or the second set and applied to segment S(i) according to D(X(i), Y(i)).
- an annotation A_ap (e.g., a size-10 "+" label) can be rendered to a segment S(i) if D(X(i), Y(i)) is positive and ∥D(X(i), Y(i))∥ is within a specified range.
- a different annotation A_bq (e.g., a size-8 "−" label) can be rendered to a segment S(j) if D(X(j), Y(j)) is negative and ∥D(X(j), Y(j))∥ is within a specified range.
- the same annotation can be applied to the entire region of segment S(i), or a partial region of S(i) such as borders of S(i) between S(i) and its neighboring segments.
- the annotation is applied to the borders between the regions having different directions of misalignment, such as the borders between a region of positive misalignment (i.e., prosthesis surface is above the target bone surface) and a neighboring region of negative misalignment (i.e., prosthesis surface is below the target bone surface).
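As a minimal sketch of the annotation scheme, assuming a "+"/"-" labeling convention and an illustrative alignment threshold and label sizes (none of which are specified in this disclosure):

```python
# Illustrative sketch: choose an annotation (label, size) for a segment from
# its signed distance d; the 0.1 alignment threshold and the label sizes
# are assumptions for illustration only.

def select_annotation(d, aligned_threshold=0.1):
    """Return (label, size) for a misaligned segment, or None if aligned."""
    if abs(d) < aligned_threshold:
        return None                       # substantially aligned: no marking
    label = "+" if d > 0 else "-"         # "+" above, "-" below
    size = 8 if abs(d) < 1.0 else 10      # larger label, larger misalignment
    return (label, size)
```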
- the segment S(i) may be rendered a blend of color-coded representation 221 and annotative representation 222 .
- a segment S(i) can be green with annotative markings “+” on the borders of S(i).
- the segment S(i) can be filled with green-colored labels “+” on the entirety of S(i).
- although the articulation interface representation generator 220 is illustrated in FIG. 2 as including one or both of the color-coded representation 221 and the annotative representation 222 , other visual representations can also be included in the articulation interface representation for the purpose of differentiating various spatial relationships between the segments Y(i) and X(i).
- the color-coded representation 221 can be augmented or substituted with a pattern, a texture, or an effect such as shadow, edge enhancement, lighting, or gradient.
- FIGS. 3A-D illustrate examples of an articulation interface representation 350 between a femoral surface and a prosthesis surface, when a prosthesis model 320 is positioned against a distal femur model 310 at a first position.
- the distal femur model 310 includes a femoral surface representation 312
- the prosthesis model 320 includes a prosthesis surface representation 322 .
- the models 310 and 320 , the surface representations 312 and 322 , and the articulation interface representation 350 can be generated before and during the surgery and prosthesis placement, and displayed on a monitor, such as by using the prosthesis alignment system 100 or its various embodiments discussed in this document.
- FIG. 3A illustrates the position and orientation of the prosthesis model 320 relative to the femoral model 310 , when the two models are positioned against each other.
- the femoral model 310 and the femoral surface representation 312 each includes a data set of shape data, appearance data, or data representing other morphological characteristics of the distal femur or the femoral surface, respectively.
- the shape data may include geometric characteristics such as landmarks, surfaces, or boundaries of three-dimensional image objects.
- the appearance data may include both geometric characteristics and intensity information.
- the prosthesis model 320 and the prosthesis surface representation 322 each includes a data set having a data format or modality comparable to the distal femur model 310 and the femoral surface representation 312 .
- the femoral model 310 and the femoral surface representation 312 are represented as computer-simulated 2D contour models of the distal femur and the femoral surface.
- the prosthesis model 320 and the prosthesis surface representation 322 are represented as computer-simulated 2D contour models.
- the position of the prosthesis model 320 relative to the femoral model 310 , or the position of the prosthesis surface representation 322 relative to the femoral surface representation 312 , can be described using angles of flexion or extension. A wider angle indicates a higher degree of misalignment between various portions of the prosthesis surface representation 322 and the corresponding portions of the femoral surface representation 312 . As illustrated in FIG. 3A, three types of regions can be identified:
- region 340 A where the portion 322 A of the prosthesis surface representation is above or “outside” the corresponding portion 312 A of the femoral surface representation
- region 340 B where the portion 322 B of the prosthesis surface representation is below or “inside” the corresponding portion 312 B of the femoral surface representation
- regions 340 C where the portion 322 C of the prosthesis surface representation is substantially aligned with or “matches” the portion 312 C of the femoral surface representation.
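The three-way region classification above (340A/340B/340C) can be sketched as follows; the 0.2 tolerance is an assumed value, not one given in the disclosure.

```python
# Illustrative sketch of the three-way region classification:
# "above" (e.g., 340A), "below" (e.g., 340B), "aligned" (e.g., 340C).

def classify_segment(d, tolerance=0.2):
    """Classify a segment by the signed prosthesis-to-bone distance d."""
    if abs(d) <= tolerance:
        return "aligned"                  # substantially aligned ("matches")
    return "above" if d > 0 else "below"  # outside / inside the bone surface
```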
- FIGS. 3B-D illustrate different angles of views of the distal femur model 310 , the prosthesis model 320 positioned against the distal femur model 310 , and the articulation interface representation 350 disposed over the prosthesis model 320 .
- FIG. 3B illustrates a top view
- FIG. 3C illustrates a side view
- FIG. 3D illustrates a view at a specified angle.
- the articulation interface representation 350 is a color-coded representation that includes three base colors to represent three types of relative positions between various portions of the prosthesis surface representation 322 and the corresponding portions of the femoral surface representation 312 .
- the articulation interface representation 350 can be a black-and-white or grayscale representation. In a grayscale representation, various shades of gray would be used to illustrate the articulation interface representation.
- FIGS. 3B-D are shown in grayscale for ease of viewing.
- a first color is rendered to region 354 to indicate the portion 322 A of the prosthesis surface representation is above or “outside” the corresponding portion 312 A of the femoral surface representation.
- the first color can include a dark shade pattern.
- the first color can include green.
- a second color is rendered to region 352 to indicate the portion 322 B of the prosthesis surface representation is below or “inside” the corresponding portion 312 B of the femoral surface representation.
- the second color can include a spotted pattern.
- the second color can include red.
- a third color is rendered to region 356 to indicate the portion 322 C of the prosthesis surface representation is substantially aligned with the corresponding portion 312 C of the femoral surface representation.
- the third color can include a dotted pattern.
- the third color can include gray.
- different colors, shades, textural representations, grays, black and white coloring, or the like can be used to represent different regions.
- variations of the base color rendered to that region can be applied to sub-regions within the colored region to differentiate variations in regional similarities, such as the distance between the portions of the articulation surface and the portions of the femoral surface along a particular direction in the coordinate system.
- because the distance between 322 A and 312 A varies at different sub-regions, a sub-region with a greater distance can be rendered the first color with higher saturation or lower brightness (e.g., darker).
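A minimal sketch of the distance-to-shade mapping, assuming an HSV color model with an illustrative 2.0 mm saturation ceiling and 0.6 darkening factor (both assumptions, not values from this disclosure):

```python
import colorsys

# Illustrative sketch: darken a base hue in proportion to a sub-region's
# distance, so greater misalignment renders darker.

def shade_for_distance(base_hue, distance, max_distance=2.0):
    """Map a distance to an RGB shade of base_hue: farther -> darker."""
    frac = min(abs(distance) / max_distance, 1.0)
    value = 1.0 - 0.6 * frac          # lower brightness for greater distance
    return colorsys.hsv_to_rgb(base_hue, 1.0, value)
```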
- the articulation interface representation 350 includes three annotations that represent three types of relative positions between various portions of the prosthesis surface representation 322 and the corresponding portions of the femoral surface representation 312 . As illustrated in FIGS. 3B-D , the three annotated regions 352 , 354 and 356 indicate the prosthesis surface representation is below, above, or substantially aligned with the corresponding portion of the femoral surface representation, respectively.
- FIGS. 4A-D illustrate examples of an articulation interface representation 450 between the femoral surface 312 and the prosthesis surface 322 , when the prosthesis model 320 is positioned against the distal femur model 310 at a second position different from the first position shown in FIGS. 3A-D .
- the change from the position as shown in FIGS. 3A-D to a different position as shown in FIGS. 4A-D can be achieved by processing a user command such as via the user input module 121 .
- the user command can also selectively scale, translate, rotate, or perform any combinations of manipulations over one or both of the target bone model 310 and the prosthesis model 320 .
- the flexion angle shown in FIG. 4A is wider, and the prosthesis surface representation 322 is more misaligned with the femoral surface representation 312 .
- the resulting three regions of misalignment 440 A-C therefore have different sizes and shapes than the regions 340 A-C. For example, there is a higher degree of misalignment in the "above" direction between the surfaces 422 A and 412 A in region 440 A than between the surfaces 322 A and 312 A in region 340 A. Likewise, there is a higher degree of misalignment in the "below" direction between the surfaces 422 B and 412 B in region 440 B than between the surfaces 322 B and 312 B in region 340 B. Additionally, the substantial alignment region 440 C has a smaller size than the region 340 C.
- the articulation interface representation 450 can be a black-and-white or grayscale representation.
- FIGS. 4B-D are shown in grayscale for ease of viewing. Consistent with the changes in regional misalignment shown in FIG. 4A , FIGS. 4B-D each represent the changes in size and shape of a second colored region 452 corresponding to region 440 B, a first colored region 454 corresponding to region 440 A, and a third colored region 456 corresponding to region 440 C.
- the first, second, and third colors can include a dark shade pattern, a spotted pattern, and a dotted pattern, respectively.
- the first, second, and third colors can include green, red, and gray, respectively.
- different colors, shades, textural representations, grays, black and white coloring, or the like can be used to represent different regions.
- FIGS. 4B-D are views from different angles of the articulation interface representation 450 , each including three annotations that represent three types of relative positions (above, below, or substantial alignment) between various portions of the prosthesis surface representation 322 and the corresponding portions of the femoral surface representation 312 .
- Variations of the base color can be applied to the sub-regions of the regions 452 , 454 or 456 to differentiate within-region differences in alignment.
- base colors with different amounts of saturation or brightness can be used according to the similarity, such as the distance between the portions of the surface 312 and the corresponding portions of the surface 322 along a particular direction in a coordinate system. For example, corresponding to the greater distance between 422 A and 412 A in region 440 A compared to that between 322 A and 312 A in region 340 A, a wider sub-region within the region 454 can be rendered the first color with higher saturation or lower brightness (e.g., darker).
- a wider sub-region within the region 452 can be rendered the second color with lower saturation or higher brightness (e.g., brighter).
- when the articulation interface representation 350 or 450 is a black-and-white representation, variations of the annotation can be applied to the sub-regions of the regions 452 , 454 or 456 to differentiate within-region differences in alignment. For example, as illustrated in FIGS. 3B-D and FIGS. 4B-D , a darker color, or a pattern with markings (e.g., dots) of a higher density, can be used to indicate a greater distance between the portions of the surface 312 and the corresponding portions of the surface 322 along a particular direction in a coordinate system.
- FIG. 5 is a flowchart that illustrates an example of a method 500 for aligning a prosthesis surface to a target bone surface.
- the method 500 can be used in orthopedic surgeries such as joint resurfacing arthroplasty.
- the system 100 , including its various embodiments discussed in this document, can perform the method 500 .
- the method 500 begins with receiving a target bone model at 510 , such as by using a model receiver module 112 .
- the target bone model can include an articulation surface of the target bone.
- Examples of the target bone can include an acetabulum, a proximal or distal extremity of a femur, a proximal or distal extremity of a tibia, or any other bone in a body.
- the target bone can be surgically prepared to host a prosthesis component. At least a portion of the target bone, such as the articulation surface, undergoes surgical alteration, repair, or resection, such that the prosthesis can be securely placed against the target bone to replace the articulation surface of the target bone.
- the target bone model can include a data set characterizing geometric characteristics including position, shape, contour, or appearance of the target bone surface.
- the data set can also include intensity information.
- the target bone model can include at least one medical image such as an X-ray, an ultrasound image, a computed tomography (CT) scan, a magnetic resonance (MR) image, a positron emission tomography (PET) image, a single-photon emission computed tomography (SPECT) image, or an arthrogram, among other 2D or 3D images.
- the target bone model can be received from a patient database, or from an imaging system or an image acquisition system.
- the target bone model is calculated intra-operatively by collecting a cloud of points from the target bone using an optically tracked stylus (pointer) or similar device.
- the surface of the target bone can be probed by the surgeon to enable the computing system to calculate a target bone model from the collected points.
- multiple methods of creating a target bone model can be combined, such as fitting a database model to an actual bone using collected points.
- a prosthesis model can be received such as by using the user input module 121 .
- the prosthesis model includes a data set representing a surface of a prosthesis, which is sized, shaped or configured to at least partially replace the articulation surface of the target bone.
- the prosthesis model can be in a format of a parametric model, a statistical model, a shape-based model, a volumetric model, an elastic model, a geometric spline model, or a finite element model.
- the prosthesis model has a data format or modality comparable to the target bone model.
- information of relative positions between the target bone model and the prosthesis model is received, such as via a user input module that enables a system user to interactively select or deselect one or both of the target bone model or the prosthesis model, and alter one or more properties thereof.
- the received relative positions can include an indication of a position of the prosthesis surface representation relative to the target bone surface representation when the prosthesis surface is positioned against the target bone surface.
- the position of the target bone surface representation and the position of the prosthesis surface representation can be characterized by their respective coordinates in a common coordinate system.
- an indication of a change in positions of one or both of target bone surface representation and the prosthesis surface representation can be received.
- other properties of the models that can be altered include translation, rotation, reflection, scaling, stretching, shrinking, or any other manipulations or any combination of the manipulations over one or both of the target bone model and the prosthesis model.
- manipulations of the prosthesis model, such as scaling, stretching or shrinking, are restricted based on the actual prostheses available for implant.
- an articulation interface representation is generated, such as by using the articulation interface generator 113 .
- the articulation interface representation indicates spatial misalignment between one or more portions of the prosthesis surface and the respective one or more portions of the target bone surface, when the two models are positioned against each other along a specified direction.
- the articulation interface representation (S) can have a data format similar to the target bone surface representation (X) or the prosthesis surface representation (Y).
- the articulation interface representation (S) can be a computer-generated 2D or 3D image with a size comparable to the size of the target bone surface representation or the size of the prosthesis surface representation.
- the articulation interface representation can include a color-coded representation, an annotative representation, or other markup representations or overlays of two or more different representations to assist the surgeon in reliably positioning the prosthesis onto the target bone with improved accuracy and consistency.
- An alignment index can also be computed using the articulation interface representation. Example methods of generating the articulation interface representation are discussed below, such as with reference of FIG. 6 .
- one or more of the target bone model, the prosthesis model, and the articulation interface representation can be displayed such as on a monitor or other display module.
- the graphical representation of one or both of the target bone model or the prosthesis model can be overlaid with the articulation interface representation.
- the display module or the monitor can also interactively display the user commands such as selecting, positioning, scaling, translating, rotating, reflecting, stretching, shrinking, or any other manipulations or any combination of the manipulations over one or more of the target bone model, the prosthesis model, or the articulation interface representation.
- the display can also include a plurality of specified angles of view or projections of the target bone model, the prosthesis model, or the articulation interface representation.
- the computed alignment index can also be displayed, which can be used by the system user to adjust relative positions of the prosthesis surface representation and the target bone surface representation to achieve a desired alignment.
- FIG. 6 is a flowchart that illustrates an example of a method 600 for generating an articulation interface representation and determining a desirable alignment between the prosthesis model and the target bone model.
- the method 600 can be an example of the articulation interface generation process 540 illustrated in FIG. 5 .
- regional misalignments ( ⁇ ) between various regions of the two surfaces can be calculated at 610 , such as by using the misalignment calculator 210 .
- the regional misalignment ( ⁇ ) can be a quantified representation of the discrepancies between the shapes, morphologies, or topologies of multiple regions created on the target bone surface representation (X) and the prosthesis surface representation (Y).
- segments ⁇ X(i) ⁇ and ⁇ Y(i) ⁇ can be created with pre-determined size and shape irrespective of the anatomical or morphologic structures of the target bone surface or the prosthesis surface. For example, when the target bone model (X) and the prosthesis model (Y) are each represented by digital images, segments ⁇ X(i) ⁇ and ⁇ Y(i) ⁇ can be sized to contain a specified number of pixels of the respective digital images.
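The fixed-size segmentation described above can be sketched as follows; the tile representation and grid dimensions are illustrative assumptions.

```python
# Illustrative sketch: partition a surface representation (a width x height
# grid of samples) into fixed-size square tiles irrespective of anatomical
# structure, so corresponding segments X(i) and Y(i) cover the same region.

def make_segments(width, height, block):
    """Return (x0, y0, x1, y1) tiles covering the grid in row-major order."""
    tiles = []
    for y0 in range(0, height, block):
        for x0 in range(0, width, block):
            tiles.append((x0, y0,
                          min(x0 + block, width), min(y0 + block, height)))
    return tiles
```

Edge tiles are clipped to the grid bounds, so segments at the border may contain fewer pixels than interior segments.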
- features can be extracted from each target bone surface segment X(i) and the corresponding prosthesis surface segment Y(i), and a regional similarity measure D(X(i), Y(i)) between the segments X(i) and Y(i) can be computed.
- D(X(i), Y(i)) can include an L1 norm, an L2 norm (Euclidean distance), an infinity norm, or other distance-based measures in a vector space; or a correlation coefficient, mutual information, or ratio image uniformity.
- D(X(i), Y(i)) can also be computed as a signed distance between X(i) and Y(i) along a particular direction in the coordinate system.
- the sign of D(X(i),Y(i)) can indicate the relative position of Y(i) with respect to X(i) when Y(i) and X(i) are positioned against each other. For example, the distance is positive if Y(i) is above X(i) by at least a specified amount, and negative if Y(i) is below X(i) by at least a specified amount.
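Two of the similarity measures above can be sketched as follows, assuming each segment is represented as a vector of surface heights sampled along a chosen direction of a common coordinate system (an assumed representation for illustration):

```python
# Illustrative sketch of two regional similarity measures between a bone
# segment X(i) and a prosthesis segment Y(i), each a vector of heights.

def l2_distance(x, y):
    """Euclidean (L2) distance between two equally sized sample vectors."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def signed_distance(x, y):
    """Mean signed offset of Y over X: positive if Y lies above X."""
    return sum(b - a for a, b in zip(x, y)) / len(x)
```

The sign convention matches the text: a positive value indicates the prosthesis segment lies above the bone segment, a negative value that it lies below.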
- a visual representation of an articulation interface can be generated using the calculated regional misalignments.
- the visual representation can include a color-coded representation, an annotative representation, or a combination of the two such as a colored annotation.
- a specified color, C(i) can be rendered to a segment of the articulation interface representation, S(i), if the similarity measure D(X(i), Y(i)) meets a specified criterion, such as exceeding a specified threshold or falling within a specified range.
- a first color C_a (e.g., a green color) can be rendered to a segment S(i) if D(X(i), Y(i)) is positive, which indicates that the segment Y(i) is misaligned in a first direction relative to the corresponding segment X(i), such as being positioned above X(i) by at least a specified amount.
- a different second color C_b (e.g., a red color) can be rendered to a segment S(j) if D(X(j), Y(j)) is negative, which indicates that the segment Y(j) is misaligned in a different second direction relative to the corresponding segment X(j), such as being positioned below X(j) by at least a specified amount.
- a third color C_0 (e.g., a gray color), different from C_a and C_b, can be rendered to a segment S(k) if the magnitude of D(X(k), Y(k)), ∥D(X(k), Y(k))∥, is below a specified threshold, which indicates that the prosthesis surface segment Y(k) is substantially aligned with the corresponding target bone surface segment X(k).
- a set of colors {C_a1, C_a2, . . . , C_aP} can be used to represent various degrees of misalignment in the first direction between X(i) and Y(i), such as when Y(i) is positioned above X(i).
- a different set of colors {C_b1, C_b2, . . . , C_bQ} can be used to represent various degrees of misalignment in the second direction between X(i) and Y(i), such as when Y(i) is positioned below X(i).
- a correspondence, such as a lookup table, between the multiple colors in the color sets and the value ranges of D(X(i), Y(i)) can be established.
- the similarity measure D(X(i), Y(i)) for a segment S(i) can be compared to multiple threshold values or range values, and one color can be selected from the color sets {C_a1, . . . , C_aP} or {C_b1, . . . , C_bQ} and rendered to the segment S(i).
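The lookup-table correspondence can be sketched as follows; the range boundaries (in mm) and color names are assumed values for illustration only.

```python
import bisect

# Illustrative sketch of the lookup-table approach: map the magnitude of
# D(X(i), Y(i)) to one color within a direction's color set.

BOUNDARIES = [0.5, 1.0, 2.0]                  # upper bounds of ranges 1..3
COLOR_SET = ["C_a1", "C_a2", "C_a3", "C_a4"]  # last color: open-ended range

def lookup_color(magnitude):
    """Select the color whose range contains the misalignment magnitude."""
    return COLOR_SET[bisect.bisect_left(BOUNDARIES, magnitude)]
```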
- the color-coded representation can be augmented or substituted by a pattern, a texture, or a style with effects such as shadow, edge enhancement, lighting, or gradient.
- a specified annotation such as one or a combination of different forms of annotations including signs, labels, lines, texts, or any other markings, can be rendered to an articulation interface representation segment S(i) according to the similarity measure D(X(i), Y(i)).
- a first annotation A a can be rendered to segment S(i) if D(X(i), Y(i)) is positive, which indicates that the prosthesis surface segment Y(i) is above the corresponding target bone surface segment X(i) by at least a specified amount.
- a different second annotation A_b can be rendered to a different segment S(j) if D(X(j), Y(j)) is negative, which indicates that the segment Y(j) is below the segment X(j) by at least a specified amount.
- a third annotation A_0 can be rendered to a segment S(k) if ∥D(X(k), Y(k))∥ is below a specified threshold, indicating that the prosthesis surface segment Y(k) is substantially aligned with the target bone surface segment X(k).
- annotations in an annotation set can be used to differentiate various degrees of misalignment between segments Y(i) and X(i), such as by comparing D(X(i), Y(i)) to multiple threshold values or range values, in a manner similar to that discussed above for color-coded representations.
- a user input can be received such as via a user input module.
- the user input can include an indication of selecting or deselecting one or more of the target bone surface representation, the prosthesis surface representation, or the articulation interface representation.
- the user input can also include change in positions of one or more of target bone surface representation, the prosthesis surface representation, or the articulation interface representation.
- Relative position change between the target bone model and the prosthesis model is then detected at 640 .
- both the target bone model and the prosthesis model are selected, such that their positions are concurrently changed while the relative positions or orientations between them are preserved.
- only one of the target bone model or the prosthesis model is selected and repositioned, and a relative position change is detected at 640 .
- the alignment index can be a measure of an overall disconformity between the target bone surface representation and the prosthesis surface representation.
- the alignment index (L) can be computed as the square root of the sum of the squared regional similarity measures over all segments.
- the alignment index can also include statistical measurements of disconformities such as maximum, minimum, average, median, range, histogram, or other distributional representations of the disconformities among multiple regions of the target bone surface and the articulation interface.
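The alignment index and the summary statistics above can be sketched as follows; the function names are illustrative.

```python
# Illustrative sketch: the alignment index as a root-sum-of-squares of the
# per-segment similarity measures, plus simple summary statistics of the
# regional disconformities.

def alignment_index(distances):
    """Square root of the sum of squared regional similarity measures."""
    return sum(d * d for d in distances) ** 0.5

def disconformity_stats(distances):
    """Max/min/mean of the per-segment disconformity magnitudes."""
    mags = [abs(d) for d in distances]
    return {"max": max(mags), "min": min(mags), "mean": sum(mags) / len(mags)}
```

A lower index indicates better overall conformity; the statistics can help a user see whether residual misalignment is concentrated in a few segments or spread across the interface.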
- the user input detected at 630 can also include an indication of the angles of view or projections of the target bone model, the prosthesis model, or the articulation interface representation.
- For example, as illustrated in FIGS. 3A-D and FIGS. 4A-D , multiple angles of view of a computer-generated 3D image of the distal femur model and the associated femoral surface representation can be generated and displayed, including a top view, a side view, a front view, or any perspective obtained after a specified rotation. If no change of viewing angle is detected at 660 , then the generated articulation interface representation can be displayed at 550 .
- the articulation interface representation can be transformed at 670 in accordance with the change of angle of view.
- the transformed articulation interface can be displayed at 550 .
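The viewing-angle transform at 670 can be sketched as a rigid rotation of the representation's 3D points; the choice of a z-axis rotation is an illustrative assumption.

```python
import math

# Illustrative sketch: when the viewing angle changes, the 3D points of the
# articulation interface representation can be transformed before redisplay;
# here, a rotation about the z-axis by a requested angle.

def rotate_z(points, angle_deg):
    """Rotate (x, y, z) points by angle_deg about the z-axis."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]
```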
- FIG. 7 is a block diagram that illustrates an example of a machine in the form of a computer system 700 within which instructions, for causing the computer system to perform any one or more of the methods discussed herein, may be executed.
- the machine can operate as a standalone device or may be connected (e.g., networked) to other machines.
- the machine may operate in the capacity of a server or a client machine in server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
- the machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a PDA, a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine.
- the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
- the example computer system 700 includes a processor 702 (such as a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 704 and a static memory 706 , which communicate with each other via a bus 708 .
- the computer system 700 may further include a video display unit 710 (such as a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alpha-numeric input device 712 (such as a keyboard), a user interface (UI) navigation device (or cursor control device) 714 (such as a mouse), a disk drive unit 716 , a signal generation device 718 (e.g., a speaker) and a network interface device 720 .
- the disk drive unit 716 includes a machine-readable storage medium 722 on which is stored one or more sets of instructions and data structures (e.g., software) 724 embodying or used by any one or more of the methods or functions described herein.
- the instructions 724 may also reside, completely or at least partially, within the main memory 704 , static memory 706 , and/or within the processor 702 during execution thereof by the computer system 700 , the main memory 704 and the processor 702 also constituting machine-readable media.
- the instructions 724 stored in the machine-readable storage medium 722 include instructions causing the computer system 700 to receive a target bone model such as a first data set representing a target bone surface, and to receive a prosthesis model such as a second data set representing a prosthesis surface.
- the prosthesis component can be configured to be positioned against the target bone such as to partially replace the articulation surface of the target bone.
- the machine-readable storage medium 722 can also store instructions 724 that cause the computer system 700 to generate an articulation interface representation, such as one or a combination of a color-coded representation, an annotative representation, or other markup representations or overlays of two or more different representations, using both the target bone surface representation and the prosthesis surface representation, and to calculate an alignment index that represents overall disconformity between the target bone surface and the articulation interface.
- the machine-readable storage medium 722 may further store the instructions 724 that cause the computer system 700 to receive a user input including an indication of a change in position of one or both of the prosthesis surface model or the target bone surface model, and to update the articulation interface representation in response to the user input including the indication of the change in relative positions between the prosthesis surface model or the target bone surface model.
- the instructions in the machine-readable storage medium 722 may also cause the computer system 700 to generate a representation illustrating one or more of the target bone model, the prosthesis model, and the articulation interface representation.
- While the machine-readable medium 722 is shown in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions or data structures.
- The term “machine-readable storage medium” shall also be taken to include any tangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methods of the present invention, or that is capable of storing, encoding, or carrying data structures used by or associated with such instructions.
- The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.
- Machine-readable media include non-volatile memory, including by way of example semiconductor memory devices (e.g., erasable programmable read-only memory (EPROM) and electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
- The terms “machine-readable medium” and “machine-readable storage medium” are applicable even if the machine-readable medium is further characterized as being “non-transitory.”
- Any addition of “non-transitory,” such as in “non-transitory machine-readable storage medium,” is intended to continue to encompass register memory, processor cache, and RAM, among other memory devices.
- The instructions 724 may further be transmitted or received over a communications network 726 using a transmission medium.
- The instructions 724 may be transmitted using the network interface device 720 and any one of a number of well-known transfer protocols (e.g., HTTP).
- Examples of communication networks include a LAN, a WAN, the Internet, mobile telephone networks, plain old telephone (POTS) networks, and wireless data networks (e.g., Wi-Fi and WiMAX networks).
- The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.
- Method examples described herein can be machine or computer-implemented at least in part. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples.
- An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, in an example, the code can be tangibly stored on one or more volatile, non-transitory, or non-volatile tangible computer-readable media, such as during execution or at other times.
- Examples of these tangible computer-readable media can include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like.
Abstract
Description
- This patent application claims the benefit of priority of U.S. Provisional Patent Application Ser. No. 61/948,102, filed on Mar. 5, 2014, which is hereby incorporated by reference herein in its entirety.
- This document relates generally to computer-aided orthopedic surgery, and more specifically to systems and methods for computer-aided alignment and positioning of a prosthesis onto a bone surface.
- The use of computers, robotics, and imaging to aid orthopedic surgery is well known in the art. There has been a great deal of study and development of computer-aided navigation and robotics systems used to guide surgical procedures. For example, a precision freehand sculptor employs a robotic surgery system to assist the surgeon in accurately cutting a bone into a desired shape. In interventions such as total hip replacement, computer-aided surgery techniques have been used to improve the accuracy and reliability of the surgery. Orthopedic surgery guided by images has also been found useful in preplanning and guiding the correct anatomical position of displaced bone fragments in fractures, allowing a good fixation by osteosynthesis.
- Dysfunctional joints such as hips or knees may require surgical treatment in case of arthritis, avascular necrosis, or other debilitating pathological conditions. Joint resurfacing arthroplasty is a joint replacement procedure where a portion of the joint tissue, such as an articulation surface, is replaced by a resurfacing prosthesis. Joint resurfacing arthroplasty has been found to be a viable alternative to the traditional total joint replacement, such as total knee or total hip replacement, in certain patients particularly younger and active patients with relatively strong bones. As an example, hip resurfacing arthroplasty involves replacing worn cartilage and damaged bone tissue in the acetabulum with a cup-shaped prosthesis liner in the acetabulum. The femoral head can be trimmed and reshaped, and a femoral prosthesis cap can be permanently affixed to the trimmed femoral head. The prosthesis capped femoral head and the acetabular cup can then be reconnected to restore the function of the hip joint.
- Proper positioning of joint resurfacing components can be crucial to the outcome of prosthesis implantation procedures such as joint resurfacing arthroplasty. In hip resurfacing arthroplasty, the acetabular cup needs to be properly positioned onto and aligned with the resurfaced acetabulum before the acetabular cup is pressure-fit into the resurfaced acetabulum. Similarly, the femoral cap must be properly aligned to the trimmed host femoral head before the femoral cap can be cemented into the trimmed femoral head. Proper alignment and positioning of resurfacing components onto the respective host bones allows the desirable range and flexibility of joint movement. Malpositioning of prosthesis components, however, can result in impingement, increased rates of dislocation, and wear and failure of the prosthesis, among many other complications. In particular, impingement of the femoral neck on the acetabular component of a hip resurfacing may result in femoral neck fracture and loosening of the acetabular component.
- Positioning of a prosthesis onto a resurfaced target host bone usually requires a surgeon to mentally map and compare the shape, orientation, and relative positions of the prosthesis components and the target bones. Computer-aided tools can be used to assist the surgeon in better aligning the prosthesis components against the target bones. These methods and tools, however, can be difficult to operate and may suffer from a lack of reliability and certainty. For example, some surgical areas can have reduced visualization and access, particularly during minimally invasive surgery and arthroscopic techniques. Identifying the misaligned regions on the prosthesis components and/or on the host target bones can be problematic and imprecise due to the interference of surrounding tissues within the surgical area. Determining and visualizing the correct positions and orientations of the prosthesis with respect to the target bone can be practically difficult. Therefore, the present inventors have recognized that there remains a considerable need for systems and methods that can assist the surgeon in reliably positioning the prosthesis onto the target bone with improved accuracy and consistency.
- Various embodiments described herein can help improve the accuracy and the reliability of positioning a prosthesis such as a resurfacing component onto a target bone such as a host femur or acetabulum. When integrated within, or used in conjunction with, a robotic surgical instrument, the present systems and methods can improve the outcome for a patient undergoing an orthopedic procedure. For example, a prosthesis positioning and alignment system can include a processor unit and a user interface unit. The processor unit can receive a target bone model including a first data set representing a target bone surface, and a prosthesis model including a second data set representing a prosthesis surface. The prosthesis is configured to at least partially replace an articulation surface of the target bone. Using the target bone surface representation and the prosthesis surface representation, the processor unit can generate an articulation interface representation that indicates spatial misalignment between one or more portions of the prosthesis surface and one or more portions of the target bone surface when the prosthesis model is positioned against the target bone model. The user interface can include a user input module that receives an indication of a change in position of the target bone model or the prosthesis model. The user interface also includes a display module that can display one or more of the target bone model, the prosthesis model, and the articulation interface representation.
- A method embodiment for aligning a prosthesis surface to a target bone surface can comprise the operations of receiving a target bone model which includes a data set representing a target bone surface, a prosthesis model which includes a data set representing a prosthesis surface, and an indication of a position of the target bone model relative to a position of the prosthesis model. The method also comprises generating an articulation interface representation, which is indicative of one or more portions of the prosthesis surface being spatially misaligned with one or more portions of the target bone surface when the two models are positioned against each other at certain locations and orientations. One or more of the target bone model, the prosthesis model, and the articulation interface representation can be displayed on a display module to provide feedback to a system user such as a surgeon and assist the system user in properly positioning the prosthesis on the target bone.
- A machine-readable storage medium embodiment of the present document can include instructions that, when executed by a machine, cause the machine to receive a target bone model including a first data set representing a target bone surface, a prosthesis model including a second data set representing a prosthesis surface, and an indication of a position of the target bone model relative to a position of the prosthesis model. The machine can be caused to generate an articulation interface representation indicative of one or more portions of the prosthesis surface being spatially misaligned with respective one or more portions of the target bone surface when the two models are positioned against each other at certain positions and orientations. The instructions can also cause the machine to display one or more of the target bone model, the prosthesis model, and the articulation interface representation on a display module to provide feedback to a system user.
- This Summary is an overview of some of the teachings of the present application and is not intended to be an exclusive or exhaustive treatment of the present subject matter. Further details about the present subject matter are found in the detailed description and appended claims. Other aspects of the invention will be apparent to persons skilled in the art upon reading and understanding the following detailed description and viewing the drawings that form a part thereof, each of which is not to be taken in a limiting sense. The scope of the present invention is defined by the appended claims and their legal equivalents.
- Various embodiments are illustrated by way of example in the figures of the accompanying drawings. Such embodiments are demonstrative and not intended to be exhaustive or exclusive embodiments of the present subject matter.
- FIG. 1 is a block diagram that illustrates an example of a prosthesis alignment and positioning system.
- FIG. 2 is a block diagram that illustrates an example of an articulation interface generator in a prosthesis alignment and positioning system.
- FIGS. 3A-D illustrate examples of articulation interface representations between a femoral surface and a prosthesis surface at one position of the prosthesis.
- FIGS. 4A-D illustrate examples of articulation interface representations between a femoral surface and a prosthesis surface at another position of the prosthesis.
- FIG. 5 is a flowchart that illustrates an example of a method for aligning a prosthesis surface to a target bone surface.
- FIG. 6 is a flowchart that illustrates an example of a method for generating an articulation interface representation.
- FIG. 7 is a block diagram that illustrates an example of a computer system within which instructions for causing the computer system to perform prosthesis alignment may be executed.
- Disclosed herein are systems, devices, and methods for computer-aided positioning and alignment of a prosthesis component onto a target bone. Various embodiments described herein can help improve the efficacy and the reliability of osteoplasty planning, such as in an orthopedic implant surgery. The methods and devices described herein can also be applicable to planning surgery on pathological bones under various other conditions.
-
FIG. 1 is a block diagram that illustrates an example of a prosthesis alignment system 100 for use in an orthopedic surgery on a target bone. The system 100 includes a processor unit 110 and a user interface unit 120. The system 100 can be configured to generate, and present to a system user, a representation of an articulation interface between a prosthesis component and a target bone shaped to host the prosthesis component. The system 100 can also provide the system user with information including the degree of alignment between the prosthesis and the target bone. - The
processor unit 110 can include a user input receiver 111, a model receiver module 112, an articulation interface generator 113, and an alignment index calculator 114. The user input receiver 111 can receive a target bone model such as a first data set representing a target bone surface. The target bone surface can be an articulation surface of the target bone, such as an acetabular surface, a surface of a proximal or distal extremity of a femur, a surface of a proximal or distal extremity of a tibia, or a surface of any other bone in a body. The target bone model can include a medical image, a point cloud, a parametric model, or other morphological description of the target bone. The medical images can include two-dimensional (2D) or three-dimensional (3D) images.
- The
user input receiver 111 can be coupled to the user interface unit 120, such as via a user input module 121, to receive the target bone model. The user input module 121 can receive the target bone model from a patient database. The user input module 121 can alternatively be coupled to an imaging system or other image acquisition module within or external to the system 100. The imaging system or the image acquisition module can feed the target bone model (e.g., one or more images or point clouds) to the system 100 via the user input module 121. - The
model receiver module 112 can receive a prosthesis model such as a second data set representing a prosthesis surface. The prosthesis is configured to at least partially replace the articulation surface of the target bone when the prosthesis surface is aligned with the articulation surface. The prosthesis model can include information such as the shape or appearance of the prosthesis surface. The prosthesis model can be in a form of a parametric model, a statistical model, a shape-based model, a volumetric model, an elastic model, a geometric spine model, or a finite element model. In some embodiments, the target bone model received from the user input module 121 has a data format or modality comparable to the prosthesis model received from the model receiver module 112. For example, the user input module 121 receives a 3D graphical representation such as a medical image of the surface of a femoral head. Accordingly, the model receiver module 112 receives a 3D graphical representation such as a computer-simulated image of the prosthesis surface. - The
articulation interface generator 113, coupled to the model receiver module 112 and the user input receiver 111, is configured to generate an articulation interface representation using both the target bone surface representation and the prosthesis surface representation. The articulation interface representation can be indicative of one or more portions of the prosthesis surface being spatially misaligned with one or more portions of the target bone surface when the two models are positioned against each other in specified positions. The articulation interface representation can include a color-coded representation, an annotative representation, or other formats of representations or overlays of two or more different representations. The articulation interface representation provides feedback to the system user, such as a surgeon, and assists the system user in properly positioning the prosthesis on the target bone. Examples of the articulation interface generator 113 are discussed below, such as with reference to FIG. 2. - The
alignment index calculator 114 can be configured to calculate an alignment index using the articulation interface representation. The alignment index can be a measure of an overall disconformity between the target bone surface and the prosthesis surface. In an example, L can be computed as the square root of the sum of squared regional similarity measures among all segments, that is: -
L = √(Σi=1 N ∥D(X(i), Y(i))∥2) - The alignment index can also include statistics of disconformity measurements among various regions of the articulation interface representation. For example, the alignment index can include maximum, minimum, average, median, range, histogram, or spatial distributions of the disconformities among multiple regions of the target bone surface and the prosthesis surface. The
alignment index calculator 114 can further determine that a desirable alignment between the prosthesis surface and the target bone surface has been achieved when the alignment index meets a specified criterion. - The
user interface unit 120 can include a user input module 121 and a display module 122. As discussed above, the user input module 121 can receive a target bone model including a representation of the target bone surface and provide the target bone model to the processor unit 110 through the user input receiver 111. The user input module 121 can be communicatively coupled to an external module for generating or storing the target bone model. - The
user input module 121 can include one or more pointing devices, such as a mouse, a trackball, or a touch screen, that are connected to the user interface unit 120. The pointing device enables system users to issue user commands to the system 100. For example, the user input module 121 can receive a user command that selectively alters one or more properties of the target bone model or one or more properties of the prosthesis model. Examples of user commands include translating, rotating, reflecting, scaling, stretching, shrinking, or performing any other manipulations or combinations of the manipulations over one or both of the target bone model or the prosthesis model. The user command can also include a plurality of specified angles of views or projections of the target bone model, the prosthesis model, or the articulation interface representation. The user command can further include selection, deselection, change in position, or change in orientation of either or both of the target bone model and the prosthesis model. In an example, when only one model (either the target bone model or the prosthesis model) is selected and repositioned, the user input module 121 can receive a change of position of the selected model relative to the other model. In another example, when both the target bone model and the prosthesis model are selected, the user input module 121 can receive a concurrent change of positions of both models with preserved relative position between the two models. In response to the user command, the processor unit 110 can update the presentations of the target bone model, the prosthesis model, or the articulation interface representation. In an example, the articulation interface generator 113 can re-generate an articulation interface representation if there is a change of relative positions or orientations between the target bone surface representation and the prosthesis surface representation.
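The selection behavior described above — moving one model relative to the other, or moving both models together with their relative position preserved — can be sketched as follows. The model names, the selection mechanism, and the translation-only command are illustrative assumptions; rotation and the other manipulations would follow the same pattern:

```python
def translate(points, offset):
    """Shift a surface model, given as (x, y, z) points, by a user offset."""
    dx, dy, dz = offset
    return [(x + dx, y + dy, z + dz) for (x, y, z) in points]

def apply_move(bone, prosthesis, selected, offset):
    """Reposition the selected model(s). When both models are selected,
    both move together and their relative position is preserved; when
    only one is selected, its position changes relative to the other."""
    if "bone" in selected:
        bone = translate(bone, offset)
    if "prosthesis" in selected:
        prosthesis = translate(prosthesis, offset)
    return bone, prosthesis

bone = [(0.0, 0.0, 0.0)]
prosthesis = [(0.0, 0.0, 1.0)]

# Both selected: concurrent move, relative position preserved.
b2, p2 = apply_move(bone, prosthesis, {"bone", "prosthesis"}, (1.0, 0.0, 0.0))
# Only the prosthesis selected: it moves relative to the bone model.
b3, p3 = apply_move(bone, prosthesis, {"prosthesis"}, (0.0, 0.0, -1.0))
```

After either move the articulation interface representation would be regenerated, since the relative positions of the two surface representations may have changed.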
In another example, the articulation interface generator 113 can perform a spatial transformation of the articulation interface representation in response to a user command of translating or rotating a model, or projecting a model from a different specified angle. - In some examples, the
user input module 121 can receive a command of animating an image sequence of the target bone model, the prosthesis model, or the articulation interface representation. The command of animation can include parameters controlling the speed (e.g., frame rate), direction, and effects of motion of one or both of the models. The animation can show the relative positions or orientations of one model (e.g., the target bone model) with respect to the other model (e.g., the prosthesis model). - The
display module 122, coupled to the user input module 121 and the processor unit 110, can be configured to display the target bone model, the prosthesis model, and the articulation interface representation. In an example, the display module 122 can generate an articulation interface representation overlaid with one or both of the target bone model or the prosthesis model. Examples of the display module 122 are discussed below, such as with reference to FIGS. 3A-D and FIGS. 4A-D. -
FIG. 2 is a block diagram that illustrates an example of an articulation interface generator 113. The articulation interface generator 113 can include a misalignment calculator 210 and an articulation interface representation generator 220. The misalignment calculator 210, coupled to the user input receiver 111 and the model receiver module 112, can calculate a regional misalignment measure (Δ) between the target bone surface representation (X) and the prosthesis surface representation (Y), when the prosthesis model is positioned against the target bone model at a specified position. - The
misalignment calculator 210 can include a segmentation module 211 and a regional similarity calculation module 212. The segmentation module 211 can partition the target bone surface representation (X) into a plurality of segments {X(i)} (i=1, 2, . . . , N). Similarly, the segmentation module 211 can partition the prosthesis surface representation (Y) into a plurality of segments {Y(i)} (i=1, 2, . . . , N). The correspondence between segments X(i) and Y(i) can be established based on their similar locations or morphologies on their respective surface representations.
- The regional
similarity calculation module 212 can calculate a regional similarity measure D(X(i), Y(i)). D(X(i), Y(i)) is a quantitative measure of the regional misalignment in shapes, morphologies, or topologies between the segments X(i) and Y(i). D(X(i), Y(i)) can be computed using features extracted from the segments X(i) and Y(i). Examples of features can include a location such as coordinates in a coordinate system, an orientation, a curvature, a contour, a shape, an area, a volume, or other geometric or volumetric parameters. The features can also include one or more intensity-based parameters. The features can be extracted in the space domain, frequency domain, or space-frequency domain. In various examples, the features may include statistical measurements derived from the geometric or intensity-based parameters, such as the mean, median, mode, variance, covariance, and other second or higher order statistics. - Depending on the types of the features or the data format representing the features (such as the imaging modality or image type), different regional similarity measures D(X(i), Y(i)) can be used. In an example, the extracted features are coordinates of X(i) and Y(i) in a common coordinate system. The regional similarity measure D(X(i), Y(i)) can be computed as an L1 norm, an L2 norm (Euclidean distance), an infinity norm, or other distance measurements in the normed vector space. The regional similarity measure D(X(i), Y(i)) can also be computed as a signed distance between X(i) and Y(i) along a particular direction in the coordinate system. The sign of D(X(i), Y(i)) can indicate the relative position of Y(i) with respect to X(i). For example, the distance is positive if Y(i) is above X(i) by at least a specified amount, or negative if Y(i) is below X(i) by at least a specified amount.
In some examples, the extracted features are intensity-based features, and the similarity measure D(X(i), Y(i)) can be computed as a correlation coefficient, mutual information, or ratio image uniformity.
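As a concrete sketch of two of the distance-based options above, the signed distance and the Euclidean (L2) distance between corresponding segment points can be computed as follows. The segment centroids are hypothetical, and the choice of the z axis as the signing direction is an assumption made for illustration:

```python
import math

def signed_distance(x_seg, y_seg):
    """Signed distance along the z direction: positive when the prosthesis
    segment Y(i) lies above the bone segment X(i), negative when below."""
    return y_seg[2] - x_seg[2]

def l2_distance(x_seg, y_seg):
    """Unsigned alternative: Euclidean (L2) distance between segment points."""
    return math.dist(x_seg, y_seg)

bone_seg = (1.0, 2.0, 0.5)        # hypothetical centroid of segment X(i)
prosthesis_seg = (1.0, 2.0, 1.0)  # corresponding centroid of segment Y(i)

d_signed = signed_distance(bone_seg, prosthesis_seg)  # 0.5: Y(i) above X(i)
d_l2 = l2_distance(bone_seg, prosthesis_seg)          # 0.5
```

The signed form preserves the direction of misalignment, which the color-coded and annotative representations described below rely on; the unsigned norms discard it.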
- The articulation
interface representation generator 220 can generate an articulation interface representation (S) using the regional similarity measure D(X(i), Y(i)). The articulation interface representation (S) can be a computer-generated 2D or 3D image with a size comparable to the size of the target bone surface or the size of the prosthesis surface. The articulation interface representation (S) can be segmented in a fashion similar to the segmentation of the target bone surface and the prosthesis surface. For example, the articulation interface representation (S) can be segmented into a plurality of segments {S(i)} for i=1, 2, . . . , N, with the segment S(i) corresponding to the segment of the target bone surface representation X(i) and the segment of the prosthesis surface representation Y(i). - The articulation interface representation (S) can be either one or a combination of a color-coded representation 221 or an annotative representation 222. In the color-coded representation 221, a specified color C(i) can be rendered to segment S(i) of the articulation interface representation according to the similarity measure D(X(i), Y(i)), such as when D(X(i), Y(i)) exceeds a specified threshold or falls within a specified range. In one embodiment, the color-coded representation 221 can include at least a first color Ca and a different second color Cb. In an example, Ca and Cb can be two colors having different hues, such as green and red. The color Ca can be used to denote a misalignment in a first direction between a portion of the prosthesis surface and a corresponding portion of the target bone surface. The color Cb can be used to denote a misalignment in a different second direction between a portion of the prosthesis surface and a corresponding portion of the target bone surface. In an example where the similarity measure D(X(i), Y(i)) is computed as a signed distance between X(i) and Y(i) along a particular direction in the coordinate system, Ca is rendered to segment S(i) if D(X(i), Y(i)) is positive, which indicates that the segment of the prosthesis surface Y(i) is above the corresponding segment of the target bone surface X(i) by at least a specified amount; and Cb is rendered to segment S(j) if D(X(j), Y(j)) is negative, which indicates that the segment of the prosthesis surface Y(j) is below the corresponding segment of the target bone surface X(j) by at least a specified amount. - The color-coded representation 221 can additionally include a third color C0 different from the first color Ca and the second color Cb, such as a color having a different hue than Ca or Cb. C0 can be rendered to a segment S(k) of the articulation interface representation if the segment of the prosthesis surface Y(k) is within a specified range relative to, and not substantially misaligned with, the segment of the target bone surface X(k). For example, when ∥D(X(k), Y(k))∥ is smaller than a specified threshold, the segments X(k) and Y(k) are regarded as substantially aligned to each other, and the color C0 can be applied to the articulation interface segment S(k). - In another example, the color-coded representation 221 can include a first set of colors {Ca 1, Ca 2, . . . , Ca P} and a different second set of colors {Cb 1, Cb 2, . . . , Cb Q}. The first set {Ca 1, Ca 2, . . . , Ca P} can differ from the second set {Cb 1, Cb 2, . . . , Cb Q} by some easily identifiable characteristic such as hue. The colors within a set can share a common and easily identifiable characteristic such as having the same hue, but differ from each other in at least one color parameter such as saturation or brightness. In an example, the first set includes green colors with different saturation or brightness {G1, G2, . . . , GP}, and the second set includes red colors with different saturation or brightness {R1, R2, . . . , RQ}.
- In
annotative representation 222, a specified annotation A(i) can be applied to a segment of the articulation interface representation, S(i), according to the similarity measure D(X(i), Y(i)). A(i) can be in one or a combination of different forms including signs, labels, lines, texts, or any other markings. In an example, for a region in the articulation interface representation, a first annotation Aa can be applied therein if a portion of the prosthesis surface misaligns in a first direction with a corresponding portion of the target bone surface, or a second, different annotation Ab can be applied if a portion of the prosthesis surface misaligns in a second direction with a corresponding portion of the target bone surface. For example, similar to the color-coded representation 221, if the similarity measure D(X(i), Y(i)) is computed as a signed distance between X(i) and Y(i) along a particular direction in the coordinate system, Aa can be applied to segment S(i) if D(X(i), Y(i)) is positive, which indicates that the prosthesis surface Y(i) is above the corresponding segment of the target bone surface X(i) by at least a specified amount. Ab can be applied to segment S(j) if D(X(j), Y(j)) is negative, which indicates that the prosthesis surface Y(j) is below the corresponding segment of the target bone surface X(j). Similar to a color set, the annotative representation 222 can include a first set of annotations {Aa 1, Aa 2, . . . , Aa P} and a different second set of annotations {Ab 1, Ab 2, . . . , Ab Q} to differentiate various degrees of misalignment in a particular direction. The first set {Aa 1, Aa 2, . . . , Aa P} can differ from the second set {Ab 1, Ab 2, . . . , Ab Q} by some easily identifiable characteristic such as labels or signs. 
- The annotations within a set can share a common and easily identifiable characteristic such as having the same labels or signs (e.g., a “+” label), but differ from each other in at least one characteristic such as font, size, or weight of the labels. An annotation can be selected from the first or the second set and applied to segment S(i) according to D(X(i), Y(i)). For example, an annotation Aa p (e.g., a size-10 “+” label) can be rendered to a segment S(i) if D(X(i), Y(i)) is positive and ∥D(X(i), Y(i))∥ is within a specified range. Likewise, a different annotation Ab q (e.g., a size-8 “−” label) can be rendered to a segment S(j) if D(X(j), Y(j)) is negative and ∥D(X(j), Y(j))∥ is within a specified range. The same annotation can be applied to the entire region of segment S(i), or to a partial region of S(i) such as the borders between S(i) and its neighboring segments. In some examples, the annotation is applied to the borders between regions having different directions of misalignment, such as the borders between a region of positive misalignment (i.e., prosthesis surface above the target bone surface) and a neighboring region of negative misalignment (i.e., prosthesis surface below the target bone surface).
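The annotation selection can be sketched analogously to the color selection. This is an illustrative sketch only: the epsilon, the range boundaries, and the specific font sizes are assumptions, not values from the patent.

```python
def pick_annotation(d, aligned_eps=0.5, ranges=(1.0, 2.0)):
    """Choose an annotation (label, font size) for a segment from the
    signed distance d = D(X(i), Y(i)).

    A '+' label marks the prosthesis surface above the bone surface and
    a '-' label marks it below; a larger misalignment magnitude gets a
    larger font size so the degree of misalignment is visible at a glance."""
    if abs(d) < aligned_eps:
        return None                      # substantially aligned: no marking
    label = "+" if d > 0 else "-"
    if abs(d) < ranges[0]:
        size = 8
    elif abs(d) < ranges[1]:
        size = 10
    else:
        size = 12
    return (label, size)
```

A renderer would then draw the returned label over the whole segment or only along its borders, as the text describes.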
- In various examples, the segment S(i) may be rendered as a blend of the color-coded representation 221 and the annotative representation 222. For example, a segment S(i) can be green with annotative “+” markings on the borders of S(i). In another example, the segment S(i) can be filled with green-colored “+” labels over the entirety of S(i). - Although the articulation interface representation generator 220 is illustrated in FIG. 2 as including one or both of the color-coded representation 221 and the annotative representation 222, other visual representations can also be included in the articulation interface representation for the purpose of differentiating various spatial relationships between the segments Y(i) and X(i). For example, the color-coded representation 221 can be augmented or substituted with a pattern, a texture, or an effect such as shadow, edge enhancement, lighting, or gradient. -
FIGS. 3A-D illustrate examples of an articulation interface representation 350 between a femoral surface and a prosthesis surface, when a prosthesis model 320 is positioned against a distal femur model 310 at a first position. The distal femur model 310 includes a femoral surface representation 312, and the prosthesis model 320 includes a prosthesis surface representation 322. The models 310 and 320, the surface representations 312 and 322, and the articulation interface representation 350 can be generated before and during the surgery and prosthesis placement, and displayed on a monitor, such as by using the prosthesis alignment system 100 or its various embodiments discussed in this document. -
FIG. 3A illustrates the position and orientation of the prosthesis model 320 relative to the femoral model 310, when the two models are positioned against each other. The femoral model 310 and the femoral surface representation 312 each include a data set of shape data, appearance data, or data representing other morphological characteristics of the distal femur or the femoral surface, respectively. The shape data may include geometric characteristics such as landmarks, surfaces, or boundaries of three-dimensional image objects. The appearance data may include both geometric characteristics and intensity information. - The
prosthesis model 320 and the prosthesis surface representation 322 each include a data set having a data format or modality comparable to the distal femur model 310 and the femoral surface representation 312. As illustrated in FIG. 3A, the femoral model 310 and the femoral surface representation 312 are represented as computer-simulated 2D contour models of the distal femur and the femoral surface. Similarly, the prosthesis model 320 and the prosthesis surface representation 322 are represented as computer-simulated 2D contour models. - The position of the
prosthesis model 320 relative to the femoral model 310, or the position of the prosthesis surface representation 322 relative to the femoral surface representation 312, can be described using angles of flexion or extension. A wider angle indicates a higher degree of misalignment between various portions of the prosthesis surface representation 322 and the corresponding portions of the femoral surface representation 312. As illustrated in FIG. 3A, three regions of misalignment can be identified: region 340A, where the portion 322A of the prosthesis surface representation is above or “outside” the corresponding portion 312A of the femoral surface representation; region 340B, where the portion 322B of the prosthesis surface representation is below or “inside” the corresponding portion 312B of the femoral surface representation; and region 340C, where the portion 322C of the prosthesis surface representation is substantially aligned with or “matches” the corresponding portion 312C of the femoral surface representation. -
FIGS. 3B-D illustrate different angles of view of the distal femur model 310, the prosthesis model 320 positioned against the distal femur model 310, and the articulation interface representation 350 disposed over the prosthesis model 320. In particular, FIG. 3B illustrates a top view, FIG. 3C illustrates a side view, and FIG. 3D illustrates a view at a specified angle. As illustrated in FIGS. 3B-D, the articulation interface representation 350 is a color-coded representation that includes three base colors to represent three types of relative positions between various portions of the prosthesis surface representation 322 and the corresponding portions of the femoral surface representation 312. Alternatively, the articulation interface representation 350 can be a black-and-white or grayscale representation, in which various shades of gray illustrate the articulation interface representation. FIGS. 3B-D are shown in grayscale for ease of viewing. - Corresponding to
region 340A, a first color is rendered to region 354 to indicate that the portion 322A of the prosthesis surface representation is above or “outside” the corresponding portion 312A of the femoral surface representation. As shown in FIGS. 3B-3D, the first color can include a dark shade pattern. In another example, the first color can include green. Corresponding to region 340B, a second color is rendered to region 352 to indicate that the portion 322B of the prosthesis surface representation is below or “inside” the corresponding portion 312B of the femoral surface representation. As shown in FIGS. 3B-3D, the second color can include a spotted pattern. In another example, the second color can include red. Corresponding to region 340C, a third color is rendered to region 356 to indicate that the portion 322C of the prosthesis surface representation is substantially aligned with the corresponding portion 312C of the femoral surface representation. As shown in FIGS. 3B-3D, the third color can include a dotted pattern. In another example, the third color can include gray. In yet another example, different colors, shades, textural representations, grays, black-and-white coloring, or the like can be used to represent different regions. - For a particular colored region such as 352, 354, or 356, variations of the base color rendered to that region can be applied to sub-regions within the colored region to differentiate variations in regional similarities, such as the distance between the portions of the articulation surface and the portions of the femoral surface along a particular direction in the coordinate system. For example, as illustrated in
FIG. 3A, within the region 340A, the distance between 322A and 312A varies across sub-regions. Accordingly, as shown in FIGS. 3B-D, the first color with higher saturation or lower brightness (e.g., darker) can be rendered to the sub-regions of the region 354 where the dissimilarity or distance between 322A and 312A is greater than in other areas of 354. - The
articulation interface representation 350 includes three annotations that represent three types of relative positions between various portions of the prosthesis surface representation 322 and the corresponding portions of the femoral surface representation 312. As illustrated in FIGS. 3B-D, the three annotated regions 352, 354, and 356 correspond to the regions 340B, 340A, and 340C, respectively. -
FIGS. 4A-D illustrate examples of an articulation interface representation 450 between a femoral surface 312 and a prosthesis surface 322, when a prosthesis model 320 is positioned against a distal femur model 310 at a second position different from the first position shown in FIGS. 3A-D. The change from the position shown in FIGS. 3A-D to the different position shown in FIGS. 4A-D can be achieved by processing a user command, such as via the user input module 121. The user command can also selectively scale, translate, rotate, or perform any combination of manipulations over one or both of the target bone model 310 and the prosthesis model 320. - Compared to that in
FIG. 3A, the flexion angle shown in FIG. 4A is wider, and the prosthesis surface representation 322 is more misaligned with the femoral surface representation 312. The resulting three regions of misalignment 440A-C therefore have different sizes and shapes than the regions 340A-C. For example, there is a higher degree of misalignment in the “above” direction between the surfaces 422A and 412A in region 440A than between the surfaces 322A and 312A in region 340A. Likewise, there is a higher degree of misalignment in the “below” direction between the surfaces 422B and 412B in region 440B than between the surfaces 322B and 312B in region 340B. Additionally, the substantial alignment region 440C has a smaller size than the region 340C. - Alternatively, as illustrated in
FIGS. 4B-D, the articulation interface representation 450 can be a black-and-white or grayscale representation. FIGS. 4B-4D are shown in grayscale for ease of viewing. Consistent with the changes in regional misalignment shown in FIG. 4A, FIGS. 4B-D each represent the changes in size and shape of a second colored region 452 corresponding to region 440B, a first colored region 454 corresponding to region 440A, and a third colored region 456 corresponding to region 440C. As shown in FIGS. 4B-4D, the first, second, and third colors can include a dark shade pattern, a spotted pattern, and a dotted pattern, respectively. In another example, the first, second, and third colors can include green, red, and gray, respectively. In yet another example, different colors, shades, textural representations, grays, black-and-white coloring, or the like can be used to represent different regions. FIGS. 4B-D are views from different angles of the articulation interface representation 450, each including three annotations that represent three types of relative positions (above, below, or substantial alignment) between various portions of the prosthesis surface representation 322 and the corresponding portions of the femoral surface representation 312. - Variations of the base color can be applied to the sub-regions of the
regions 452, 454, and 456. As illustrated in FIGS. 4B-D, base colors with different amounts of saturation or brightness can be used according to the similarity, such as the distance between the portions of the surface 312 and the corresponding portions of the surface 322 along a particular direction in a coordinate system. For example, corresponding to the greater distance between 422A and 412A in region 440A compared to that between 322A and 312A in region 340A, a wider sub-region within the region 454 can be rendered in the first color with higher saturation or lower brightness (e.g., darker). Likewise, corresponding to the greater distance between 422B and 412B in region 440B compared to that between 322B and 312B in region 340B, a wider sub-region within the region 452 can be rendered in the second color with lower saturation or higher brightness (e.g., brighter). Alternatively, when the articulation interface representation is shown in grayscale, as in FIGS. 3B-D and FIGS. 4B-D, a darker shade, or a pattern with markings (e.g., dots) of higher density, can be used to indicate greater distance between the portions of the surface 312 and the corresponding portions of the surface 322 along a particular direction in a coordinate system. -
FIG. 5 is a flowchart that illustrates an example of a method 500 for aligning a prosthesis surface to a target bone surface. The method 500 can be used in orthopedic surgeries such as joint resurfacing arthroplasty. In an embodiment, the system 100, or its various embodiments discussed in this document, can perform the method 500. - The method 500 begins with receiving a target bone model at 510, such as by using a
model receiver module 112. The target bone model can include an articulation surface of the target bone. Examples of the target bone can include an acetabulum, a proximal or distal extremity of a femur, a proximal or distal extremity of a tibia, or any other bone in a body. The target bone can be surgically prepared to host a prosthesis component. At least a portion of the target bone, such as the articulation surface, undergoes surgical alteration, repair, or resection, such that the prosthesis can be securely placed against the target bone to replace the articulation surface of the target bone. - The target bone model can include a data set characterizing geometric characteristics including position, shape, contour, or appearance of the target bone surface. The data set can also include intensity information. In various examples, the target bone model can include at least one medical image such as an X-ray, an ultrasound image, a computed tomography (CT) scan, a magnetic resonance (MR) image, a positron emission tomography (PET) image, a single-photon emission computed tomography (SPECT) image, or an arthrogram, among other 2D or 3D images. The target bone model can be received from a patient database, or from an imaging system or an image acquisition system. In an example, the target bone model is calculated intra-operatively by collecting a cloud of points from the target bone using an optically tracked stylus (pointer) or similar device. In this example, the surgeon samples the surface of the target bone to enable the computing system to calculate a target bone model from the collected points. In some examples, multiple methods of creating a target bone model can be combined, such as fitting a database model to an actual bone using collected points.
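As a rough sketch of deriving a bone model from stylus-collected points, the snippet below fits a sphere to a point cloud by linear least squares, which suits roughly spherical anatomy such as a femoral head. This is a deliberate simplification of the surface reconstruction described above; the function name and the use of NumPy are illustrative assumptions, not part of the patent.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit to a cloud of surface points, such as
    points collected intra-operatively with a tracked stylus.

    Solves |p|^2 = 2 p.c + (r^2 - |c|^2) as a linear system for the
    center c and radius r, which is exact when the points lie on a sphere."""
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])   # unknowns: cx, cy, cz, k
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)            # k = r^2 - |c|^2
    return center, radius
```

A real system would combine such fits with statistical shape models or freeform surface reconstruction, as the text notes.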
- At 520, a prosthesis model can be received such as by using the
user input module 121. The prosthesis model includes a data set representing a surface of a prosthesis, which is sized, shaped, or configured to at least partially replace the articulation surface of the target bone. The prosthesis model can be in a format of a parametric model, a statistical model, a shape-based model, a volumetric model, an elastic model, a geometric spine model, or a finite element model. In some embodiments, the prosthesis model has a data format or modality comparable to the target bone model. - At 530, information of relative positions between the target bone model and the prosthesis model is received, such as via a user input module that enables a system user to interactively select or deselect one or both of the target bone model or the prosthesis model, and alter one or more properties thereof. The received relative positions can include an indication of a position of the prosthesis surface representation relative to the target bone surface representation when the prosthesis surface is positioned against the target bone surface. In an example, the position of the target bone surface representation and the position of the prosthesis surface representation can be characterized by their respective coordinates in a common coordinate system. In some examples, an indication of a change in positions of one or both of the target bone surface representation and the prosthesis surface representation can be received. In addition to the indication of position change, other properties of the models that can be altered include translation, rotation, reflection, scaling, stretching, shrinking, or any other manipulations or any combination of the manipulations over one or both of the target bone model and the prosthesis model. In some examples, manipulations of the prosthesis model, such as scaling, stretching, or shrinking, are restricted based on the actual prostheses available for implant.
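The restriction of prosthesis scaling to actually available implants can be illustrated by snapping a requested scale factor to the nearest entry in a size catalog. The catalog values and function name below are hypothetical, for illustration only.

```python
def snap_to_catalog(requested_scale, catalog_scales=(0.9, 1.0, 1.1, 1.2)):
    """Restrict a user-requested scaling of the prosthesis model to the
    discrete sizes actually manufactured.

    Returns the nearest available scale rather than applying an arbitrary
    continuous scale factor, so the on-screen model always corresponds to
    an implant that can really be placed."""
    return min(catalog_scales, key=lambda s: abs(s - requested_scale))
```

A user interface would call this whenever the user drags a scale handle, so free manipulation of the bone model remains possible while the prosthesis model stays constrained.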
- At 540, an articulation interface representation is generated, such as by using the
articulation interface generator 113. The articulation interface representation indicates spatial misalignment between one or more portions of the prosthesis surface and the respective one or more portions of the target bone surface, when the two models are positioned against each other along a specified direction. The articulation interface representation (S) can have a data format similar to the target bone surface representation (X) or the prosthesis surface representation (Y). In an example, the articulation interface representation (S) can be a computer-generated 2D or 3D image with a size comparable to the size of the target bone surface representation or the size of the prosthesis surface representation. The articulation interface representation can include a color-coded representation, an annotative representation, or other markup representations or overlays of two or more different representations to assist the surgeon in reliably positioning the prosthesis onto the target bone with improved accuracy and consistency. An alignment index can also be computed using the articulation interface representation. Example methods of generating the articulation interface representation are discussed below, such as with reference to FIG. 6. - At 550, one or more of the target bone model, the prosthesis model, and the articulation interface representation can be displayed, such as on a monitor or other display module. The graphical representation of one or both of the target bone model or the prosthesis model can be overlaid with the articulation interface representation. The display module or the monitor can also interactively reflect the user commands, such as selecting, positioning, scaling, translating, rotating, reflecting, stretching, shrinking, or any other manipulations or any combination of the manipulations over one or more of the target bone model, the prosthesis model, or the articulation interface representation. 
The display can also include a plurality of specified angles of view or projections of the target bone model, the prosthesis model, or the articulation interface representation. The computed alignment index can also be displayed and used by the system user to adjust the relative positions of the prosthesis surface representation and the target bone surface representation to achieve a desired alignment.
-
FIG. 6 is a flowchart that illustrates an example of a method 600 for generating an articulation interface representation and determining a desirable alignment between the prosthesis model and the target bone model. The method 600 can be an example of the articulation interface generation process 540 illustrated in FIG. 5. - Using the received target bone surface representation (X), the prosthesis surface representation (Y), and the indication of the relative positions between the two surface representations, such as those provided in 510-530, regional misalignments (Δ) between various regions of the two surfaces can be calculated at 610, such as by using the
misalignment calculator 210. The regional misalignment (Δ) can be a quantified representation of the discrepancies between the shapes, morphologies, or topologies of multiple regions created on the target bone surface representation (X) and the prosthesis surface representation (Y). In an embodiment, the regional misalignment (Δ) can be computed by partitioning the target bone surface representation (X) into a plurality of segments {X(i)} (i=1, 2, . . . , N), and in a similar fashion partitioning the prosthesis surface representation (Y) into a plurality of segments {Y(i)} (i=1, 2, . . . , N). The segments, {X(i)} and {Y(i)}, can be created with pre-determined size and shape irrespective of the anatomical or morphologic structures of the target bone surface or the prosthesis surface. For example, when the target bone model (X) and the prosthesis model (Y) are each represented by digital images, segments {X(i)} and {Y(i)} can be sized to contain a specified number of pixels of the respective digital images. Features, such as geometric features, morphological features, statistical features, or intensity-based features, can be extracted from each target bone surface segment X(i) and the corresponding prosthesis surface segment Y(i), and a regional similarity measure D(X(i), Y(i)) between the segments X(i) and Y(i) can be computed. Examples of D(X(i), Y(i)) include the L1 norm, the L2 norm (Euclidean distance), the infinity norm, or other distance-based measures in a vector space; or the correlation coefficient, mutual information, or ratio-image uniformity. D(X(i), Y(i)) can also be computed as a signed distance between X(i) and Y(i) along a particular direction in the coordinate system. The sign of D(X(i), Y(i)) can indicate the relative position of Y(i) with respect to X(i) when Y(i) and X(i) are positioned against each other. For example, the distance is positive if Y(i) is above X(i) by at least a specified amount, and negative if Y(i) is below X(i) by at least a specified amount. 
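A simplified sketch of this partitioning and signed-distance computation follows, with both surfaces modeled as height maps sampled on a common grid along the alignment direction. That height-map assumption, the block size, and the function name are illustrative choices; the patent's surfaces are general 2D/3D representations.

```python
import numpy as np

def regional_misalignment(bone_surface, prosthesis_surface, block=4):
    """Partition two surfaces into equally sized segments {X(i)} and {Y(i)}
    and compute a signed regional distance D(X(i), Y(i)) per segment.

    Each segment is a block x block patch of the common sampling grid,
    sized irrespective of anatomical structure; D is the mean signed
    height difference: positive where the prosthesis surface lies above
    the bone surface, negative where it lies below."""
    X = np.asarray(bone_surface, float)
    Y = np.asarray(prosthesis_surface, float)
    h, w = X.shape
    D = np.empty((h // block, w // block))
    for r in range(h // block):
        for c in range(w // block):
            xs = X[r*block:(r+1)*block, c*block:(c+1)*block]
            ys = Y[r*block:(r+1)*block, c*block:(c+1)*block]
            D[r, c] = (ys - xs).mean()   # signed distance along the direction
    return D
```

The resulting grid of signed values is exactly what the color-coded or annotative rendering consumes, one value per interface segment S(i).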
- At 620, a visual representation of an articulation interface can be generated using the calculated regional misalignments. The visual representation can include a color-coded representation, an annotative representation, or a combination of the two, such as a colored annotation. The articulation interface representation (S) can be a computer-generated 2D or 3D image with a size and shape comparable to the target bone surface representation (X) or the prosthesis surface representation (Y). Using a similar segmentation process, the articulation interface representation (S) can be segmented into a plurality of segments {S(i)} (i=1, 2, . . . , N), with the segment S(i) corresponding to the segment of the target bone surface representation X(i) and the segment of the prosthesis surface representation Y(i).
- In a color-coded representation, a specified color, C(i), can be rendered to a segment of the articulation interface representation, S(i), if the similarity measure D(X(i), Y(i)) meets a specified criterion, such as exceeding a specified threshold or falling within a specified range. In an example where the similarity measure D(X(i), Y(i)) is computed as a signed distance between X(i) and Y(i) along a particular direction in the coordinate system, a first color Ca (e.g., a green color) can be rendered to segment S(i) if D(X(i), Y(i)) is positive, which indicates that the segment Y(i) is misaligned in a first direction relative to the corresponding segment X(i), such as being positioned above X(i) by at least a specified amount. A different second color Cb (e.g., a red color) can be rendered to a segment S(j) if D(X(j), Y(j)) is negative, which indicates that the segment Y(j) is misaligned in a different second direction relative to the corresponding segment X(j), such as being positioned below X(j) by at least a specified amount. Additionally, a third color C0 (e.g., a gray color), different from Ca and Cb, can be rendered to a segment S(k) if the magnitude of D(X(k), Y(k)), ∥D(X(k), Y(k))∥, is below a specified threshold, which indicates that the prosthesis surface segment Y(k) is substantially aligned with the corresponding target bone surface segment X(k).
- In some examples, a set of colors {Ca 1, Ca 2, . . . , Ca P} can be used to represent various degrees of misalignment in the first direction between X(i) and Y(i), such as when Y(i) is positioned above X(i). Similarly, a different set of colors {Cb 1, Cb 2, . . . , Cb Q} can be used to represent various degrees of misalignment in the second direction between X(i) and Y(i), such as when Y(i) is positioned below X(i). A correspondence, such as a lookup table, between the multiple colors in the color sets and the value ranges of D(X(i), Y(i)) can be established. The similarity measure D(X(i), Y(i)) for a segment S(i) can be compared to multiple threshold values or range values, and one color can be selected from the color sets {Ca 1, Ca 2, . . . , Ca P} or {Cb 1, Cb 2, . . . , Cb Q} and rendered to the segment S(i). In various examples, the color-coded representation can be augmented or substituted by a pattern, a texture, or a style with effects such as shadow, edge enhancement, lighting, or gradient.
- In an annotative representation, a specified annotation, such as one or a combination of different forms of annotations including signs, labels, lines, texts, or any other markings, can be rendered to an articulation interface representation segment S(i) according to the similarity measure D(X(i), Y(i)). Similar to the color-coded representation, a first annotation Aa can be rendered to segment S(i) if D(X(i), Y(i)) is positive, which indicates that the prosthesis surface segment Y(i) is above the corresponding target bone surface segment X(i) by at least a specified amount. A different second annotation Ab can be rendered to a different segment S(j) if D(X(j), Y(j)) is negative, which indicates that the segment Y(j) is below the segment X(j) by at least a specified amount. Additionally, a third annotation A0 can be rendered to a segment S(k) if ∥D(X(k), Y(k))∥ is below a specified threshold, indicating that the prosthesis surface segment Y(k) is substantially aligned with the target bone surface segment X(k). Multiple annotations in an annotation set can be used to differentiate various degrees of misalignment between segments Y(i) and X(i), such as by comparing D(X(i), Y(i)) to multiple threshold values or range values, similar to the method discussed above for color-coded representations.
- At 630, a user input can be received, such as via a user input module. The user input can include an indication of selecting or deselecting one or more of the target bone surface representation, the prosthesis surface representation, or the articulation interface representation. The user input can also include a change in positions of one or more of the target bone surface representation, the prosthesis surface representation, or the articulation interface representation.
- Relative position change between the target bone model and the prosthesis model is then detected at 640. In an example, both the target bone model and the prosthesis model are selected, such that their positions are concurrently changed while the relative positions or orientations between them are preserved. In another example, only one of the target bone model or the prosthesis model is selected and repositioned, and a relative position change is detected at 640.
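One way to sketch the relative-change detection at 640 is to compare bone-to-prosthesis relative poses expressed as 4x4 homogeneous transforms. This pose representation, the tolerance, and the function names are illustrative assumptions; the patent does not prescribe how positions are encoded.

```python
import numpy as np

def relative_pose(T_bone, T_prosthesis):
    """Relative pose of the prosthesis model with respect to the target
    bone model, each given as a 4x4 homogeneous transform in a common
    coordinate system."""
    return np.linalg.inv(T_bone) @ T_prosthesis

def relative_change_detected(T_bone0, T_pros0, T_bone1, T_pros1, tol=1e-9):
    """True if the bone-to-prosthesis relative pose differs between two
    instants, i.e., the regional misalignments must be recalculated.

    Moving both models together preserves the relative pose, so no
    recomputation of the articulation interface is triggered."""
    return not np.allclose(relative_pose(T_bone0, T_pros0),
                           relative_pose(T_bone1, T_pros1), atol=tol)
```

This captures the branch in the flowchart: concurrent repositioning of both selected models leaves the interface unchanged, while repositioning only one model triggers recalculation at 610.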
- If a relative change between the models is detected at 640, then at least some segments of the articulation interface can have a changed regional similarity measure D(X(i), Y(i)). The regional misalignments can then be recalculated at 610. However, if no relative change is detected at 640, then an alignment index is computed at 650. The alignment index can be a measure of an overall disconformity between the target bone surface representation and the prosthesis surface representation. In an example, the alignment index can be computed as the square root of the sum of squared regional similarity measures over all segments. The alignment index can also include statistical measurements of disconformities such as maximum, minimum, average, median, range, histogram, or other distributional representations of the disconformities among multiple regions of the target bone surface and the articulation interface.
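The alignment index just described can be sketched directly. The square-root-of-sum-of-squares form and the simple statistics follow the text; the function name and the choice of which statistics to report are assumptions.

```python
import math

def alignment_index(regional_D):
    """Overall disconformity between the target bone surface and the
    prosthesis surface, computed as the square root of the sum of squared
    regional similarity measures over all segments, together with simple
    distributional statistics of the per-segment magnitudes."""
    mags = [abs(d) for d in regional_D]
    index = math.sqrt(sum(d * d for d in regional_D))
    stats = {"max": max(mags), "min": min(mags),
             "mean": sum(mags) / len(mags)}
    return index, stats
```

A lower index indicates better overall conformity, so the user can iteratively reposition the prosthesis model until the displayed index reaches an acceptable value.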
- The user input detected at 630 can also include an indication of the angles of view or projections of the target bone model, the prosthesis model, or the articulation interface representation. For example, as illustrated in
FIGS. 3A-D and FIGS. 4A-D, multiple angles of view of a computer-generated 3D image of the distal femur model and the associated femoral surface representation can be generated and displayed, including a top view, a side view, a front view, or a view from any perspective obtained after a specified rotation. If no change of viewing angle is detected at 660, then the generated articulation interface representation can be displayed at 550. However, if a change of viewing angle (such as by rotating a model) is detected at 660, then the articulation interface representation can be transformed at 670 in accordance with the change of angle of view. The transformed articulation interface can be displayed at 550. -
FIG. 7 is a block diagram that illustrates an example of a machine in the form of a computer system 700 within which instructions for causing the computer system to perform any one or more of the methods discussed herein may be executed. In various embodiments, the machine can operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a PDA, a cellular telephone, a web appliance, a network router, switch, or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. - The
example computer system 700 includes a processor 702 (such as a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 704, and a static memory 706, which communicate with each other via a bus 708. The computer system 700 may further include a video display unit 710 (such as a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alpha-numeric input device 712 (such as a keyboard), a user interface (UI) navigation device (or cursor control device) 714 (such as a mouse), a disk drive unit 716, a signal generation device 718 (e.g., a speaker), and a network interface device 720. - The
disk drive unit 716 includes a machine-readable storage medium 722 on which is stored one or more sets of instructions and data structures (e.g., software) 724 embodying or used by any one or more of the methods or functions described herein. The instructions 724 may also reside, completely or at least partially, within the main memory 704, the static memory 706, and/or within the processor 702 during execution thereof by the computer system 700, the main memory 704 and the processor 702 also constituting machine-readable media. In an example, the instructions 724 stored in the machine-readable storage medium 722 include instructions causing the computer system 700 to receive a target bone model, such as a first data set representing a target bone surface, and to receive a prosthesis model, such as a second data set representing a prosthesis surface. The prosthesis component can be configured to be positioned against the target bone, such as to partially replace the articulation surface of the target bone. The machine-readable storage medium 722 can also store instructions 724 that cause the computer system 700 to generate an articulation interface representation, such as one or a combination of a color-coded representation, an annotative representation, or other markup representations or overlays of two or more different representations, using both the target bone surface representation and the prosthesis surface representation, and to calculate an alignment index that represents overall disconformity between the target bone surface and the articulation interface. - To direct the
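The disclosure leaves the color-coding scheme and the alignment-index formula open. As one hedged sketch, assuming both surfaces are point sets, that each bone point is colored by its gap to the nearest prosthesis point, and that the index is the mean gap (the threshold values and the mean are illustrative choices, not terms of the patent):

```python
def articulation_interface(bone_pts, prosthesis_pts, thresholds=(0.5, 1.0)):
    """Build a color-coded articulation interface representation and
    an alignment index summarizing overall disconformity.

    bone_pts, prosthesis_pts: iterables of (x, y, z) tuples.
    thresholds: illustrative gap cutoffs (in model units) separating
    the green / yellow / red color bands.
    """
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    # Gap from each bone-surface point to the nearest prosthesis point.
    gaps = [min(dist(b, p) for p in prosthesis_pts) for b in bone_pts]
    lo, hi = thresholds
    colors = ['green' if g < lo else 'yellow' if g < hi else 'red'
              for g in gaps]
    # One possible alignment index: the mean gap over the interface.
    alignment_index = sum(gaps) / len(gaps)
    return colors, alignment_index
```

A production system would use a spatial index (e.g., a k-d tree) rather than the brute-force nearest-point search shown here, but the representation-plus-index output is the same.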
computer system 700 to generate the articulation interface representation, the machine-readable storage medium 722 may further store instructions 724 that cause the computer system 700 to receive a user input including an indication of a change in position of one or both of the prosthesis surface model or the target bone surface model, and to update the articulation interface representation in response to the user input including the indication of the change in relative position between the prosthesis surface model and the target bone surface model. The instructions in the machine-readable storage medium 722 may also cause the computer system 700 to generate a representation illustrating one or more of the target bone model, the prosthesis model, and the articulation interface representation. - While the machine-
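The update step above amounts to re-posing one model and regenerating the interface representation. A minimal sketch of the re-posing half, assuming the user input arrives as a translation offset applied to the prosthesis surface points (the function name and input form are assumptions for illustration):

```python
def reposition(points, offset):
    """Translate a surface model by a user-supplied (dx, dy, dz) offset.

    After the prosthesis surface model is moved, the articulation
    interface representation would be regenerated against the new
    pose and redisplayed.
    """
    dx, dy, dz = offset
    return [(x + dx, y + dy, z + dz) for x, y, z in points]
```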
readable medium 722 is shown in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions or data structures. The term “machine-readable storage medium” shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methods of the present invention, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media include non-volatile memory, including by way of example, semiconductor memory devices (e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. A “machine-readable storage medium” shall also include devices that may be interpreted as transitory, such as register memory, processor cache, and RAM, among others. The definitions provided herein of machine-readable medium and machine-readable storage medium are applicable even if the machine-readable medium is further characterized as being “non-transitory.” For example, any addition of “non-transitory,” such as non-transitory machine-readable storage medium, is intended to continue to encompass register memory, processor cache and RAM, among other memory devices. - In various examples, the
instructions 724 may further be transmitted or received over a communications network 726 using a transmission medium. The instructions 724 may be transmitted using the network interface device 720 and any one of a number of well-known transfer protocols (e.g., HTTP). Examples of communication networks include a LAN, a WAN, the Internet, mobile telephone networks, plain old telephone (POTS) networks, and wireless data networks (e.g., Wi-Fi and WiMAX networks). The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such software. - The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as “examples.” Such examples can include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.
- In the event of inconsistent usages between this document and any documents so incorporated by reference, the usage in this document controls.
- In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In this document, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, composition, formulation, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.
- Method examples described herein can be machine or computer-implemented at least in part. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, in an example, the code can be tangibly stored on one or more volatile, non-transitory, or non-volatile tangible computer-readable media, such as during execution or at other times. Examples of these tangible computer-readable media can include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like.
- The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to comply with 37 C.F.R. § 1.72(b), to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
Claims (42)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/540,850 US20190365471A1 (en) | 2014-03-05 | 2019-08-14 | Computer-aided prosthesis alignment |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201461948102P | 2014-03-05 | 2014-03-05 | |
PCT/US2015/018711 WO2015134592A1 (en) | 2014-03-05 | 2015-03-04 | Computer-aided prosthesis alignment |
US201615123700A | 2016-09-06 | 2016-09-06 | |
US16/540,850 US20190365471A1 (en) | 2014-03-05 | 2019-08-14 | Computer-aided prosthesis alignment |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2015/018711 Continuation WO2015134592A1 (en) | 2014-03-05 | 2015-03-04 | Computer-aided prosthesis alignment |
US15/123,700 Continuation US10420611B2 (en) | 2014-03-05 | 2015-03-04 | Computer-aided prosthesis alignment |
Publications (1)
Publication Number | Publication Date |
---|---|
US20190365471A1 true US20190365471A1 (en) | 2019-12-05 |
Family
ID=54016238
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/123,700 Active 2035-03-18 US10420611B2 (en) | 2014-03-05 | 2015-03-04 | Computer-aided prosthesis alignment |
US14/638,279 Active 2038-02-22 US10470821B2 (en) | 2014-03-05 | 2015-03-04 | Computer-aided prosthesis alignment |
US16/540,850 Pending US20190365471A1 (en) | 2014-03-05 | 2019-08-14 | Computer-aided prosthesis alignment |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/123,700 Active 2035-03-18 US10420611B2 (en) | 2014-03-05 | 2015-03-04 | Computer-aided prosthesis alignment |
US14/638,279 Active 2038-02-22 US10470821B2 (en) | 2014-03-05 | 2015-03-04 | Computer-aided prosthesis alignment |
Country Status (6)
Country | Link |
---|---|
US (3) | US10420611B2 (en) |
EP (1) | EP3113711A4 (en) |
JP (1) | JP2017511726A (en) |
CN (2) | CN106470635B (en) |
AU (2) | AU2015227303B2 (en) |
WO (1) | WO2015134592A1 (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6121406B2 (en) | 2011-06-16 | 2017-04-26 | スミス アンド ネフュー インコーポレイテッド | Surgical alignment using criteria |
US20160296289A1 (en) * | 2013-03-15 | 2016-10-13 | Concepto Llc | Custom matched joint prosthesis replacement |
AU2015222825B2 (en) | 2014-02-28 | 2019-10-31 | Blue Belt Technologies, Inc. | System and methods for positioning bone cut guide |
JP2017506574A (en) | 2014-02-28 | 2017-03-09 | ブルー・ベルト・テクノロジーズ・インコーポレーテッド | System and method for positioning a bone cutting guide |
US10420611B2 (en) | 2014-03-05 | 2019-09-24 | Blue Belt Technologies, Inc. | Computer-aided prosthesis alignment |
EP3355259A1 (en) * | 2015-09-24 | 2018-08-01 | Fujifilm Corporation | Repair plan devising assistance system, method, and program |
IL245334B (en) * | 2016-04-21 | 2018-10-31 | Elbit Systems Ltd | Head wearable display reliability verification |
AU2017295728B2 (en) | 2016-07-15 | 2021-03-25 | Mako Surgical Corp. | Systems for a robotic-assisted revision procedure |
US11076919B1 (en) | 2017-05-19 | 2021-08-03 | Smith & Nephew, Inc. | Surgical tool position tracking and scoring system |
US10660707B2 (en) * | 2017-12-19 | 2020-05-26 | Biosense Webster (Israel) Ltd. | ENT bone distance color coded face maps |
US11890058B2 (en) | 2021-01-21 | 2024-02-06 | Arthrex, Inc. | Orthopaedic planning systems and methods of repair |
US11759216B2 (en) | 2021-09-22 | 2023-09-19 | Arthrex, Inc. | Orthopaedic fusion planning systems and methods of repair |
EP4275641A1 (en) * | 2022-05-12 | 2023-11-15 | Stryker European Operations Limited | Technique for visualizing a planned implant |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090209851A1 (en) * | 2008-01-09 | 2009-08-20 | Stryker Leibinger Gmbh & Co. Kg | Stereotactic computer assisted surgery method and system |
US20100324692A1 (en) * | 2007-04-17 | 2010-12-23 | Biomet Manufacturing Corp. | Method and Apparatus for Manufacturing an Implant |
US20120271599A1 (en) * | 2006-12-12 | 2012-10-25 | Perception Raisonnement Action En Medecine | System and method for determining an optimal type and position of an implant |
Family Cites Families (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5086401A (en) | 1990-05-11 | 1992-02-04 | International Business Machines Corporation | Image-directed robotic system for precise robotic surgery including redundant consistency checking |
US6205411B1 (en) * | 1997-02-21 | 2001-03-20 | Carnegie Mellon University | Computer-assisted surgery planner and intra-operative guidance system |
FR2816200A1 (en) * | 2000-11-06 | 2002-05-10 | Praxim | DETERMINING THE POSITION OF A KNEE PROSTHESIS |
US20030153978A1 (en) | 2002-02-08 | 2003-08-14 | Whiteside Biomechanics, Inc. | Apparatus and method of ligament balancing and component fit check in total knee arthroplasty |
US9155544B2 (en) | 2002-03-20 | 2015-10-13 | P Tech, Llc | Robotic systems and methods |
US7559931B2 (en) | 2003-06-09 | 2009-07-14 | OrthAlign, Inc. | Surgical orientation system and method |
US20090210059A1 (en) * | 2005-04-06 | 2009-08-20 | Mccombe Peter Francis | Vertebral Disc Prosthesis |
GB0610079D0 (en) | 2006-05-22 | 2006-06-28 | Finsbury Dev Ltd | Method & system |
CN101310673A (en) * | 2007-05-22 | 2008-11-26 | 中国科学院理化技术研究所 | Plug and play micro multiparameter recorder for recording human physiology information |
JP2009056299A (en) * | 2007-08-07 | 2009-03-19 | Stryker Leibinger Gmbh & Co Kg | Method of and system for planning surgery |
WO2009105665A1 (en) | 2008-02-20 | 2009-08-27 | Mako Surgical Corp. | Implant planning using corrected captured joint motion information |
RU2013110140A (en) * | 2010-08-13 | 2014-09-20 | Смит Энд Нефью, Инк. | SYSTEMS AND METHODS OF OPTIMIZATION OF PARAMETERS OF ORTHOPEDIC PROCEDURES |
CN103167847A (en) | 2010-08-25 | 2013-06-19 | 史密夫和内修有限公司 | Intraoperative scanning for implant optimization |
US20120276509A1 (en) * | 2010-10-29 | 2012-11-01 | The Cleveland Clinic Foundation | System of preoperative planning and provision of patient-specific surgical aids |
EP2685924B1 (en) * | 2011-03-17 | 2016-10-26 | Brainlab AG | Method for preparing the reconstruction of a damaged bone structure |
CN106943216B (en) * | 2011-04-06 | 2019-12-31 | 德普伊新特斯产品有限责任公司 | Instrument assembly for implanting revision hip prosthesis |
JP6073875B2 (en) * | 2011-06-22 | 2017-02-01 | シンセス・ゲーエムベーハーSynthes GmbH | Bone maneuvering assembly with position tracking system |
US9572682B2 (en) | 2011-09-29 | 2017-02-21 | Arthromeda, Inc. | System and method for precise prosthesis positioning in hip arthroplasty |
CA2871950C (en) * | 2012-05-24 | 2020-08-25 | Zimmer, Inc. | Patient-specific instrumentation and method for articular joint repair |
EP2916778B1 (en) * | 2012-11-09 | 2021-08-11 | Blue Belt Technologies, Inc. | Systems for navigation and control of an implant positioning device |
US10420611B2 (en) | 2014-03-05 | 2019-09-24 | Blue Belt Technologies, Inc. | Computer-aided prosthesis alignment |
EP3389568A2 (en) | 2015-12-17 | 2018-10-24 | Materialise N.V. | Pre-operative determination of implant configuration for soft-tissue balancing in orthopedic surgery |
-
2015
- 2015-03-04 US US15/123,700 patent/US10420611B2/en active Active
- 2015-03-04 CN CN201580023213.9A patent/CN106470635B/en active Active
- 2015-03-04 JP JP2016555700A patent/JP2017511726A/en active Pending
- 2015-03-04 AU AU2015227303A patent/AU2015227303B2/en active Active
- 2015-03-04 US US14/638,279 patent/US10470821B2/en active Active
- 2015-03-04 CN CN201911387502.3A patent/CN111150489B/en active Active
- 2015-03-04 EP EP15758908.6A patent/EP3113711A4/en active Pending
- 2015-03-04 WO PCT/US2015/018711 patent/WO2015134592A1/en active Application Filing
-
2019
- 2019-08-14 US US16/540,850 patent/US20190365471A1/en active Pending
-
2020
- 2020-01-06 AU AU2020200069A patent/AU2020200069B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120271599A1 (en) * | 2006-12-12 | 2012-10-25 | Perception Raisonnement Action En Medecine | System and method for determining an optimal type and position of an implant |
US20100324692A1 (en) * | 2007-04-17 | 2010-12-23 | Biomet Manufacturing Corp. | Method and Apparatus for Manufacturing an Implant |
US20090209851A1 (en) * | 2008-01-09 | 2009-08-20 | Stryker Leibinger Gmbh & Co. Kg | Stereotactic computer assisted surgery method and system |
Non-Patent Citations (1)
Title |
---|
Bernhardt_2012 (Comparison of Bone-Implant Contact and Bone-Implant Volume Between 2D-Histological Sections and 3D-SRuCT Slices, European Cells and Materials Vol. 23 2012 (pages 237 – 248) DOI: 10.22203/eCM.v023a18) (Year: 2012) * |
Also Published As
Publication number | Publication date |
---|---|
CN106470635A (en) | 2017-03-01 |
US20170014189A1 (en) | 2017-01-19 |
AU2015227303A1 (en) | 2016-09-29 |
US20150250553A1 (en) | 2015-09-10 |
US10420611B2 (en) | 2019-09-24 |
CN111150489A (en) | 2020-05-15 |
CN106470635B (en) | 2020-01-17 |
AU2020200069A1 (en) | 2020-01-30 |
JP2017511726A (en) | 2017-04-27 |
CN111150489B (en) | 2023-06-13 |
EP3113711A1 (en) | 2017-01-11 |
AU2020200069B2 (en) | 2022-02-24 |
EP3113711A4 (en) | 2017-10-18 |
US10470821B2 (en) | 2019-11-12 |
WO2015134592A1 (en) | 2015-09-11 |
AU2015227303B2 (en) | 2019-11-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2020200069B2 (en) | Computer-aided prosthesis alignment | |
JP7203148B2 (en) | Systems and methods for intraoperative image analysis | |
US11642174B2 (en) | Systems and methods for intra-operative image analysis | |
US20240096508A1 (en) | Systems and methods for using generic anatomy models in surgical planning | |
US9514533B2 (en) | Method for determining bone resection on a deformed bone surface from few parameters | |
US20230277331A1 (en) | Method and Apparatus for Implant Size Determination |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
AS | Assignment |
Owner name: BLUE BELT TECHNOLOGIES, INC., PENNSYLVANIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JARAMAZ, BRANISLAV;NIKOU, CONSTANTINOS;REEL/FRAME:050387/0973 Effective date: 20190725 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |