WO2021087483A1 - Predictive modeling platform for serial casting to correct orthopedic deformities - Google Patents

Predictive modeling platform for serial casting to correct orthopedic deformities Download PDF

Info

Publication number
WO2021087483A1
WO2021087483A1 (PCT/US2020/058597)
Authority
WO
WIPO (PCT)
Prior art keywords
deformity
anatomical alignment
cast
dimensional
force vectors
Prior art date
Application number
PCT/US2020/058597
Other languages
French (fr)
Inventor
Anuradha Dayal
Reza MONFAREDI
Kevin Cleary
Matthew Oetgen
Original Assignee
BabySteps Orthopedics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/864,099 external-priority patent/US11783102B2/en
Application filed by BabySteps Orthopedics Inc. filed Critical BabySteps Orthopedics Inc.
Priority to EP20812183.0A priority Critical patent/EP4052273A1/en
Publication of WO2021087483A1 publication Critical patent/WO2021087483A1/en

Links

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/40 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • Neuromuscular (NM) and musculoskeletal (MSK) disorders can be congenital or acquired.
  • Some musculoskeletal disorders can be treated in pediatric and adult populations through serial casting.
  • Treating musculoskeletal disorders using serial casting can be difficult because of limited access to care and the relative subjectivity of the treatment.
  • Common pediatric orthopedic deformities such as talipes equinovarus or congenital talipes equinovarus (commonly called clubfoot) have been difficult to treat.
  • For clubfoot, the current standard of care to correct this skeletal deformity is the Ponseti serial casting methodology, in which the deformity is corrected using a weekly series of casts.
  • FIG. 1A is a block diagram illustrating a system for modelling force vectors for serial casts to correct orthopedic deformities in accordance with various embodiments taught herein;
  • FIG. 1B is an illustration of the corrective force vectors for correcting an orthopedic deformity in accordance with various embodiments taught herein;
  • FIG. 1C is an illustration of a point cloud of a simulated cast superimposed on a point cloud of a real cast in accordance with various embodiments taught herein;
  • FIG. 1D is a data table comparing simulation data for a cast geometry as taught herein with actual data for a formed cast in accordance with various embodiments taught herein;
  • FIG. 2 is an illustration of a series of casts generated to correct the orthopedic deformity in accordance with various embodiments taught herein;
  • FIGS. 3A and 3B are illustrations of selecting reference points and determining the planes for correction in accordance with various embodiments taught herein;
  • FIGS. 3C, 3D, and 3E are illustrations of corrections of deformities using finite element analysis in accordance with various embodiments taught herein;
  • FIG. 4A is an illustration of the process of determining a next cast in a series of casts based on the three-dimensional image of the deformity and the boundary conditions in accordance with various embodiments taught herein;
  • FIG. 4B is an illustration of the post-processing in accordance with the various embodiments taught herein;
  • FIG. 4C is an illustration of force vectors at selected reference points in accordance with the various embodiments taught herein;
  • FIG. 5 is an illustration of finite element analysis modelling in accordance with various embodiments taught herein;
  • FIG. 6A illustrates a flowchart for image acquisition of the deformity and generation of a predicted virtual casting model in accordance with various embodiments taught herein;
  • FIG. 6B illustrates a flowchart for determining the number of stages and trajectory of the predicted movement in accordance with various embodiments taught herein; and
  • FIG. 7 illustrates an exemplary computing system for determining the force vectors for correcting a deformity in accordance with various embodiments taught herein.
  • Serial casting corrects a three-dimensional musculoskeletal deformity, neuromuscular deformity or both through periodic manipulation of the deformity. For example, in the case of clubfoot, the congenital musculoskeletal deformity is corrected through weekly manipulation of the deformity of the foot in a step-wise process using serial casts. Oftentimes correction occurs in multiple three-dimensional planes simultaneously. Conventionally, this manipulation is a very manual, labor-intensive process that relies on imprecise prediction of subsequent steps and outcomes.
  • Embodiments of the present disclosure include systems and methods for modelling a set of force vectors for a cast or a next cast in a series of casts to correct the musculoskeletal deformity, the neuromuscular deformity or both that overcome the difficulties and problems described herein with respect to conventional serial casting techniques.
  • the system includes a processor that executes machine readable instructions to receive an image of the deformity.
  • the processor generates a three-dimensional model of the deformity based on the image of the deformity.
  • the processor determines a deviation between a three-dimensional stereo-typical anatomical alignment and the three-dimensional model of the deformity (i.e., boundary conditions of the deformity) and generates a next intermediate anatomical alignment in a series of intermediate anatomical alignments for correcting the deviation based on a machine learning model trained on a plurality of prior patient records.
  • the processor also simulates a plurality of trial force vectors for a next cast to correct the deformity to the next intermediate anatomical alignment.
  • the simulation generates a projected anatomical alignment post-treatment with the next cast on the deformity using finite element analysis for each of the trial force vectors, and identifies a next force vector from the plurality of trial force vectors that minimizes the difference between the projected anatomical alignment and the next intermediate anatomical alignment, such that using the next force vector for the next cast results in an average differential in the range of 2 mm to 20 mm between a point cloud of the next cast and a point cloud of a prior cast for a similar correction based on prior patient data.
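The selection step described above can be sketched in Python with NumPy. This is an illustrative minimal version, not the patented implementation: it assumes each trial force vector already has a projected post-treatment point cloud (e.g., produced by the finite element step), and all names are hypothetical.

```python
import numpy as np

def mean_point_distance(cloud_a, cloud_b):
    # Mean Euclidean distance between corresponding points of two clouds.
    return float(np.linalg.norm(cloud_a - cloud_b, axis=1).mean())

def select_next_force_vector(trial_results, target_alignment):
    """Pick the trial force vector whose projected post-treatment alignment
    is closest to the next intermediate anatomical alignment."""
    best_vector, best_error = None, float("inf")
    for force_vector, projected_alignment in trial_results:
        error = mean_point_distance(projected_alignment, target_alignment)
        if error < best_error:
            best_vector, best_error = force_vector, error
    return best_vector, best_error

# Toy data: a target alignment and two candidate projections (units: mm).
target = np.zeros((100, 3))
trials = [
    ((1.0, 0.0, 0.0), target + 0.5),  # projection off by 0.5 mm per axis
    ((0.0, 1.0, 0.0), target + 0.1),  # projection off by 0.1 mm per axis
]
best, err = select_next_force_vector(trials, target)
```

The same loop structure applies whether the error metric is a point-cloud distance in millimetres or an angular difference in degrees, as described in the ranges above.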
  • the camera is an array of cameras, an x-ray imager, an ultrasound scanner, a three-dimensional scanner, a magnetic resonance imaging device, a CT scanner or a combination thereof.
  • the camera can be a depth detection camera or include a light detection and ranging (LIDAR) sensor that produces a point cloud of the deformity.
  • the system can determine boundary conditions for the deformity based on the three-dimensional image of the deformity and medical data from a plurality of persons.
  • the medical data from the plurality of persons can include information about stereo-typical anatomical alignment, anatomical alignment of prior patient deformities prior to correction, intermediate anatomical alignment of patient deformities during correction and final alignment of patient deformities post correction.
  • the medical data for prior patient deformities can be derived from discarded casts of prior patients used during their treatment and scans of the patient deformities during their treatment.
  • the medical data for prior patient deformities can be derived from medical records generated during patient visits such as notes from the doctor, 2-dimensional images such as x-rays, measurements of force vectors such as spasticity and the like.
  • the medical data can be converted to a point cloud that represents the three-dimensional model of the anatomical alignment(s) for the patient, the stereo-typical person or both.
  • the plurality of prior patient data includes an original image of the deformity of the patient(s), intermediate images of the deformity of the patient(s) and an image of the final corrected deformity of the patient(s).
  • the plurality of prior patient data includes data generated from a plurality of scans or images of prior discarded casts of patients used during treatment of the deformity.
  • the system can generate a series of forces at a plurality of nodes in the three-dimensional model of the deformity and determine a resulting change in the anatomical alignment of the deformity after a certain period of time based on the application of the series of forces at the nodes.
  • the system can determine which of the plurality of trial force vectors minimizes the deviation between the projected anatomical alignment and the next intermediate anatomical alignment based on a finite element analysis machine learning model, wherein the finite element analysis machine learning model is trained on medical data that can include a plurality of point clouds of next force vectors that were selected from the plurality of trial force vectors during treatments in prior patients.
  • the system can minimize the deviation between the projected anatomical alignment and the next intermediate anatomical alignment in multiple directions to reduce the deviation between the three-dimensional stereo-typical anatomical alignment and the three-dimensional model of the deformity in multiple directions simultaneously after the next cast is applied to the deformity.
  • the system can minimize the difference between the projected anatomical alignment and the next intermediate anatomical alignment in the direction with the maximum deviation between the three-dimensional stereo-typical anatomical alignment and the three-dimensional model of the deformity after the next cast is applied to the deformity.
  • FIG. 1A illustrates a system 100 to capture an image of the deformity according to the present disclosure.
  • the system 100 includes a camera 102 (shown in FIG. 1A as an array of cameras) configured to capture a three-dimensional image of the deformity and a plurality of light sources 106.
  • the system 100 can include a calibration target 110.
  • examples of the calibration target 110 include checkerboard patterns, socks with or without identification patterns and the like.
  • the calibration target 110 can be a checkerboard pattern that can be attached to a flat board that can move relative to the camera 102 to acquire calibration images of the calibration target 110 at various poses relative to the camera 102.
  • the system 100 can use a calibration target 110 for subjects where the deformity is kept relatively still.
  • the use of a calibration target with multiple cameras allows the system 100 to capture three-dimensional images and compensate for movement.
  • the camera 102 can be an array of cameras.
  • the system 100 can generate a three-dimensional image of the deformity by stitching all the images from the array of cameras.
  • the camera can be a digital camera, a video camera, an ultrasound imaging system, an MRI scanner or a CT scanner.
  • the system 100 can compensate for movement of the subject using image processing to obtain an accurate representation of the deformity in three-dimensions.
  • the camera 102 can be a depth-sensing camera (e.g., Microsoft Kinect™), which produces a point cloud of the deformity.
  • the system 100 can convert a three-dimensional image of the deformity into a point cloud.
  • the camera 102 can capture imagery data for generating a point cloud of the deformity based on discarded casts used during treatment of a patient.
  • the system 100 can acquire an image of the deformity from a mobile device such as a phone or tablet camera.
  • the system 100 can receive an image captured from a mobile device that captures the deformity from different angles.
  • the system 100 can then stitch the images together to create a three-dimensional image.
  • the system 100 can determine a deviation or average deviation on a point by point basis between a three-dimensional model of the deformity and the three-dimensional stereo-typical anatomical alignment of the human anatomy in question.
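One way to compute such a point-by-point deviation can be sketched with NumPy. This is a hedged illustration, assuming the deformity model and the stereo-typical reference are both available as point clouds without known point correspondences, so each deformity point is compared to its nearest reference point.

```python
import numpy as np

def average_deviation(deformity_cloud, reference_cloud):
    """Average nearest-neighbour distance from each point of the deformity
    cloud to the stereo-typical reference cloud (no correspondences needed)."""
    # Pairwise (N, M) distance matrix between the two clouds.
    diffs = deformity_cloud[:, None, :] - reference_cloud[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    return float(dists.min(axis=1).mean())

# Toy example: a reference alignment and a deformity shifted 2 mm in y.
reference = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
deformity = reference + np.array([0.0, 2.0, 0.0])
dev = average_deviation(deformity, reference)
```

For real scans with many thousands of points, a KD-tree (e.g., SciPy's `cKDTree`) would replace the dense distance matrix, but the metric is the same.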
  • the system 100 can include a database or corpus storing medical data of the individuals.
  • the medical data of the individuals can include three-dimensional models of stereo-typical anatomical alignment of the human anatomy, three-dimensional models of deformities that deviate from the stereo-typical anatomical alignment based on medical records of prior patients with deformities and the like.
  • the system 100 can use one or more point clouds representing a three-dimensional model of a cast or deformity to perform shape analysis instead of the more conventional mesh models.
  • shape analysis methods have operated on solid and surface models of objects, especially tessellated models (i.e., triangular mesh surface models).
  • Shape analysis is concerned with understanding the shape of models geometrically, topologically, and relationally.
  • the system 100 can use shape analysis to group the deformities in the database based on the type of the deformity, segment the deformities based on the shape into sub-shapes, and find complementary deformities based on the shape of the deformity. For example, the system 100 can query the database with the medical data using a point cloud of a deformity that has been acquired to return a matching point cloud of a similar deformity.
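A database query of this kind can be sketched as a nearest-shape search. The symmetric chamfer distance and the record layout below are illustrative assumptions, not details from the disclosure.

```python
import numpy as np

def chamfer_distance(cloud_a, cloud_b):
    # Symmetric chamfer distance between two point clouds.
    d = np.linalg.norm(cloud_a[:, None, :] - cloud_b[None, :, :], axis=2)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

def find_similar_deformity(query_cloud, database):
    """Return the stored record whose point cloud best matches the query."""
    return min(database,
               key=lambda record: chamfer_distance(query_cloud, record["cloud"]))

# Hypothetical two-record database of prior deformities.
database = [
    {"id": "patient_a", "cloud": np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])},
    {"id": "patient_b", "cloud": np.array([[5.0, 5.0, 5.0], [6.0, 5.0, 5.0]])},
]
query = np.array([[0.1, 0.0, 0.0], [1.1, 0.0, 0.0]])
match = find_similar_deformity(query, database)
```

In practice the shape distance would be computed after grouping or segmenting by deformity type, as described above, to keep the search space small.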
  • the system 100 can generate a next intermediate anatomical alignment in a series of three-dimensional alignments based on a machine learning model 116 trained on prior patient data from the database or corpus.
  • the system 100 can include a computing device 112.
  • the computing device 112 can include a machine learning trainer 114 to generate the machine learning model 116.
  • the system 100 can generate a machine learning model based on supervised learning, unsupervised learning or reinforcement learning.
  • the machine learning trainer 114 can be implemented as a machine learning computing device that also stores and allows use of the machine learning model 116.
  • the machine learning trainer 114 can analyze a set of training data that includes a classification of the data that the machine learning trainer 114 can use to calibrate its algorithm to identify what lies within a class or is outside a class. For example, a convolutional neural network or deep learning neural network trained on three-dimensional models of clubfoot can classify a new three-dimensional model acquired by the system 100 based on the trained machine learning model 116.
  • the system 100 can generate the machine learning model 116 that can be used to generate a next intermediate three-dimensional anatomical alignment in a series of intermediate three-dimensional anatomical alignments to correct the orthopedic deformities.
  • the system 100 can receive training data that includes medical data of patients who were treated to correct a deformity such as three-dimensional images of the uncorrected deformity, three-dimensional images of the intermediate anatomical alignments achieved during treatment of the deformity, and the three-dimensional images of the final corrected anatomical alignment achieved after treatment.
  • the training data can also include three-dimensional anatomical alignments of stereo-typical persons.
  • the training data can be generated from discarded casts of patients who were treated for a deformity.
  • the system 100 can use the prior discarded casts to approximate the deformity at each stage of the correction process where the three-dimensional images of the foot are not available.
  • the training data can include patient data obtained during the treatment of a patient using the Ponseti method.
  • machine learning models analyze data from a plurality of prior patients to identify mean shapes and shape variations.
  • the system 100 can then determine boundary conditions of the machine learning model 116 to classify a new three-dimensional surface model of a deformity acquired from a new patient to determine whether the deformity of the new patient falls within the boundary of a particular type of deformity or to identify a similar deformity in the prior medical records that closely match the shape, type or both.
  • the system 100 can then determine a next three-dimensional intermediate anatomical alignment in a series of anatomical alignments based on the prior identified deformity.
  • the machine learning model 116 can be trained to output the desired correction angles to correct the deformity to the next intermediate alignment, i.e., the desired angles of correction that will result in the next three-dimensional anatomical alignment for correcting the deformity.
  • the system 100 can train the machine learning model based on prior patient data for a plurality of patients, such as an original three-dimensional image of the deformity, images of intermediate stages of correction of the deformity and the final image of the corrected deformity.
  • the system can use three-dimensional scans of prior discarded casts of patients to determine the original deformity, stages of correction of the deformity and the final corrected deformity.
  • the system 100 can simulate trial force vectors for a next cast that can correct the deformity to the next intermediate anatomical alignment using finite element analysis at the nodes of the three-dimensional model of the deformity.
  • the system 100 can simulate the projected anatomical alignment for each trial force vector post-treatment with the next cast based on finite element analysis of the three-dimensional model of the deformity.
  • the system 100 can apply finite element analysis with the trial force vectors acting at the nodes of the three-dimensional model of the deformity to determine the projected anatomical alignment post-treatment with the next cast.
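A production finite element step would run in dedicated software (e.g., ANSYS, as noted below) with nonlinear soft-tissue models. Purely to illustrate the data flow described above, a single linear-elastic step can be sketched as solving K u = f at the nodes; the toy stiffness matrix here is an assumption for demonstration.

```python
import numpy as np

def project_alignment(node_positions, stiffness, nodal_forces):
    """One linear finite-element step: solve K u = f for the nodal
    displacements u, then add them to the current node positions."""
    u = np.linalg.solve(stiffness, nodal_forces.ravel())
    return node_positions + u.reshape(node_positions.shape)

# Toy two-node model with a diagonal (decoupled) stiffness matrix in N/mm.
nodes = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
stiffness = 10.0 * np.eye(6)
forces = np.array([[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]])  # 1 N in +y per node
projected = project_alignment(nodes, stiffness, forces)
```

Each trial force vector would produce one such projected alignment, which is then compared against the next intermediate anatomical alignment.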
  • the system 100 can generate training data for the machine learning model based on modelling and analysis software such as ANSYS. Modelling and simulation software can be used to deform a three-dimensional CAD model of a normal foot into a plurality of virtually generated CAD models (e.g., 500 models), for example clubfoot CAD models, with different degrees and angles of deformity potentially seen during the correction sequence.
  • the system 100 can use supervised learning and receive inputs from an orthopedic surgeon (e.g., a pediatric orthopedic surgeon) to review the models for accuracy.
  • the system 100 can export the CAD models as point cloud models for anatomical classification/labeling of the generated models for the machine learning model.
  • the system 100 can use the point cloud model of the foot to identify the severity of the clubfoot deformity by determining the amount of deviation of the foot with respect to normal pose in four different directions as shown in FIG. 1B.
  • FIG. 1B illustrates the deformities in clubfoot such as the equinus deformity 124, the varus deformity 126, the calcaneopedal derotation 128 and the horizontal plane deformity relative to hindfoot 130.
  • the system 100 can capture the variation in these deformities using the camera 102.
  • the machine learning model can be evolved to improve the accuracy of the model over time.
  • the system 100 can use a deep learning method such as PointNet to process the point cloud models.
  • PointNet is an open source platform for classification of point cloud models. Because a point cloud model can be arbitrarily oriented, a bounding box fitted to the model is used to normalize the point cloud so that it is always aligned in a consistent direction before being fed into the deep learning network as an input dataset.
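The bounding-box normalization described for PointNet-style input can be sketched as follows. This is an illustrative pre-processing step under the stated assumption, not PointNet's exact code.

```python
import numpy as np

def normalize_point_cloud(points):
    """Centre a point cloud on its bounding-box midpoint and scale the
    bounding box to fit a unit cube, giving the network a consistent pose."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    centered = points - (mins + maxs) / 2.0
    extent = (maxs - mins).max()  # largest bounding-box edge
    return centered / extent if extent > 0 else centered

# Toy cloud with a 4 x 2 x 0 bounding box.
cloud = np.array([[0.0, 0.0, 0.0], [4.0, 2.0, 0.0]])
normalized = normalize_point_cloud(cloud)
```

After normalization every coordinate lies in [-0.5, 0.5], so differently scaled scans of feet become directly comparable network inputs.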
  • the system 100 can use PointNet to classify different stages of an orthopedic skeletal deformity, for example, clubfoot deformity.
  • the system 100 can use the point cloud CAD models generated using simulation software to train a deep learning network to objectively classify and label each patient’s unique foot deformity compared to a normal foot.
  • the system 100 can then train the network to predict the cast series for each subject patient in this study.
  • the system 100 can use supervised learning based on inputs obtained by presenting an orthopedic skeletal deformity model, for example, a clubfoot model, to one or more orthopedic surgeons.
  • the surgeons can be presented with a selection of candidate foot correction models (e.g., out of five hundred foot models) to identify the one that is next in the correction series based on the Ponseti method.
  • the system 100 can then receive inputs from the doctors on a consensus basis and select the next correction phase out of the selection of candidate foot models (e.g., 10 models). Over the course of multiple rounds of selection (e.g., 500 rounds), the system 100 trains the deep learning network to search the training dataset and output the subsequent cast for deformity correction.
  • the system 100 can generate an STL file for three-dimensional printing using a three-dimensional printer 122.
  • the system 100 can select a next force vector for the next cast from the trial force vectors by identifying the force vector that minimizes the difference between the projected anatomical alignment and the next intermediate anatomical alignment.
  • using the next force vector in the next cast results in an average difference in the range of 2 mm to 20 mm between a point in a point cloud of the next cast and a corresponding point in a point cloud of a prior cast for a similar correction in prior patient data.
  • the system 100 can determine the shape and dimensions of the next cast to within a range of 2 mm to 20 mm of a cast produced by a trained and skilled doctor with years of experience performing the casting using subjective parameters.
  • the next force vector for the next cast results in an average difference in the range of 0.5 degrees to 4 degrees between the angles in a point cloud of the next cast and the corresponding angles in a point cloud of a prior cast for a similar correction in prior patient data.
  • the system 100 can determine the dimensions and shape of the next cast in a series of casts for correcting the deformity to a high degree of accuracy without the subjective judgement of a highly skilled doctor.
  • the system 100 can generate a point cloud 138 of the three-dimensional scan of the cast 136 of a prior patient.
  • FIG. 1C illustrates a point cloud of the simulated cast 134 generated by the system 100 when compared to the point cloud of a real cast 132 placed by a highly skilled orthopedic surgeon to correct the deformity.
  • the average difference between the dimensions of the point cloud of the simulated cast 134 and the point cloud of the real cast 132 can be in the range of 2 mm to 20 mm on a point-to-point basis between corresponding points for a similar correction in prior patient data.
  • the average difference can be in the range of 0.5 degrees to 4 degrees between the angles in a point cloud of the simulated next cast 134 and the corresponding angles in a point cloud of a prior cast 132 for a similar correction in a prior patient.
  • the average difference between the angles of the real 132 and simulated casts 134 was 0.93 degrees for equinus and 0.74 degrees for de-rotation.
  • the average difference in three-dimensional point clouds between the corresponding points of the real cast 132 and simulated casts 134 was 2.305 mm.
  • the table represents the difference between the angles on the simulated next cast 134 and the corresponding angles in a point cloud of a prior cast 132 for similar correction in a prior patient.
  • the table illustrates the difference between the angles in a point cloud of the simulated next cast 134 and the corresponding angles in a point cloud of a prior cast 132 for equinus correction 136 and de-rotation correction 138, as well as the average difference 140 between point clouds.
  • the system 100 can generate a cast or a series of casts as shown in FIG. 2 to correct the orthopedic deformity.
  • the system 100 can generate a cast or a series of casts that can be printed with a three-dimensional printer.
  • the system 100 determines a force vector or a set of force vectors to correct the orthopedic deformity based on three-dimensional imagery of the deformity, and the machine learning model 116 can generate the next intermediate anatomical alignment for correcting the deviation.
  • the next intermediate anatomical alignment can be the desired corrected angles for correcting the deformity in the next cast.
  • the system 100 can determine the shape and geometry of a cast or a series of casts 202-210 that exert the determined force vector or set of force vectors tailored to the patient. In an exemplary embodiment, the system 100 can determine the force vectors for the series of casts 202-210 to correct the deformed foot 214, with three-dimensional deformities shown along the x, y and z axes, to arrive at the corrected foot 212. In an exemplary embodiment, the system 100 selects the force vector or set of force vectors such that the areas 216, which in a child's normal foot are structurally designed to distribute the load when walking, lie in the same plane and perform the load-bearing function once corrected.
  • as shown in FIGS. 3A, 3B, 3C, and 3D, the system 100 can determine the reference points for generating the corrective plane.
  • FIGS. 3A and 3B illustrate the reference points for generating the corrective planes.
  • FIG. 3C illustrates the before-correction cavus image 310 and the after-correction cavus image 312 generated by the system 100 using finite element analysis.
  • FIG. 3C also illustrates the before-correction adductus image 314 and the after-correction adductus image 316 generated by the system 100 using finite element analysis.
  • FIG. 3D and FIG. 3E illustrate the before-correction varus image 318 and the after-correction varus image 320 (from two different points of view) generated by the system 100 using finite element analysis.
  • the system 100 can determine the reference points for an adductus deformity based on the big toe 305, little toe 303, and ankle 301 as shown in a three-dimensional image of the adductus deformity 302 in FIG. 3A. In an exemplary embodiment, the system 100 can use these reference points to generate the reference plane 323 as shown in FIG. 3B. The system 100 can use the reference plane 323 to determine the force vectors to correct the adductus deformity 314, 316 as shown in FIG. 3C.
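Deriving a reference plane from three such landmarks reduces to a cross product. The sketch below is illustrative; the landmark coordinates are hypothetical values, not measurements from the disclosure.

```python
import numpy as np

def reference_plane(p1, p2, p3):
    """Plane through three landmark points, returned as (unit_normal, d)
    where every point x on the plane satisfies unit_normal @ x == d."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)  # perpendicular to both edges
    normal /= np.linalg.norm(normal)
    return normal, float(normal @ p1)

# Hypothetical landmark coordinates (mm) for ankle, little toe and big toe.
ankle, little_toe, big_toe = (0.0, 0.0, 5.0), (8.0, -3.0, 0.0), (10.0, 3.0, 0.0)
normal, d = reference_plane(ankle, little_toe, big_toe)
```

The signed distance of any other anatomical point q from this plane is simply `normal @ q - d`, which is one way such a plane could serve as a reference for correction angles.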
  • the system 100 can determine the reference points for correcting an equinus deformity based on the ankle 311, heel 313, mid plantar of foot 315, and thigh 307 as shown in a three-dimensional image of the equinus deformity 304 in FIG. 3A.
  • the system 100 can use these reference points to generate the reference plane 325 to determine the force vectors to correct the equinus deformity as shown in FIG. 3B.
  • the system 100 can then determine the force vector or set of force vectors to correct an equinus deformity based on the reference plane 327.
  • the system 100 can determine the reference points for correcting a cavus deformity based on the big toe 319, middle of toes 317, pinky toe 321 and heel 315 as shown in a three-dimensional image of the cavus deformity 306 in FIG. 3A.
  • the system 100 can determine the reference plane 323 based on these reference points as shown in FIG. 3B.
  • the system 100 can then determine the force vector or set of force vectors to correct the cavus deformity 310, 312 as shown in FIG. 3C.
  • FIG. 3B illustrates identification of four different planes using these reference points shown in FIG. 3 A.
  • the system 100 can determine these reference points using an image processing algorithm or a machine learning algorithm that recognizes features of the limb.
  • the machine learning model can be trained on images of deformed feet to identify these features.
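The reference-plane construction described above can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes three reference points (e.g., ankle, little toe, big toe) are already available as three-dimensional coordinates, and derives the plane through them via a cross product.

```python
import math

def plane_from_points(p1, p2, p3):
    """Unit normal n and offset d of the plane through three reference
    points, i.e. the plane satisfying n . x = d."""
    v1 = [b - a for a, b in zip(p1, p2)]
    v2 = [b - a for a, b in zip(p1, p3)]
    # Cross product of the two in-plane edge vectors gives the normal.
    n = [v1[1] * v2[2] - v1[2] * v2[1],
         v1[2] * v2[0] - v1[0] * v2[2],
         v1[0] * v2[1] - v1[1] * v2[0]]
    mag = math.sqrt(sum(c * c for c in n))
    n = [c / mag for c in n]
    d = sum(nc * pc for nc, pc in zip(n, p1))
    return n, d

# Hypothetical coordinates for ankle, little toe, and big toe (arbitrary units).
normal, d = plane_from_points((0, 0, 4), (8, -2, 0), (9, 3, 0))
```

The resulting unit normal can then be compared against the normal of a stereo-typical alignment plane to quantify the deviation of the deformity.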
  • the system 100 can be configured to determine the boundary conditions of the deformity based on a machine learning model.
  • the boundary conditions of the deformity can be the desired angles for corrected foot 402.
  • FIG. 4A illustrates the process of generating a final model of force vectors for correcting a deformity based on the boundary conditions and the three-dimensional model of the deformity.
  • FIG. 4B illustrates the process of generating a solid model based on three-dimensional scans.
  • FIG. 4C illustrates the force vectors at the reference points determined by the system 100 in accordance with an exemplary embodiment described herein.
  • the system 100 can obtain a three-dimensional image or data 404 of the deformity.
  • the system 100 can generate a three-dimensional model of the deformity either as a solid object or as a point cloud.
  • FIG. 4B illustrates a method of generating a three-dimensional solid object in a modelling software based on the images of the foot.
  • the system 100 can convert the three-dimensional scan images of the deformity into an STL file 414.
  • the system 100 can then post-process the STL file 414 to fill in any missing information using a post-processing tool (e.g., SpaceClaim).
  • the system 100 can then convert the post-processed file into a solid three-dimensional object in the modelling software for further analysis.
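As a rough illustration of the STL-based pipeline above, the snippet below extracts the vertices of an ASCII STL body into a point cloud. The sample facet data and the `stl_vertices` helper are illustrative stand-ins for the scanner output and the modelling software's import step, not the actual tooling.

```python
def stl_vertices(stl_text):
    """Collect the unique vertices of an ASCII STL body as a point cloud."""
    pts = []
    for line in stl_text.splitlines():
        tok = line.split()
        if tok and tok[0] == "vertex":
            p = tuple(float(t) for t in tok[1:4])
            if p not in pts:        # deduplicate shared facet corners
                pts.append(p)
    return pts

# A minimal one-facet ASCII STL body standing in for a scan export.
stl_sample = """solid scan
facet normal 0 0 1
  outer loop
    vertex 0.0 0.0 0.0
    vertex 1.0 0.0 0.0
    vertex 0.0 1.0 0.0
  endloop
endfacet
endsolid scan"""

cloud = stl_vertices(stl_sample)
```

A real pipeline would hand this point cloud to a surface-reconstruction step to obtain the solid model described above.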
  • the system 100 can determine multiple separate planes for the deformity as shown in 406 to serve as a reference between the position of the deformity and the expected or normal mean position of the limb or other appendage.
  • the system 100 can select the number of planes based on the geometry of the deformity, the degrees of freedom of the deformity, the deviation of the deformity from a statistical normal limb or appendage and the like.
  • the system 100 can determine the reference points as described above with reference to FIG. 3A-3E.
  • the system 100 can select four different planes based on a machine learning model for club foot.
  • the four planes can be selected with inputs from a doctor.
  • the system 100 can use the machine learning trainer 114 to determine a four plane machine learning model that identifies the appropriate planes to use for correction.
  • the machine learning trainer 114 can use data from a plurality of prior patients that includes the planes that were selected for correction for those patients, the geometry of the deformity, and the outcome of the corrective effort.
  • the system 100 can then fit the three-dimensional data 404 of the deformity based on the trained four plane machine learning model. Once the four planes are identified the system 100 can use finite element analysis 408 to determine the force vectors for correcting the deformity in each plane.
  • the system 100 can determine the force vectors at the reference points as illustrated in FIG. 4A and FIG. 4C using finite element analysis.
  • the system 100 can determine the force vectors 422 as shown in FIG. 4C at the reference points for correcting the deformity.
  • the system 100 can apply a corrective force and determine the predicted correction such as the predicted angles for the corrected foot based on the applied force. The system 100 can then compare it with the boundary conditions such as desired angles for the corrected foot. The system 100 can iterate or simulate 410 for various force corrections then update the force vectors to reduce the error or minimize the error.
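The simulate-compare-update loop described above can be sketched as below. The linear `simulate` function is a stand-in for the finite element solver, and the proportional `gain` update is one simple way to drive the error between predicted and desired correction angles toward zero; the actual system may use a different update rule.

```python
def simulate(force):
    """Linear surrogate standing in for the finite element solver:
    maps an applied force to a predicted correction angle (degrees)."""
    return 0.8 * force

def iterate_force(simulate_fn, target_angle, force=0.0, gain=0.5,
                  tol=0.1, max_iter=100):
    """Proportionally adjust the force until the simulated correction
    angle is within tol of the desired boundary-condition angle."""
    for _ in range(max_iter):
        error = target_angle - simulate_fn(force)
        if abs(error) < tol:
            break
        force += gain * error   # update the trial force to reduce the error
    return force

force = iterate_force(simulate, target_angle=20.0)
```

With this surrogate the loop converges to a force of roughly 25 units, at which the predicted angle matches the 20-degree boundary condition within tolerance.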
  • the system 100 can use the boundary conditions 402 and the angle between the four planes and the boundary conditions 402 to determine the force vectors required during finite element analysis for correcting the deformity in each plane and generating for the next cast in the series of casts.
  • the system 100 can determine the angle between the four planes using the modelling tools (e.g., Ansys, Matlab or both).
  • FIG. 4A illustrates the use of two modelling tools (e.g., Ansys and Matlab) to perform the various methods.
  • the system 100 can use one or more modelling tools to perform the various methods.
  • the system 100 can use the boundary conditions to iterate through a series of force vectors to minimize the error between the boundary condition and the results of applying a particular force vector in a particular plane.
  • the system 100 can run a series of simulations using a trial correction and then determine the probable corrected deformity.
  • the system 100 can, as shown in FIG. 4A, iterate over a number of simulations until a force vector or a set of force vectors for the next cast in the series, such as the final model 412, is obtained.
  • the system can determine the force vector or set of force vectors with the minimum deviation from the boundary condition using the iterative process. In an exemplary embodiment, the error is minimized when the angle between the boundary conditions 402 and the probable corrected deformity is minimal. In an exemplary embodiment, the system 100 can track in real-time 412, for each simulation, the angle between the boundary condition and the corrected plane predicted or estimated to result if a force vector or set of force vectors is applied.
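Selection of the force vector with minimum deviation can be illustrated as a search over trial candidates. The `simulate_fn` callable below is again a stand-in for the finite element analysis, mapping a trial force vector to predicted alignment angles; the numeric values are hypothetical.

```python
def pick_best(trial_forces, simulate_fn, target):
    """Return the trial force vector whose simulated post-cast alignment
    deviates least (sum of squared angle differences) from the target
    intermediate alignment."""
    def deviation(force):
        predicted = simulate_fn(force)
        return sum((p - t) ** 2 for p, t in zip(predicted, target))
    return min(trial_forces, key=deviation)

# Stand-in solver: each force component maps linearly to an alignment angle.
simulate_fn = lambda force: [0.5 * component for component in force]
target = [10.0, 5.0, 0.0]                 # desired intermediate alignment angles
trials = [(18, 8, 0), (20, 10, 0), (30, 30, 5)]
best = pick_best(trials, simulate_fn, target)
```

Here the second trial is selected because its predicted alignment matches the target exactly; in practice the candidate set would come from the simulation loop described above.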
  • the system 100 can generate data for split casts based on the final model 412.
  • the split cast can include a portion that is not changed during at least a part of the series of casts and a portion that is updated during the next cast in the series of casts.
  • the system 100 can use the machine learning trainer 114 to determine the finite element analysis machine learning model.
  • the finite element analysis machine learning model can be based on force vectors that correspond to points in the point cloud determined from prior simulations for a plurality of prior patients.
  • the data for the plurality of prior patients can include the force vectors generated using simulations for a next cast in a series of casts, the boundary conditions used to arrive at the force vectors, and the correction achieved (as evident from the subsequent three-dimensional image of the deformity after it was corrected with the cast); this data can be used to train the finite element analysis machine learning model.
  • the finite element analysis machine learning model can generate a force vector or a set of force vectors for a cast or a series of casts based on the boundary conditions given manually or obtained from the finite element analysis machine learning model.
  • the system 100 can run supervised learning, unsupervised learning, reinforcement learning algorithms or any combination thereof.
  • machine learning algorithms that can be implemented via the computing device 112 can include, but are not limited to, Linear Regression, Logistic Regression, Decision Tree, Support Vector Machine, Naive Bayes, k-Nearest Neighbors, k-Means, Random Forest, Dimensionality Reduction algorithms, and Gradient Boosting algorithms such as GBM, XGBoost, LightGBM, and CatBoost.
  • Examples of supervised learning algorithms that can be used in the computing device 112 can include regression, decision tree, random forest, k-Nearest Neighbors, Support Vector Machine, and Logistic Regression.
  • Examples of unsupervised learning algorithms that may be used in the computing device 112 include the Apriori algorithm and k-means.
  • Examples of reinforcement learning algorithms that may be used in the computing device 112 include a Markov decision process.
  • the system 100 can apply finite element analysis manually.
  • the system 100 can generate a three-dimensional scanned stereolithography (STL) file based on the three-dimensional image from the camera 102.
  • the STL file can be post processed to clean up any irregularities.
  • the system 100 can remove any imperfections in the STL file such as from motion during capture of the three-dimensional image using image processing algorithms.
  • the system 100 can convert the three-dimensional image to a three-dimensional solid model.
  • the system 100 can use the multiple discrete points present in the STL file to generate a solid shape of the deformity, connecting the points using extrapolation to generate a surface instead of multiple discrete points.
  • the system 100 can create reference points for generating the planes for finite element analysis of the force vector or set of force vectors to be applied to the deformity to correct the deformity.
  • the system 100 can receive a selection of reference points for generating a correction plane from the doctor.
  • a machine learning algorithm can select the reference points based on a trained machine learning model as described herein above.
  • the system 100 can simulate the application of the force vector to the deformity and the effect of the force vector on the points of support for the deformity.
  • the system 100 can calculate the deformation and the stresses when the force vector is applied.
  • the system 100 can determine the deformation of the deformity and the stresses on the deformity when the force vector or set of force vectors is applied via a cast.
  • the system can calculate the deformation on the reference points for plane creation.
  • the system 100 can determine the deformation on the chosen reference points in the deformity to determine the effect of the force vectors on the deformity.
  • the system 100 can calculate the angle between the selected reference plane and the desired boundary condition 502.
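The angle between a selected reference plane and a boundary-condition plane reduces to the angle between their normals. A small helper, assuming only that the two normal vectors are available (they need not be unit length):

```python
import math

def plane_angle_deg(n1, n2):
    """Angle in degrees between two planes, computed from their normals."""
    dot = sum(a * b for a, b in zip(n1, n2))
    mags = (math.sqrt(sum(a * a for a in n1))
            * math.sqrt(sum(b * b for b in n2)))
    # Clamp guards against floating-point drift outside acos's domain.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / mags))))

angle = plane_angle_deg((1, 0, 0), (0, 1, 0))
```

Driving this angle toward zero across simulations is the minimization criterion described above.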
  • the system can convert the generated final model data into a next cast in the series of casts.
  • the system 100 can convert the final model into an STL file for three-dimensional printing.
  • three-dimensional images of the deformity can be acquired.
  • the three-dimensional images of the deformity can be acquired by a mobile device of a parent.
  • the three-dimensional imaging can include depth mapping of the foot of the patient.
  • the three-dimensional imaging can include ultrasound for the determination of internal biological structures of the foot.
  • the combination of the two above-described three-dimensional imaging modalities allows for improved cast planning by considering the internal structures in addition to the outward appearance.
  • casting stages of patient anatomy movement can be predicted.
  • the casting stages can be predicted via the force vector modeling described herein above. In one instance, this prediction can include computer predictive modeling and finite element analysis of the foot wherein stresses, deformations of the structures of the foot or both are considered from one stage to the next.
  • Sub process 665 will be further described with reference to FIG. 6B.
  • virtual models of three-dimensional casts can be generated for anticipated patient anatomy movements at each predicted stage. Such virtual models of three-dimensional casts can allow for visualization and modification according to real-world constraints.
  • three-dimensional casts similar to that of FIG. 5, can be generated for each virtual model at each predicted stage of patient anatomy movement.
  • sub process 665 of process 655 includes determining the number of stages and trajectory of each predicted stage of movement.
  • a baseline patient anatomy can be established according to the acquired images of the patient anatomy.
  • a target patient anatomy can be selected, the target being an end goal shape of the structure of the foot.
  • the patient anatomy movement at each stage can be determined.
  • This determination can include movements of structures of the foot.
  • such movements can be determined in the context of the Ponseti stages and include, for instance, performing specific angular rotations at specific stages.
  • such movements can be optimized at each stage such that maximum movement is achieved without creating undue mechanical and/or biological stresses.
  • each stage may be determined such that the von Mises stress, for instance, remains below a threshold value.
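The von Mises criterion mentioned above has a standard closed form; the sketch below is a direct implementation over the six Cauchy stress components. The threshold value itself would be chosen clinically and is not specified here.

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from the six Cauchy stress components
    (three normal stresses and three shear stresses)."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))
```

For uniaxial tension the expression reduces to the applied stress, and for pure shear it equals the shear stress times the square root of three, which are the usual sanity checks.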
  • the current position of the patient anatomy can be compared with the target patient anatomy position from step 668. If the two values are equal, for instance, only a single stage of casting may be required and the determined patient anatomy movement can be used to generate a virtual model of a necessary three-dimensional cast at step 670. If, however, the current position and the target patient anatomy do not match, a successive stage of patient anatomy movement is required and the sub process 665 returns to step 667.
  • the number of stages, or casts, required to be fitted to a patient is dependent upon the severity of the deformity and the ability to move the patient anatomy at each stage. In the case of clubfoot, this can mean the difference between manufacturing four casts in one instance and six casts in another, thereby allowing each patient to receive only the minimum necessary number of casts.
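The stage-count logic can be sketched with a one-dimensional surrogate: the total angular correction is divided into per-cast moves capped by `max_step`, which stands in for the stress threshold that limits movement at each stage. The numeric values are illustrative, not clinical.

```python
def plan_stages(current, target, max_step, tol=1e-9):
    """Divide the total angular correction into stages, each limited to
    max_step degrees of movement per cast."""
    stages = []
    while abs(target - current) > tol:
        # Move as far toward the target as the per-stage cap allows.
        move = max(-max_step, min(max_step, target - current))
        current += move
        stages.append(current)
    return stages

# Illustrative values: 55 degrees of total correction, 15 degrees per cast.
stages = plan_stages(current=-45.0, target=10.0, max_step=15.0)
```

With these numbers the planner yields four stages, consistent with the idea that a milder deformity or a larger safe per-stage movement directly reduces the number of casts manufactured.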
  • the above-described method of FIG. 6A and FIG. 6B can be performed with only external features gathered via, for instance, depth mapping data. External features can be processed similarly to Schoenecker, et al., Systems and methods for serial treatment of a muscular-skeletal deformity, U.S. Patent Application Publication No. US2017/0091411 Al, incorporated herein by reference.
  • the external features can be applied to a machine learning algorithm in order to generate patient anatomy predictions without need for ultrasound imaging.
  • a library of corresponding images of a foot may be stored.
  • the corresponding images can include images of the external features of the foot and corresponding images of the internal features of the foot.
  • the machine learning algorithm, a convolutional neural network in an exemplary embodiment, can be trained to correlate external features with internal features. Therefore, when provided with an external feature of an unknown foot, the machine learning algorithm can generate a corresponding internal feature structure that can be used in determining patient anatomy movements during stage planning.
  • the library of corresponding images can be a corpus of patient data acquired from patients of a similar diagnosis and healthy patients.
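As a stand-in for the trained convolutional neural network, a nearest-neighbor lookup over the library of paired external/internal features conveys the same external-to-internal mapping idea. The `library` entries and feature vectors below are hypothetical placeholders, not the patent's data format.

```python
def predict_internal(external, library):
    """Return the internal-feature record whose paired external-feature
    vector is closest (squared Euclidean distance) to the query."""
    def dist(pair):
        ext, _internal = pair
        return sum((a - b) ** 2 for a, b in zip(ext, external))
    _ext, internal = min(library, key=dist)
    return internal

# Hypothetical library of paired external/internal feature vectors.
library = [
    ((0.1, 0.2), "internal_structure_A"),
    ((0.8, 0.9), "internal_structure_B"),
]
result = predict_internal((0.75, 0.85), library)
```

A trained network would generalize beyond exact library matches, but the lookup captures the core idea of inferring internal structure from external appearance alone.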
  • FIG. 7 is a block diagram of an exemplary embodiment of computing device 112 in accordance with embodiments of the present disclosure.
  • the computing device 112 can include one or more non-transitory computer-readable media for storing one or more computer-executable instructions or software for implementing exemplary embodiments.
  • the non-transitory computer-readable media can include, but are not limited to, one or more types of hardware memory, non-transitory tangible media (for example, one or more magnetic storage disks, one or more optical disks, one or more flash drives), and the like.
  • memory 119 included in the computing device 112 can store computer-readable and computer-executable instructions or software for performing the operations disclosed herein.
  • the memory 119 can store a software application 640 which is configured to perform several of the disclosed operations (e.g., the pre-training platform for determining the co-occurrence matrix, the training platform for determining the word vectors and the topic determination platform for determining the plurality of topics and the representative noun).
  • the computing device 610 can also include a configurable processor, a programmable processor 120, or both, and associated core(s) 614, and optionally, one or more additional configurable or programmable processing devices, e.g., processor(s) 612' and associated core(s) 614' (for example, in the case of computational devices having multiple processors/cores), for executing computer-readable and computer-executable instructions or software application 640 stored in the memory 119 and other programs for controlling system hardware.
  • Processor 120 and processor(s) 612’ can each be a single-core processor or multiple core (614 and 614’) processor.
  • Virtualization can be employed in the computing device 610 so that infrastructure and resources in the computing device can be shared dynamically.
  • a virtual machine 624 can be provided to handle a process running on multiple processors so that the process appears to be using only one computing resource rather than multiple computing resources. Multiple virtual machines can also be used with one processor.
  • Memory 119 can include a computational device memory or random access memory, such as DRAM, SRAM, EDO RAM, and the like. Memory 119 can include other types of memory as well, or combinations thereof.
  • a user can interact with the computing device 710 (shown in FIG. 1 as 112) through a visual display device 701, such as a computer monitor, which can display one or more user interfaces 742 that can be provided in accordance with exemplary embodiments.
  • the computing device 710 can include other I/O devices for receiving input from a user, for example, a keyboard or any suitable multi-point touch interface 718, and a pointing device 720 (e.g., a mouse).
  • the keyboard and the pointing device 720 can be coupled to the visual display device 701.
  • the computing device 710 can include other suitable conventional I/O peripherals.
  • the computing device 710 can also include one or more storage devices, such as a hard drive, CD-ROM, or other computer-readable media, for storing data and computer-readable instructions, software that performs the operations disclosed herein, or both.
  • Exemplary storage device 734 can also store one or more databases for storing any suitable information required to implement exemplary embodiments. The databases can be updated manually or automatically at any suitable time to add, delete, update one or more items in the databases.
  • the computing device 710 can include a communication device 744 configured to interface via one or more network devices 732 with one or more networks, for example,
  • the communication device 744 can include a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem, radio frequency transceiver, or any other device suitable for interfacing the computing device 710 to any type of network capable of communication and performing the operations described herein.
  • the computing device 710 can be any computational device, such as a workstation, desktop computer, server, laptop, handheld computer, tablet computer, or other form of computing or telecommunications device that is capable of communication and that has sufficient processor power and memory capacity to perform the operations described herein.
  • the computing device 710 can run any operating system 726, such as any of the versions of the Microsoft® Windows® operating systems, the different releases of the Unix and Linux operating systems, any version of the MacOS® for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, or any other operating system capable of running on the computing device and performing the operations described herein.
  • the operating system 726 can be run in native mode or emulated mode.
  • the operating system 726 can be run on one or more cloud machine instances.
  • Exemplary flowcharts are provided herein for illustrative purposes and are non-limiting examples of methods.
  • One of ordinary skill in the art will recognize that exemplary methods may include more or fewer steps than those illustrated in the exemplary flowcharts and that the steps in the exemplary flowcharts may be performed in a different order than the order shown in the illustrative flowcharts.

Abstract

A system and method are provided herein for modelling force vectors for a cast to correct orthopedic deformities. The system receives an image of the deformity, determines a deviation between a three-dimensional stereo-typical anatomical alignment and a three-dimensional model of the deformity, generates a next intermediate anatomical alignment in a series of intermediate anatomical alignments for correcting the deviation based on a machine learning model trained on a plurality of prior patient data, simulates a plurality of trial force vectors for a next cast to correct the deformity to the next intermediate anatomical alignment, wherein the simulation produces a projected anatomical alignment post-treatment with the next cast on the deformity using finite element analysis for each of the trial force vectors, and identifies a next force vector from the plurality of trial force vectors that minimizes the difference between the projected anatomical alignment and the next intermediate anatomical alignment.

Description

PREDICTIVE MODELING PLATFORM FOR SERIAL CASTING TO CORRECT
ORTHOPEDIC DEFORMITIES
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority to U.S. Patent Application No.
17/073,183, filed on October 16, 2020, which is a continuation-in-part of the U.S. Patent Application No. 16/864,099, filed on April 30, 2020, which in turn claims the benefit of U.S. Provisional Patent Application No. 62/841,012, filed on April 30, 2019, and also claims the benefit of U.S. Provisional Patent Application No. 62/928,808, filed on October 31, 2019, all four of the aforesaid applications are incorporated herein by reference in their entirety.
BACKGROUND
[0002] Musculoskeletal disorders such as neuromuscular (NM), musculoskeletal (MSK) disorders can be congenital or acquired. Some musculoskeletal disorders can be treated in pediatric and adult populations through serial casting. Conventionally, treating musculoskeletal disorders using serial casting can be difficult because of difficulties with access to care and the relative subjectivity of the treatment. For example, common pediatric orthopedic deformities such as Talipes equinovarus or congenital talipes equinovarus (commonly called clubfoot) have been difficult to treat. In the case of clubfoot, the current standard of care afforded to correct this skeletal deformity is the Ponseti serial casting methodology, in which the deformity is corrected using a weekly series of casts. Limitations of this method include the need for highly trained surgeons proficient in this method and frequent weekly visits to the orthopedic surgeon for placement of the casts. Even when skilled doctors trained in the method are available, there is a lot of variability and subjectivity in determining the next step of the serial cast. Variability in casting technique and inability to predict treatment length lead to difficulties in standardization of the treatment course of serial clubfoot correction. Similar difficulties exist with respect to treating patients with other congenital NM, congenital MSK, acquired NM, and acquired MSK disorders using serial casts.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] The accompanying drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures is represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. In the drawings:

[0004] FIG. 1A is a block diagram illustrating a system for modelling force vectors for serial casts to correct orthopedic deformities in accordance with various embodiments taught herein;
[0005] FIG. IB is an illustration of the corrective vector forces for correcting an orthopedic deformity in accordance with various embodiments taught herein;
[0006] FIG. 1C is an illustration of a point cloud of a simulated cast superimposed on a point cloud of a real cast in accordance with various embodiments taught herein;
[0007] FIG. ID is a data table comparing simulation data for a cast geometry as taught herein with actual data for a formed cast in accordance with various embodiments taught herein;
[0008] FIG. 2 is an illustration of a series of casts generated to correct the orthopedic deformity in accordance with various embodiments taught herein;
[0009] FIG. 3 A and 3B are illustrations of selecting reference points and determining the planes for correction in accordance with various embodiments taught herein;
[0010] FIG. 3C, 3D, and 3E are illustrations of corrections of deformities using finite element analysis in accordance with various embodiments taught herein;
[0011] FIG. 4A is an illustration of the process of determining a next cast in a series of casts based on the three-dimensional image of the deformity and the boundary conditions in accordance with various embodiments taught herein;
[0012] FIG. 4B is an illustration of the post-processing in accordance with the various embodiments taught herein;
[0013] FIG. 4C is an illustration of force vectors at selected reference points in accordance with the various embodiments taught herein;
[0014] FIG. 5 is an illustration a finite element analysis modelling in accordance with various embodiments taught herein;
[0015] FIG. 6A illustrates a flowchart for image acquisition of the deformity and generation of a predicted virtual casting model in accordance with various embodiments taught herein;
[0016] FIG. 6B illustrates a flow chart for determining the number of stages and trajectory of the predicted movement in accordance with various embodiments taught herein; and

[0017] FIG. 7 illustrates an exemplary computing system for determining the force vectors for correcting a deformity in accordance with various embodiments taught herein.
DETAILED DESCRIPTION
Serial casting corrects a three-dimensional musculoskeletal deformity, neuromuscular deformity, or both through periodic manipulation of the deformity. For example, in the case of clubfoot, the congenital musculoskeletal deformity is corrected through weekly manipulation of the deformity of the foot in a step-wise process using serial casts. Oftentimes the correction is in multiple three-directional planes simultaneously. Conventionally, this manipulation is a very manual, labor-intensive process that embodies an imprecise prediction of subsequent steps and outcomes. Although computer modelling for serial casts to correct musculoskeletal deformity, neuromuscular deformity, or both exists, conventional computer modeling requires a linear approach of approximating a series of points and lines to determine a specific direction in which a cast in the series of casts applies a force to the deformity. However, the linear approach does not account for the three-dimensional deformities of the clubfoot within the cavus, adductus, varus, equinus, and derotational elements.
[0018] Embodiments of the present disclosure include systems and methods for modelling a set of force vectors for a cast or a next cast in a series of casts to correct the musculoskeletal deformity, the neuromuscular deformity or both that overcome the difficulties and problems described herein with respect to conventional serial casting techniques. In exemplary embodiments, the system includes a processor that executes machine readable instructions to receive an image of the deformity. In turn the processor generates a three-dimensional model of the deformity based on the image of the deformity. Further, the processor determines a deviation between a three-dimensional stereo-typical anatomical alignment and the three- dimensional model of the deformity (i.e., boundary conditions of the deformity) and generates a next intermediate anatomical alignment in a series of intermediate anatomical alignments for correcting the deviation based on a machine learning model trained on a plurality of prior patient records. The processor also simulates a plurality of trial force vectors for a next cast to correct the deformity to the next intermediate anatomical alignment. The simulation generates a projected anatomical alignment post-treatment with the next cast on the deformity using finite element analysis for each of the trial force vectors, and identifies a next force vector from the plurality of trial force vectors that minimizes the difference between the projected anatomical alignment and the next intermediate anatomical alignment, such that using the next force vector for the next cast results in an average differential in the range of 2 mm to 20 mm between a point cloud of the next cast and a point cloud of a prior cast for a similar correction based on prior patient data.
[0019] In exemplary embodiments, the camera is an array of cameras, an x-ray imager, an ultrasound scanner, a three-dimensional scanner, a magnetic resonance imaging device, a CT scanner, or a combination thereof. For example, the camera can be a depth detection camera or include a light detection and ranging (LIDAR) sensor that produces a point cloud of the deformity.
[0020] In exemplary embodiments, the system can determine boundary conditions for the deformity based on the three-dimensional image of the deformity and medical data from a plurality of persons. The medical data from the plurality of persons can include information about stereo-typical anatomical alignment, anatomical alignment of prior patient deformities prior to correction, intermediate anatomical alignment of patient deformities during correction and final alignment of patient deformities post correction. In exemplary embodiments, the medical data for prior patient deformities can be derived from discarded casts of prior patients used during their treatment and scans of the patient deformities during their treatment. In some embodiments, the medical data for prior patient deformities can be derived from medical records generated during patient visits such as notes from the doctor, 2-dimensional images such as x-rays, measurements of force vectors such as spasticity and the like. In exemplary embodiments, the medical data can be converted to a point cloud that represents the three-dimensional model of the anatomical alignment(s) for the patient, the stereo-typical person or both.
[0021] In exemplary embodiments, the plurality of prior patient data includes an original image of the deformity of the patient(s), intermediate images of the deformity of the patient(s) and an image of the final corrected deformity of the patient(s). In some exemplary embodiments the plurality of prior patient data includes data generated from a plurality of scans or images of prior discarded casts of patients used during treatment of the deformity.
[0022] In exemplary embodiments, to simulate the plurality of trial force vectors for the next cast the system can generate a series of forces at a plurality of nodes in the three-dimensional model of the deformity and determine a resulting change in the anatomical alignment of the deformity after a certain period of time based on the application of the series of forces at the nodes.
[0023] In exemplary embodiments, to select the next force vector from the plurality of trial force vectors the system can determine which of the plurality of trial force vectors minimizes the deviation between the projected anatomical alignment and the next intermediate anatomical alignment based on a finite element analysis machine learning model, wherein the finite element analysis machine learning model is trained on medical data that can include a plurality of point clouds of next force vectors that were selected from the plurality of trial vectors during treatments in prior patients.
[0024] In exemplary embodiments, to select the next force vector the system can minimize the deviation between the projected anatomical alignment and the next intermediate anatomical alignment in multiple directions to reduce the deviation between the three- dimensional stereo-typical anatomical alignment and the three-dimensional model of the deformity in multiple directions simultaneously after the next cast is applied to the deformity.
[0025] In exemplary embodiments, to select the next force vector, the system can minimize the difference between the projected anatomical alignment and the next intermediate anatomical alignment in a direction with the maximum deviation between the three-dimensional stereo-typical anatomical alignment and the three-dimensional model of the deformity after the next cast is applied to the deformity.
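The selection criteria of paragraphs [0023]-[0025] can be illustrated as follows. Both helpers are hypothetical sketches: one minimizes the mean deviation in all directions simultaneously, the other minimizes it along the single axis where the current model deviates most from the target alignment.

```python
import numpy as np

def select_next_force(projected, target):
    """Pick the trial whose projected alignment best matches the target.

    `projected` is a (T, N, 3) array of projected alignments (one per
    trial force vector) and `target` an (N, 3) next intermediate
    anatomical alignment. Returns the index of the trial that minimizes
    the mean point-wise deviation.
    """
    deviations = np.linalg.norm(projected - target[None], axis=2).mean(axis=1)
    return int(np.argmin(deviations))

def select_next_force_worst_axis(projected, target, current):
    """Variant: minimize deviation along the axis where the current model
    deviates most from the target alignment."""
    axis = int(np.argmax(np.abs(current - target).sum(axis=0)))
    deviations = np.abs(projected[..., axis] - target[None, :, axis]).mean(axis=1)
    return int(np.argmin(deviations))

# Toy data: trial 1 lands closer to the target alignment than trial 0.
target = np.zeros((3, 3))
projected = np.stack([np.ones((3, 3)), np.full((3, 3), 0.1)])
best = select_next_force(projected, target)
```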
[0026] Referring now to FIG. 1A, a system 100 to capture an image of the deformity according to the present disclosure is provided. The system 100 includes a camera 102 (shown in FIG. 1A as an array of cameras) configured to capture a three-dimensional image of the deformity and a plurality of light sources 106. In an exemplary embodiment, the system 100 can include a calibration target 110. Examples of the calibration target 110 include checkerboard patterns, socks with or without identification patterns and the like. For example, the calibration target 110 can be a checkerboard pattern that can be attached to a flat board that can move relative to the camera 102 to acquire calibration images of the calibration target 110 with various poses relative to the camera 102. In an exemplary embodiment, the system 100 can use a calibration target 110 for subjects where the deformity is kept relatively still. The use of a calibration target with multiple cameras allows the system 100 to capture three-dimensional images and compensate for movement.

[0027] In an exemplary embodiment, the camera 102 can be an array of cameras. The system 100 can generate a three-dimensional image of the deformity by stitching all the images from the array of cameras. In an exemplary embodiment, the camera can be a digital camera, a video camera, an ultrasound imaging system, an MRI scanner or a CT scanner. The system 100 can compensate for movement of the subject using image processing to obtain an accurate representation of the deformity in three dimensions. In exemplary embodiments, the camera 102 can be a depth sensing camera (e.g., Microsoft Kinect™), which produces a point cloud of the deformity. In some embodiments, the system 100 can convert a three-dimensional image of the deformity into a point cloud.
In exemplary embodiments, the camera 102 can capture imagery data for generating a point cloud of the deformity based on discarded casts used during treatment of a patient. In an exemplary embodiment, the system 100 can acquire an image of the deformity from a mobile device such as a phone or tablet camera. The system 100 can receive images captured from a mobile device that capture the deformity from different angles. The system 100 can then stitch the images together to create a three-dimensional image.
[0028] In an exemplary embodiment, the system 100 can determine a deviation or average deviation on a point-by-point basis between a three-dimensional model of the deformity and the three-dimensional stereo-typical anatomical alignment of the human anatomy in question. For example, the system 100 can include a database or corpus storing medical data of individuals. The medical data of the individuals can include three-dimensional models of stereo-typical anatomical alignment of the human anatomy, three-dimensional models of deformities that deviate from the stereo-typical anatomical alignment based on medical records of prior patients with deformities and the like.
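A minimal sketch of the point-by-point deviation computation of paragraph [0028] follows. The names are hypothetical; for each point in the deformity cloud the nearest point in the stereo-typical reference cloud is found by brute force (a KD-tree would replace the O(M·N) search for large clouds) and the distances are averaged.

```python
import numpy as np

def average_deviation(deformity, reference):
    """Average point-by-point deviation between a deformity point cloud
    (M, 3) and a stereo-typical reference cloud (N, 3)."""
    diffs = deformity[:, None, :] - reference[None, :, :]   # (M, N, 3)
    dists = np.linalg.norm(diffs, axis=2)                   # (M, N)
    # Nearest reference point per deformity point, then the mean.
    return float(dists.min(axis=1).mean())

reference = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
deformity = reference + np.array([0.3, 0.0, 0.0])  # reference shifted in x
dev = average_deviation(deformity, reference)       # 0.3
```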
[0029] In exemplary embodiments, the system 100 can use one or more point clouds representing a three dimensional model of a cast or deformity to perform shape analysis instead of the more conventional mesh models. Traditionally, shape analysis methods have operated on solid and surface models of objects, especially tessellated models (i.e., triangular mesh surface models). Shape analysis is concerned with understanding the shape of models geometrically, topologically, and relationally. In some embodiments, the system 100 can use shape analysis to group the deformities in the database based on the type of the deformity, segment the deformities based on the shape into sub-shapes, and find complementary deformities based on the shape of the deformity. For example, the system 100 can query the database with the medical data using a point cloud of a deformity that has been acquired to return a matching point cloud of a similar deformity.
[0030] The system 100 can generate a next intermediate anatomical alignment in a series of three-dimensional alignments based on a machine learning model 116 trained on prior patient data from the database or corpus. In order to train the machine learning model, the system 100 can include a computing device 112. The computing device 112 can include a machine learning trainer 114 to generate the machine learning model 116. In an exemplary embodiment, the system 100 can generate a machine learning model based on supervised learning, unsupervised learning or reinforcement learning. In an exemplary embodiment, the machine learning trainer 114 can be implemented as a machine learning computing device that also stores and allows use of the machine learning model 116. The machine learning trainer 114 can analyze a set of training data that includes a classification of the data that the machine learning trainer 114 can use to calibrate its algorithm to identify what lies within a class or is outside a class. For example, a convolutional neural network or deep learning neural network trained on three-dimensional models of club foot can classify a new three-dimensional model acquired by the system 100 based on the trained machine learning model 116.
[0031] The system 100 can generate the machine learning model 116 that can be used to generate a next intermediate three-dimensional anatomical alignment in a series of intermediate three-dimensional anatomical alignments to correct the orthopedic deformities. For example, the system 100 can receive training data that includes medical data of patients who were treated to correct a deformity such as three-dimensional images of the uncorrected deformity, three-dimensional images of the intermediate anatomical alignments achieved during treatment of the deformity, and the three-dimensional images of the final corrected anatomical alignment achieved after treatment. In some embodiments, the training data can also include three-dimensional anatomical alignments of stereo-typical persons. In some embodiments, the training data can be generated from discarded casts of patients who were treated for a deformity. The system 100 can use the prior discarded casts to approximate the deformity at each stage of the correction process where the three-dimensional images of the foot are not available. When treating clubfoot, the training data can include patient data obtained during the treatment of a patient using the Ponseti method. In an exemplary embodiment, machine learning models analyze data from a plurality of prior patients to identify mean shapes and shape variations. The system 100 can then determine boundary conditions of the machine learning model 116 to classify a new three-dimensional surface model of a deformity acquired from a new patient to determine whether the deformity of the new patient falls within the boundary of a particular type of deformity or to identify a similar deformity in the prior medical records that closely match the shape, type or both. The system 100 can then determine a next three-dimensional intermediate anatomical alignment in a series of anatomical alignments based on the prior identified deformity.
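The identification of mean shapes and shape variations mentioned in paragraph [0031] is commonly done with principal component analysis; the sketch below (hypothetical names and synthetic data, not the disclosed model) illustrates one way this could look for point clouds whose points are in correspondence across patients.

```python
import numpy as np

def shape_statistics(clouds, n_modes=2):
    """Mean shape and principal modes of variation from corresponding
    point clouds.

    `clouds` is a (P, N, 3) array of P prior-patient clouds whose points
    are assumed to be in correspondence. Each cloud is flattened to a
    3N-vector and a PCA via SVD yields the dominant shape variations.
    """
    P, N, _ = clouds.shape
    flat = clouds.reshape(P, N * 3)
    mean = flat.mean(axis=0)
    centered = flat - mean
    # Rows of vt are orthonormal modes of variation, strongest first.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean.reshape(N, 3), vt[:n_modes].reshape(n_modes, N, 3), s[:n_modes]

rng = np.random.default_rng(1)
base = rng.random((10, 3))
# Synthetic "patients": the base shape stretched along x by varying amounts.
clouds = np.stack([base + np.array([a, 0, 0]) * base[:, :1]
                   for a in np.linspace(-0.5, 0.5, 8)])
mean_shape, modes, weights = shape_statistics(clouds)
```

Because the synthetic variation is one-dimensional, the first mode captures essentially all of it and the second singular value is near zero.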
[0032] In an exemplary embodiment, the machine learning model 116 can be trained to output the desired correction angles to correct the deformity to the next intermediate alignment, i.e., the desired angles of correction that will result in the next three-dimensional anatomical alignment for correcting the deformity.
[0033] In an exemplary embodiment, the system 100 can train the machine learning model based on prior patient data for a plurality of patients such as an original three-dimensional image of the deformity, images of intermediate stages of correction of the deformity and the final image of the corrected deformity. In exemplary embodiments, the system can use three-dimensional scans of prior discarded casts of patients to determine the original deformity, stages of correction of the deformity and the final corrected deformity.
[0034] The system 100 can simulate trial force vectors for a next cast that can correct the deformity to the next intermediate anatomical alignment using finite element analysis at the nodes of the three-dimensional model of the deformity. The system 100 can simulate the projected anatomical alignment for each trial force vector post-treatment with the next cast based on finite element analysis of the three-dimensional model of the deformity. For example, the system 100 can apply finite element analysis with the trial force vectors acting at the nodes of the three-dimensional model of the deformity to determine the projected anatomical alignment post-treatment with the next cast.
[0035] In an exemplary embodiment, the system 100 can generate training data for the machine learning model based on modelling and analysis software such as ANSYS. Modelling and simulation software can be used to deform a three-dimensional CAD model of a normal foot into a plurality of virtually generated CAD models (e.g., 500 models), for example, clubfoot CAD models, with different degrees and angles of deformity potentially seen during the correction sequence. In an embodiment, the system 100 can use supervised learning and receive inputs from an orthopedic surgeon (e.g., pediatric orthopedic surgeon) to review the models for accuracy. The system 100 can export the CAD models as point cloud models for anatomical classification/labeling of the generated models for the machine learning model. The system 100 can use the point cloud model of the foot to identify the severity of the clubfoot deformity by determining the amount of deviation of the foot with respect to normal pose in four different directions as shown in FIG. 1B. FIG. 1B illustrates the deformities in clubfoot such as the equinus deformity 124, the varus deformity 126, the calcaneopedal derotation 128 and the horizontal plane deformity relative to hindfoot 130. The system 100 can capture the variation in these deformities using the camera 102. In an exemplary embodiment, the machine learning model can be evolved to improve the accuracy of the model over time.
[0036] The system 100 can use a deep learning method such as PointNet to process the point cloud models. PointNet is an open source platform for classification of point cloud models. Since point cloud models are randomly oriented, the system fits a bounding box to each model and normalizes the point cloud so that it is always aligned in a consistent direction before it is fed into the deep learning network as an input dataset. The system 100 can use PointNet to classify different stages of an orthopedic skeletal deformity, for example, clubfoot deformity. In an exemplary embodiment, the system 100 can use the point cloud CAD models generated using simulation software to train a deep learning network to objectively classify and label each patient's unique foot deformity compared to a normal foot. The system 100 can then train the network to predict the cast series for each subject patient in this study. The system 100 can use supervised learning based on inputs obtained by presenting an orthopedic skeletal deformity model, for example, a clubfoot model, to one or more orthopedic surgeons. In an exemplary embodiment, the surgeons can be presented with a selection of candidate foot correction models (e.g., out of five hundred foot models) from which to choose the next model in the correction series based on the Ponseti method. The system 100 can then receive inputs from the doctors on a consensus basis and select the next correction phase out of the selection of candidate foot models (e.g., 10 models). Over the course of multiple rounds of selection (e.g., 500 rounds), the system 100 trains the deep learning network to search the training dataset and output the subsequent cast for deformity correction.
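The bounding-box normalization applied before a PointNet-style network can be sketched as follows (hypothetical function name; orientation alignment, e.g. via PCA axes, is omitted and would be applied on top of this in a full pipeline):

```python
import numpy as np

def normalize_point_cloud(points):
    """Normalize a randomly oriented point cloud before feeding it to a
    PointNet-style network.

    The cloud is centered on its bounding-box center and scaled so the
    longest bounding-box edge has unit length, giving every model a
    consistent, scale-independent representation.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    center = (lo + hi) / 2.0
    scale = (hi - lo).max()
    return (points - center) / scale

cloud = np.array([[0.0, 0, 0], [4, 2, 0], [2, 1, 1]])
norm = normalize_point_cloud(cloud)
```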
[0037] In an exemplary embodiment, the system 100 can generate an STL file for three-dimensional printing using a three-dimensional printer 122.
[0038] The system 100 can select a next force vector for the next cast from the trial force vectors by identifying the force vector that minimizes the difference between the projected anatomical alignment and the next intermediate anatomical alignment. In accordance with the teachings herein, using the next force vector in the next cast results in an average difference in the range of 2 mm to 20 mm between a point in a point cloud of the next cast and a corresponding point in a point cloud of a prior cast for a similar correction in prior patient data. The system 100 can determine the shape and dimensions of the next cast within a range of 2 mm to 20 mm when compared with a trained and skilled doctor with years of experience performing the casting using subjective parameters.
[0039] In exemplary embodiments, the system 100 can select a next force vector for the next cast from the trial force vectors by identifying the force vector that minimizes the difference between the projected anatomical alignment and the next intermediate anatomical alignment. In accordance with the teachings herein, the next force vector for the next cast results in an average difference in the range of 0.5 degrees to 4 degrees between the angles in a point cloud of the next cast and the corresponding angles in a point cloud of a prior cast for a similar correction in prior patient data.
[0040] Referring to FIG. 1C, the system 100 can determine the dimensions and shape of the next cast in a series of casts for correcting the deformity to a high degree of accuracy without the subjective judgement of a highly skilled doctor. The system 100 can generate a point cloud 138 of the three-dimensional scan of the cast 136 of a prior patient as shown in FIG. 1C. The system 100 can then determine a simulated angle of derotation 140. For a given deformity, FIG. 1C illustrates a point cloud of the simulated cast 134 generated by the system 100 compared to the point cloud of a real cast 132 placed by a highly skilled orthopedist to correct the deformity. When using the system 100 to generate the next cast, the average difference between the dimensions of the point cloud of the simulated cast 134 and the point cloud of the real cast 132 can be in the range of 2 mm to 20 mm on a point-to-point basis between corresponding points in a point cloud of the simulated next cast 134 and a point cloud of a prior cast 132 for a similar correction in prior patient data. When using the system 100 to generate the next cast, the average difference can be in the range of 0.5 degrees to 4 degrees between the angles in a point cloud of the simulated next cast 134 and the corresponding angles in a point cloud of a prior cast 132 for a similar correction in a prior patient. In an exemplary embodiment, the average difference between the angles of the real cast 132 and the simulated cast 134 was 0.93 degrees for equinus and 0.74 degrees for de-rotation. In an exemplary embodiment, the average difference in three-dimensional point clouds between the corresponding points of the real cast 132 and the simulated cast 134 was 2.305 mm.

[0041] Referring to FIG. 1D, the table represents the difference between the angles on the simulated next cast 134 and the corresponding angles in a point cloud of a prior cast 132 for a similar correction in a prior patient. For example, the table illustrates the difference between the angles in a point cloud of the simulated next cast 134 and the corresponding angles in a point cloud of a prior cast 132 for the equinus correction 136 and the de-rotation correction 138, as well as the average difference 140 between the point clouds.
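The reported point and angle differences between simulated and real casts correspond to simple averages over corresponding points and angles, which can be sketched as follows (hypothetical names and toy values, not the disclosed measurement procedure):

```python
import numpy as np

def cast_comparison(simulated, real, sim_angles, real_angles):
    """Compare a simulated next cast against the cast a clinician placed.

    `simulated` and `real` are (N, 3) point clouds with corresponding
    points; `sim_angles` and `real_angles` are matching correction angles
    in degrees (e.g. equinus and de-rotation). Returns the mean point
    distance and the mean absolute angle difference.
    """
    point_diff = float(np.linalg.norm(simulated - real, axis=1).mean())
    angle_diff = float(np.abs(np.asarray(sim_angles)
                              - np.asarray(real_angles)).mean())
    return point_diff, angle_diff

sim = np.zeros((5, 3))
real = np.full((5, 3), 1.0)  # every corresponding point offset by (1, 1, 1)
pd, ad = cast_comparison(sim, real, [20.0, 15.0], [19.0, 14.5])
```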
[0042] Referring to FIG. 2, the system 100 can generate a cast or a series of casts as shown in FIG. 2 to correct the orthopedic deformity. In an exemplary embodiment, the system 100 can generate a cast or a series of casts that can be three-dimensional printed. The system 100 determines a force vector or a set of force vectors to correct the orthopedic deformity based on three-dimensional imagery of the deformity, and the machine learning model 116 can generate the next intermediate anatomical alignment for correcting the deviation. In an exemplary embodiment, the next intermediate anatomical alignment can be the desired corrected angles for correcting the deformity in the next cast. In an exemplary embodiment, the system 100 can determine the shape and geometry of a cast or a series of casts 202-210 that exert the determined force vector or set of force vectors that are tailored to the patient. In an exemplary embodiment, the system 100 can determine the force vectors for the series of casts 202-210 to correct the deformed foot 214 with three-dimensional deformities shown along the x, y and z axes to arrive at the corrected foot 212. In an exemplary embodiment, the system 100 selects the force vector or set of force vectors such that the areas 216, which are structurally designed in the normal foot of a child to distribute the load when walking, are in the same plane and perform the load-bearing function once the foot is corrected.
[0043] It can be appreciated that, depending on the baseline anatomical shape and arrangement and an anatomical rearrangement goal, or target, an appropriate serial casting strategy can be developed. For instance, not all patients may need the same number of casts.
In fact, a patient may require fewer casts when the deformities to the internal anatomy of the foot are less severe. In other cases, the deformity to the underlying anatomy may be significant and more casts may be prescribed. Being able to combine this internal information, however, with exterior data of the surface of the foot allows for generation of three-dimensional printed ‘corrective’ casts that are patient-specific.
[0044] Referring now to FIGS. 3A-3E, the system 100 can determine the reference points for generating the corrective plane. FIGS. 3A and 3B illustrate the reference points for generating the corrective planes. FIG. 3C illustrates the before-correction cavus image 310 and the after-correction cavus image 312 generated by the system 100 using finite element analysis. FIG. 3C also illustrates the before-correction adductus image 314 and the after-correction adductus image 316 generated by the system 100 using finite element analysis. FIG. 3D and FIG. 3E illustrate the before-correction varus image 318 and the after-correction varus image 320 (from two different points of view) generated by the system 100 using finite element analysis.
[0045] In an exemplary embodiment, the system 100 can determine the reference points for an adductus deformity based on the big toe 305, little toe 303, and ankle 301 as shown in a three-dimensional image of the adductus deformity 302 in FIG. 3A. In an exemplary embodiment, the system 100 can use these reference points to generate the reference plane 323 as shown in FIG. 3B. The system 100 can use the reference plane 323 to determine the force vectors to correct the adductus deformity 314, 316 as shown in FIG. 3C.
[0046] The system 100 can determine the reference points for correcting an equinus deformity based on the ankle 311, heel 313, mid plantar of foot 315, and thigh 307 as shown in a three-dimensional image of the equinus deformity 304 in FIG. 3A. In an exemplary embodiment, the system 100 can use these reference points to generate the reference plane 325 to determine the force vectors to correct the equinus deformity as shown in FIG. 3B. The system 100 can then determine the force vector or set of force vectors to correct an equinus deformity based on the reference plane 327.
[0047] The system 100 can determine the reference points for correcting a cavus deformity based on the big toe 319, middle of toes 317, pinky toe 321 and heel 315 as shown in a three-dimensional image of the cavus deformity 306 in FIG. 3A. The system 100 can determine the reference plane 323 based on these reference points as shown in FIG. 3B. The system 100 can then determine the force vector or set of force vectors to correct the cavus deformity 310, 312 as shown in FIG. 3C.
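The reference planes described above are each defined by three or more anatomical reference points; a plane's unit normal and the angle between two such planes can be computed as below. This is a generic geometric sketch with hypothetical names, not the disclosed algorithm.

```python
import numpy as np

def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three anatomical reference points
    (e.g. big toe, little toe and ankle for an adductus deformity)."""
    n = np.cross(np.asarray(p1) - p0, np.asarray(p2) - p0)
    return n / np.linalg.norm(n)

def angle_between_planes(n1, n2):
    """Angle in degrees between two planes given their unit normals."""
    c = np.clip(abs(float(np.dot(n1, n2))), 0.0, 1.0)
    return float(np.degrees(np.arccos(c)))

# Toy case: the xy-plane versus the xz-plane, which meet at 90 degrees.
n_xy = plane_normal([0, 0, 0], [1, 0, 0], [0, 1, 0])
n_xz = plane_normal([0, 0, 0], [1, 0, 0], [0, 0, 1])
angle = angle_between_planes(n_xy, n_xz)  # 90.0
```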
[0048] The corresponding FIG. 3B illustrates the identification of four different planes using the reference points shown in FIG. 3A. In an exemplary embodiment, the system 100 can determine these reference points using an image processing algorithm or a machine learning algorithm that recognizes features of the limb. In an exemplary embodiment, the machine learning model can be trained on images of deformed feet to identify these features.
[0049] With reference to FIGS. 4A, 4B and 4C, the system 100 can be configured to determine the boundary conditions of the deformity based on a machine learning model. In an exemplary embodiment, the boundary conditions of the deformity can be the desired angles for the corrected foot 402. FIG. 4A illustrates the process of generating a final model of force vectors for correcting a deformity based on the boundary conditions and the three-dimensional model of the deformity. FIG. 4B illustrates the process of generating a solid model based on three-dimensional scans. FIG. 4C illustrates the force vectors at the reference points determined by the system 100 in accordance with an exemplary embodiment described herein.
[0050] The system 100 can obtain a three-dimensional image or data 404 of the deformity. In exemplary embodiments, the system 100 can generate a three-dimensional model of the deformity either as a solid object or as a point cloud. FIG. 4B illustrates a method of generating a three-dimensional solid object in a modelling software based on the images of the foot. The system 100 can convert the three-dimensional scan images of the deformity into an STL file 414. The system 100 can then post-process the STL file 414 to fill in any missing information using a post-processing tool (e.g., SpaceClaim). The system 100 can then convert the post-processed file into a solid three-dimensional object in the modelling software for further analysis.
[0051] Returning to FIG. 4A, the system 100 can determine multiple separate planes for the deformity as shown in 406 to serve as a reference between the position of the deformity and the expected or normal mean position of the limb or other appendage. The system 100 can select the number of planes based on the geometry of the deformity, the degrees of freedom of the deformity, the deviation of the deformity from a statistical normal limb or appendage and the like. In an exemplary embodiment, the system 100 can determine the reference points as described above with reference to FIGS. 3A-3E. In an exemplary embodiment, the system 100 can select four different planes based on a machine learning model for club foot. In another example, the four planes can be selected with inputs from a doctor. The system 100 can use the machine learning trainer 114 to determine a four-plane machine learning model that identifies the appropriate planes to use for correction.
[0052] For example, the machine learning trainer 114 can use data from a plurality of prior patients that includes the planes that were selected for correction for each patient, the geometry of the deformity, and the outcome of the corrective effort. The system 100 can then fit the three-dimensional data 404 of the deformity based on the trained four-plane machine learning model. Once the four planes are identified, the system 100 can use finite element analysis 408 to determine the force vectors for correcting the deformity in each plane.

[0053] In an exemplary embodiment, the system 100 can determine the force vectors at the reference points as illustrated in FIG. 4A and FIG. 4C using finite element analysis. The system 100 can determine the force vectors 422 as shown in FIG. 4C at the reference points for correcting the deformity. In an exemplary embodiment, the system 100 can apply a corrective force and determine the predicted correction, such as the predicted angles for the corrected foot, based on the applied force. The system 100 can then compare it with the boundary conditions such as the desired angles for the corrected foot. The system 100 can iterate or simulate 410 for various force corrections and then update the force vectors to reduce or minimize the error.
[0054] In an exemplary embodiment, the system 100 can use the boundary conditions 402 and the angle between the four planes and the boundary conditions 402 to determine the force vectors required during finite element analysis for correcting the deformity in each plane and generating the next cast in the series of casts. In an exemplary embodiment, the system 100 can determine the angle between the four planes using the modelling tools (e.g., ANSYS, MATLAB or both). Although FIG. 4A illustrates the use of two modelling tools (e.g., ANSYS and MATLAB) to perform the various methods, in an exemplary embodiment the system 100 can use one or more modelling tools to perform the various methods.
[0055] For example, the system 100 can use the boundary conditions to iterate through a series of force vectors to minimize the error between the boundary condition and the results of applying a particular force vector in a particular plane. The system 100 can run a series of simulations using a trial correction and then determine the probable corrected deformity. As shown in FIG. 4A, the system 100 can iterate over a number of simulations until a force vector or a set of force vectors for the next cast in the series, such as the final model 412, is obtained.
The system can determine the force vector or set of force vectors with the minimum deviation from the boundary condition using the iterative process. In an exemplary embodiment, when the angle between the boundary conditions 402 and the probable corrected deformity is minimal, the error is minimum. In an exemplary embodiment, the system 100 can track in real-time the angle between the boundary condition and the predicted or estimated corrected plane if a force vector or set of force vectors is applied for each simulation 412.
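The iterative force-correction loop of paragraphs [0053]-[0055] can be sketched with a scalar toy model. The `predict_angle` callable here is a hypothetical stand-in for the finite element simulation, and the proportional update rule is an illustrative choice, not the disclosed iteration scheme.

```python
def iterate_force(desired_angle, predict_angle, f0=0.0, step=0.5, n_iter=200):
    """Iterate a scalar corrective force until the predicted correction
    angle matches the desired boundary-condition angle.

    Each round nudges the force proportionally to the remaining angle
    error, stopping when the error falls below a tolerance.
    """
    f = f0
    for _ in range(n_iter):
        error = desired_angle - predict_angle(f)
        if abs(error) < 1e-6:
            break
        f += step * error
    return f

# Toy simulator: each unit of force yields 2 degrees of correction,
# so a desired 10-degree correction needs a force of 5.
predict = lambda f: 2.0 * f
force = iterate_force(desired_angle=10.0, predict_angle=predict)
```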
[0056] In an exemplary embodiment, the system 100 can generate data for split casts based on the final model 412. In an exemplary embodiment, the split cast can include a portion that is not changed during at least a part of the series of casts and a portion that is updated during the next cast in the series of casts.

[0057] In exemplary embodiments, the system 100 can use the machine learning trainer 114 to determine the finite element analysis machine learning model. The finite element analysis machine learning model can be based on force vectors that correspond to points in the point cloud determined from prior simulations for a plurality of prior patients. For example, the data for the plurality of prior patients used to train the finite element analysis machine learning model can include the force vectors generated using simulations for a next cast in a series of casts, the boundary conditions used to arrive at the force vectors, and the correction achieved, as evident from the subsequent three-dimensional image of the deformity after it was corrected with the cast. In exemplary embodiments, the finite element analysis machine learning model can generate a force vector or a set of force vectors for a cast or a series of casts based on the boundary conditions given manually or obtained from the finite element analysis machine learning model.
[0058] The system 100 can run supervised learning, unsupervised learning, reinforcement learning algorithms or any combination thereof. Examples of machine learning algorithms that can be implemented via the computing device 112 can include, but are not limited to, Linear Regression, Logistic Regression, Decision Tree, Support Vector Machine, Naive Bayes, k-Nearest Neighbors, k-Means, Random Forest, Dimensionality Reduction algorithms, and Gradient Boosting algorithms such as GBM, XGBoost, LightGBM and CatBoost.
[0059] Examples of supervised learning algorithms that can be used in the computing device 112 can include regression, decision tree, random forest, k-Nearest Neighbors, Support Vector Machine, and Logistic Regression. Examples of unsupervised learning algorithms that may be used in the computing device 112 include the Apriori algorithm and k-Means. Examples of reinforcement learning algorithms that may be used in the computing device 112 include Markov decision processes.
[0060] Referring to FIG. 5, the system 100 can apply finite element analysis manually. At step 502, the system 100 can generate a three-dimensional scanned stereolithography (STL) file based on the three-dimensional image from the camera 102. At step 504, the STL file can be post-processed to clean up any irregularities. For example, the system 100 can use image processing algorithms to remove any imperfections in the STL file, such as those from motion during capture of the three-dimensional image. At step 506, the system 100 can convert the three-dimensional image to a three-dimensional solid model. For example, the system 100 can use the multiple points present in the STL file to generate a solid shape of the deformity, connecting the points using extrapolation to generate a surface instead of multiple discrete points. At step 508, the system 100 can create reference points for generating the planes for finite element analysis of the force vector or set of force vectors to be applied to the deformity to correct the deformity. In an exemplary embodiment, the system 100 can receive a selection of reference points for generating a correction plane from the doctor. In another example, a machine learning algorithm can select the reference points based on a trained machine learning model as described herein above. At step 510, the system 100 can simulate the application of the force vector to the deformity and the effect of the force vector on the points of support for the deformity. At step 512, the system 100 can calculate the deformation and the stresses when the force vector is applied. For example, the system 100 can determine the deformation of the deformity and the stresses on the deformity when the force vector or set of force vectors is applied via a cast. At step 516, the system can calculate the deformation on the reference points for plane creation.
For example, the system 100 can determine the deformation on the chosen reference points in the deformity to determine the effect of the force vectors on the deformity. At step 514, the system 100 can calculate the angle between the planes of the selected reference plane and the desired boundary condition 502. At step 518, the system can convert the generated final model data into a next cast in the series of casts. The system 100 can convert the final model into an STL file for three-dimensional printing.
[0061] Referring now to FIG. 6A, the method of the present disclosure will now be described with reference to the flowchart. At step 660 of process 655, three-dimensional images of the deformity can be acquired. In an embodiment, the three-dimensional images of the deformity can be acquired by a mobile device of a parent. The three-dimensional imaging can include depth mapping of the foot of the patient. In an embodiment, the three-dimensional imaging can include ultrasound for the determination of internal biological structures of the foot. In an exemplary embodiment, the combination of the two above-described three-dimensional imaging modalities allows for improved cast planning by considering the internal structures in addition to the outward appearance.
[0062] At sub process 665 of process 655 and based upon the acquired three-dimensional images of the patient anatomy, casting stages of patient anatomy movement can be predicted. The casting stages can be predicted via the force vector modeling described herein above. In one instance, this prediction can include computer predictive modeling and finite element analysis of the foot wherein stresses, deformations of the structures of the foot or both are considered from one stage to the next. Sub process 665 will be further described with reference to FIG. 6B. At step 670 of process 655, virtual models of three-dimensional casts can be generated for anticipated patient anatomy movements at each predicted stage. Such virtual models of three-dimensional casts can allow for visualization and modification according to real-world constraints. At step 675 of process 655, three-dimensional casts, similar to that of FIG. 5, can be generated for each virtual model at each predicted stage of patient anatomy movement.
[0063] With reference to FIG. 6B, sub process 665 of process 655 includes determining the number of stages and the trajectory of each predicted stage of movement. At step 666 of sub process 665, a baseline patient anatomy can be established according to the acquired images of the patient anatomy. Accordingly, at step 668 of sub process 665, a target patient anatomy can be selected, the target being an end-goal shape of the structure of the foot.
[0064] At step 667 of sub process 665, the patient anatomy movement at each stage can be determined. This determination can include movements of structures of the foot. In an embodiment, such movements can be determined in the context of the Ponseti stages and include, for instance, performing specific angular rotations at specific stages. In an embodiment, such movements can be optimized at each stage such that maximum movement is achieved without creating undue mechanical and/or biological stresses. For instance, each stage may be determined such that the von Mises stress remains below a threshold value.
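The von Mises criterion mentioned above can be sketched as follows. The formula is the standard equivalent-stress expression from the six Cauchy stress components; the threshold value is a placeholder for illustration, not a clinical figure:

```python
import math

YIELD_THRESHOLD_KPA = 50.0  # illustrative placeholder, not a clinical limit

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Von Mises equivalent stress from the six Cauchy stress components
    (three normal stresses, three shear stresses)."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

def stage_is_safe(stress_components):
    """True if a candidate stage keeps the equivalent stress below the threshold."""
    return von_mises(*stress_components) < YIELD_THRESHOLD_KPA
```

For uniaxial loading, `von_mises(10, 0, 0, 0, 0, 0)` reduces to the applied stress, 10.0.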
[0065] At step 669 of sub process 665, the current position of the patient anatomy can be compared with the target patient anatomy position from step 668. If the two values are equal, for instance, only a single stage of casting may be required and the determined patient anatomy movement can be used to generate a virtual model of a necessary three-dimensional cast at step 670. If, however, the current position and the target patient anatomy do not match, a successive stage of patient anatomy movement is required and the sub process 665 returns to step 667.
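The loop of steps 667 and 669 can be sketched as below. The one-dimensional angular representation of anatomy position and the 15-degree per-cast limit are illustrative assumptions, not clinical guidance:

```python
def plan_stages(current_deg, target_deg, max_step_deg=15.0):
    """Sketch of sub process 665: advance the anatomy toward the target in
    the largest allowed per-cast increments (step 667), looping until the
    current and target positions match (step 669). Returns the position
    reached after each cast."""
    stages = []
    while abs(target_deg - current_deg) > 1e-9:
        # take the largest step toward the target without overshooting
        step = max(-max_step_deg, min(max_step_deg, target_deg - current_deg))
        current_deg += step
        stages.append(current_deg)
    return stages
```

Under these assumptions, a -60 degree baseline reaches a 0 degree target in four stages while a -80 degree baseline needs six, mirroring the four-cast versus six-cast example in paragraph [0066].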
[0066] According to an embodiment, in this way, the number of stages, or casts, required to be fitted to a patient is dependent upon the severity of the deformity and the ability to move the patient anatomy at each stage. In the case of clubfoot, this can mean the difference between manufacturing four casts in one instance and six casts in another, thereby allowing each patient to receive only the minimum necessary number of casts.
[0067] According to an embodiment, the above-described method of FIG. 6A and FIG. 6B can be performed with only external features gathered via, for instance, depth mapping data. External features can be processed similarly to Schoenecker, et al., Systems and methods for serial treatment of a muscular-skeletal deformity, U.S. Patent Application Publication No. US 2017/0091411 A1, incorporated herein by reference.
[0068] According to an embodiment, the external features can be applied to a machine learning algorithm in order to generate patient anatomy predictions without the need for ultrasound imaging. For instance, a library of corresponding images of a foot may be stored.
[0069] The corresponding images can include images of the external features of the foot and corresponding images of the internal features of the foot. In this way, the machine learning algorithm, a convolutional neural network in an exemplary embodiment, can be trained to correlate external features with internal features. Therefore, when provided with an external feature of an unknown foot, the machine learning algorithm can generate a corresponding internal feature structure that can be used in determining patient anatomy movements during stage planning. The library of corresponding images can be a corpus of patient data acquired from patients of a similar diagnosis and healthy patients.
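The external-to-internal correspondence the network learns can be illustrated with a simple nearest-neighbour lookup over the stored library of paired records; this is a hand-rolled stand-in for the trained convolutional neural network, and the flat feature vectors (rather than full images) are an assumption made for brevity:

```python
def predict_internal(external, library):
    """library: list of (external_features, internal_features) pairs, one per
    stored foot. Returns the internal features paired with the closest
    external match -- a stand-in for the trained network of paragraph [0069]."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, internal = min(library, key=lambda pair: sq_dist(pair[0], external))
    return internal
```

Given an unknown foot's external features, the lookup returns the internal structure recorded for the most similar foot in the library, which can then feed the stage-planning step.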
[0070] FIG. 7 is a block diagram of an exemplary embodiment of computing device 112 in accordance with embodiments of the present disclosure. The computing device 112 can include one or more non-transitory computer-readable media for storing one or more computer-executable instructions or software for implementing exemplary embodiments. The non-transitory computer-readable media can include, but are not limited to, one or more types of hardware memory, non-transitory tangible media (for example, one or more magnetic storage disks, one or more optical disks, one or more flash drives), and the like. For example, memory 119 included in the computing device 112 can store computer-readable and computer-executable instructions or software for performing the operations disclosed herein. For example, the memory 119 can store a software application 640 which is configured to perform several of the disclosed operations (e.g., generating the three-dimensional model of the deformity, simulating the trial force vectors, and generating the virtual cast models). The computing device 610 can also include a configurable and/or programmable processor 120 and associated core(s) 614, and optionally, one or more additional configurable and/or programmable processing devices, e.g., processor(s) 612’ and associated core(s) 614’ (for example, in the case of computational devices having multiple processors/cores), for executing computer-readable and computer-executable instructions or software application 640 stored in the memory 119 and other programs for controlling system hardware. Processor 120 and processor(s) 612’ can each be a single-core or multiple-core (614 and 614’) processor.
[0071] Virtualization can be employed in the computing device 610 so that infrastructure and resources in the computing device can be shared dynamically. A virtual machine 624 can be provided to handle a process running on multiple processors so that the process appears to be using only one computing resource rather than multiple computing resources. Multiple virtual machines can also be used with one processor.
[0072] Memory 119 can include a computational device memory or random access memory, such as DRAM, SRAM, EDO RAM, and the like. Memory 119 can include other types of memory as well, or combinations thereof.
[0073] A user can interact with the computing device 710 (shown in FIG. 1 as 112) through a visual display device 701, such as a computer monitor, which can display one or more user interfaces 742 that can be provided in accordance with exemplary embodiments. The computing device 710 can include other I/O devices for receiving input from a user, for example, a keyboard or any suitable multi-point touch interface 718, and a pointing device 720 (e.g., a mouse). The keyboard and the pointing device 720 can be coupled to the visual display device 701. The computing device 710 can include other suitable conventional I/O peripherals.
[0074] The computing device 710 can also include one or more storage devices, such as a hard drive, CD-ROM, or other computer-readable media, for storing data and computer-readable instructions and/or software that perform the operations disclosed herein. Exemplary storage device 734 can also store one or more databases for storing any suitable information required to implement exemplary embodiments. The databases can be updated manually or automatically at any suitable time to add, delete, or update one or more items in the databases.
[0075] The computing device 710 can include a communication device 744 configured to interface via one or more network devices 732 with one or more networks, for example,
Local Area Network (LAN), Wide Area Network (WAN) or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (for example, 802.11, T1, T3, 56kb, X.25), broadband connections (for example, ISDN, Frame Relay, ATM), wireless connections, controller area network (CAN), or some combination of any or all of the above. The communication device 744 can include a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem, radio frequency transceiver, or any other device suitable for interfacing the computing device 710 to any type of network capable of communication and performing the operations described herein. Moreover, the computing device 710 can be any computational device, such as a workstation, desktop computer, server, laptop, handheld computer, tablet computer, or other form of computing or telecommunications device that is capable of communication and that has sufficient processor power and memory capacity to perform the operations described herein.
[0076] The computing device 710 can run any operating system 726, such as any of the versions of the Microsoft® Windows® operating systems, the different releases of the Unix and Linux operating systems, any version of the MacOS® for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, or any other operating system capable of running on the computing device and performing the operations described herein. In exemplary embodiments, the operating system 726 can be run in native mode or emulated mode. In an exemplary embodiment, the operating system 726 can be run on one or more cloud machine instances.
[0077] In describing exemplary embodiments, specific terminology is used for the sake of clarity. For purposes of description, each specific term is intended to at least include all technical and functional equivalents that operate in a similar manner to accomplish a similar purpose. Additionally, in some instances where a particular exemplary embodiment includes a plurality of system elements, device components or method steps, those elements, components or steps may be replaced with a single element, component or step. Likewise, a single element, component or step may be replaced with a plurality of elements, components or steps that serve the same purpose. Moreover, while exemplary embodiments have been shown and described with reference to particular embodiments thereof, those of ordinary skill in the art will understand that various substitutions and alterations in form and detail may be made therein without departing from the scope of the invention. Further still, other aspects, functions and advantages are also within the scope of the invention.
[0078] Exemplary flowcharts are provided herein for illustrative purposes and are non-limiting examples of methods. One of ordinary skill in the art will recognize that exemplary methods may include more or fewer steps than those illustrated in the exemplary flowcharts, and that the steps in the exemplary flowcharts may be performed in a different order than the order shown in the illustrative flowcharts.

Claims

What is claimed is:
1. A system for modelling of force vectors for a cast to correct orthopedic deformities, the system comprising a processor programmed to execute machine readable instructions to: receive an image of a deformity; generate a three-dimensional model of the deformity based on the image of the deformity; determine a deviation between a three-dimensional stereotypical anatomical alignment and the three-dimensional model of the deformity; generate a next intermediate anatomical alignment in a series of intermediate anatomical alignments for correcting the deviation based on a machine learning model trained on a plurality of prior patient data; simulate a plurality of trial force vectors for a next cast to correct the deformity to the next intermediate anatomical alignment, wherein the simulation produces a projected anatomical alignment post-treatment with the next cast on the deformity based on finite element analysis for each of the trial force vectors; and identify a next force vector from the plurality of trial force vectors that minimizes a difference between the projected anatomical alignment and the next intermediate anatomical alignment, such that using the next force vector for the next cast results in an average differential in a range of 2 mm to 20 mm between a point in a point cloud of the next cast and a corresponding point in a point cloud of a prior cast for a similar correction.
2. The system of claim 1, wherein the system includes a camera.
3. The system of claim 2, wherein the camera is an array of cameras, an ultrasound, a three-dimensional scanner, a magnetic resonance imaging device, or a CT scanner.
4. The system of claim 1, wherein the plurality of prior patient data includes an original image of the deformity of the patients, intermediate images of the deformity of the patients, and an image of a final corrected deformity of the patients.
5. The system of claim 1, wherein the plurality of prior patient data includes data generated from a plurality of scans of prior discarded casts of patients used during treatment of the deformity.
6. The system of claim 1, wherein the three-dimensional model of the deformity is a point cloud.
7. The system of claim 1, wherein to simulate the plurality of trial force vectors for the next cast the processor is programmed to: generate a series of forces at a plurality of nodes in the three-dimensional model of the deformity; and determine a resulting change in the anatomical alignment of the deformity after a certain period of time based on an application of the series of forces at the nodes.
8. The system of claim 1, wherein to select the next force vector from the plurality of trial force vectors the processor is programmed to: determine which of the plurality of trial force vectors minimizes the deviation between the projected anatomical alignment and the next intermediate anatomical alignment based on a finite element analysis machine learning model, wherein the finite element analysis machine learning model is trained on medical data that includes a plurality of point clouds of next force vectors that were selected form the plurality of trial vectors during treatments in prior patients.
9. The system of claim 1, wherein to select the next force vector the system is programmed to: minimize the deviation between the projected anatomical alignment and the next intermediate anatomical alignment in multiple directions to reduce the deviation between the three-dimensional stereotypical anatomical alignment and the three-dimensional model of the deformity in a plurality of directions simultaneously after the next cast is applied to the deformity.
10. The system of claim 1, wherein to select the next force vector the system is programmed to: minimize the difference between the projected anatomical alignment and the next intermediate anatomical alignment in a direction with the maximum deviation between the three-dimensional stereotypical anatomical alignment and the three-dimensional model of the deformity after the next cast is applied to the deformity.
11. A method for modelling of force vectors for serial casts to correct orthopedic deformities, the method comprising: receiving, via a server, an image of a deformity; generating, via the server, a three-dimensional model of the deformity based on the image of the deformity; determining, via the server, a deviation between a three-dimensional stereotypical anatomical alignment and the three-dimensional model of the deformity; generating, via a machine learning server, a next intermediate anatomical alignment in a series of intermediate anatomical alignments for correcting the deviation based on a machine learning model trained on a plurality of prior patient data; simulating, via the server, a plurality of trial force vectors for a next cast to correct the deformity to the next intermediate anatomical alignment, wherein the simulation produces a projected anatomical alignment post-treatment with the next cast on the deformity using finite element analysis for each of the trial force vectors; and identifying, via the machine learning server, a next force vector from the plurality of trial force vectors that minimizes a difference between the projected anatomical alignment and the next intermediate anatomical alignment, such that using the next force vector for the next cast results in an average differential in a range of 2 mm to 20 mm between a point in a point cloud of the next cast and a corresponding point in a point cloud of a prior cast for a similar correction.
12. The method of claim 11, wherein the method further comprises: generating, via the server, a series of forces at a plurality of nodes in the three- dimensional model of the deformity; and determining, via the server, a resulting change in the anatomical alignment of the deformity after a certain period of time based on an application of the series of forces at the nodes.
13. The method of claim 11, wherein the method further comprises: minimizing the deviation between the projected anatomical alignment and the next intermediate anatomical alignment in a plurality of directions to reduce the deviation between the three-dimensional stereotypical anatomical alignment and the three-dimensional model of the deformity in a plurality of directions simultaneously after the next cast is applied to the deformity.
14. The method of claim 11, wherein to select the next force vector the method further comprises: minimizing the difference between the projected anatomical alignment and the next intermediate anatomical alignment in a direction with the maximum deviation between the three-dimensional stereotypical anatomical alignment and the three-dimensional model of the deformity after the next cast is applied to the deformity.
15. The method of claim 11, wherein the plurality of prior patient data includes an original image of the deformity of the patients, intermediate images of the deformity of the patients, and an image of a final corrected deformity of the patients.
16. The method of claim 11, wherein the plurality of prior patient data includes data generated from a plurality of scans of prior discarded casts of patients used during treatment of the deformity.
17. A non-transitory computer readable medium storing instructions executable by a processing device, wherein execution of the instructions causes the processing device to implement a method for modelling of force vectors for serial casts to correct orthopedic deformities, the method comprising: receiving, via a server, an image of a deformity; generating, via the server, a three-dimensional model of the deformity based on the image of the deformity; determining, via the server, a deviation between a three-dimensional stereotypical anatomical alignment and the three-dimensional model of the deformity; generating, via a machine learning server, a next intermediate anatomical alignment in a series of intermediate anatomical alignments for correcting the deviation based on a machine learning model trained on a plurality of prior patient data; simulating, via the server, a plurality of trial force vectors for a next cast to correct the deformity to the next intermediate anatomical alignment, wherein the simulation produces a projected anatomical alignment post-treatment with the next cast on the deformity based on finite element analysis for each of the trial force vectors; and identifying, via the machine learning server, a next force vector from the plurality of trial force vectors that minimizes a difference between the projected anatomical alignment and the next intermediate anatomical alignment, such that using the next force vector for the next cast results in an average differential in a range of 2 mm to 20 mm between a point in a point cloud of the next cast and a corresponding point in a point cloud of a prior cast for a similar correction in prior patient data.
18. The non-transitory computer readable medium of claim 17, wherein the plurality of prior patient data includes an original image of the deformity of the patients, intermediate images of the deformity of the patients, and an image of a final corrected deformity of the patients.
19. The non-transitory computer readable medium of claim 17, wherein the plurality of prior patient data includes data generated from a plurality of scans of prior discarded casts of patients used during treatment of the deformity.
PCT/US2020/058597 2019-10-31 2020-11-02 Predictive modeling platform for serial casting to correct orthopedic deformities WO2021087483A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP20812183.0A EP4052273A1 (en) 2019-10-31 2020-11-02 Predictive modeling platform for serial casting to correct orthopedic deformities

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201962928808P 2019-10-31 2019-10-31
US62/928,808 2019-10-31
US16/864,099 2020-04-30
US16/864,099 US11783102B2 (en) 2019-04-30 2020-04-30 Predictive modeling platform for serial casting to correct orthopedic deformities
US17/073,183 2020-10-16
US17/073,183 US11741277B2 (en) 2019-04-30 2020-10-16 Predictive modeling platform for serial casting to correct orthopedic deformities

Publications (1)

Publication Number Publication Date
WO2021087483A1 true WO2021087483A1 (en) 2021-05-06

Family

ID=75715432

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/058597 WO2021087483A1 (en) 2019-10-31 2020-11-02 Predictive modeling platform for serial casting to correct orthopedic deformities

Country Status (2)

Country Link
EP (1) EP4052273A1 (en)
WO (1) WO2021087483A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529652A (en) * 2022-04-24 2022-05-24 深圳思谋信息科技有限公司 Point cloud compensation method, device, equipment, storage medium and computer program product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140294271A1 (en) * 2013-03-29 2014-10-02 Case Western Reserve University Image Similarity-Based Finite Element Model Registration
CA3089031A1 (en) * 2013-12-09 2015-06-18 Mohamed R. Mahfouz A method of calibrating one or more inertial measurement units
WO2016102027A1 (en) * 2014-12-24 2016-06-30 Mobelife N.V. Method of using a computing device for providing a design of an implant
US20170091411A1 (en) 2015-09-24 2017-03-30 Vanderbilt University Systems and methods for serial treatment of a muscular-skeletal deformity
CA3008479A1 (en) * 2015-12-16 2017-06-22 Mohamed R. Mahfouz Imu calibration
WO2019157486A1 (en) * 2018-02-12 2019-08-15 Massachusetts Institute Of Technology Quantitative design and manufacturing framework for a biomechanical interface contacting a biological body segment

Also Published As

Publication number Publication date
EP4052273A1 (en) 2022-09-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20812183

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020812183

Country of ref document: EP

Effective date: 20220531