WO2020123988A1 - System and method for motion based alignment of body parts - Google Patents


Info

Publication number
WO2020123988A1
Authority
WO
WIPO (PCT)
Prior art keywords
imu
orientation
sensor
data samples
gyroscope
Application number
PCT/US2019/066303
Other languages
French (fr)
Inventor
Furrukh KHAN
Xiaoxi ZHAO
Original Assignee
Solitonreach, Inc.
Application filed by Solitonreach, Inc. filed Critical Solitonreach, Inc.
Publication of WO2020123988A1


Classifications

    • G01C 21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165: Inertial navigation combined with non-inertial navigation instruments
    • G01C 21/1654: Inertial navigation combined with an electromagnetic compass
    • G01C 21/183: Compensation of inertial measurements, e.g. for temperature effects
    • A61B 34/20: Surgical navigation systems; devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2048: Tracking techniques using an accelerometer or inertia sensor
    • A61B 2562/0219: Inertial sensors for in-vivo measurements, e.g. accelerometers, gyroscopes, tilt switches
    • A61B 5/1121: Determining geometric values, e.g. centre of rotation or angular range of movement
    • A61B 5/1122: Determining geometric values of movement trajectories

Definitions

  • Motion capture devices frequently require the use of body suits or harnesses, which can diminish user experience. Moreover, even with the complexity of such motion capture devices, the motion capture devices frequently generate excessive amounts of sensor drift, i.e., the sensor erroneously detects motion when the user is stationary.
  • Motion capture devices may enable a surgical robot to assist a physician to properly align a cut and fit a prosthesis into a bone.
  • Existing surgical motion capture systems rely on camera-based optical tracking with a marker pinned to the bone, increasing the risk of injury during the procedure and requiring a camera line of sight. Any disruption or bumping of the marker, or occlusion of the line of sight during a measurement, can be problematic.
  • IMU-based sensor systems are known to suffer from drift, making them unsuitable for high-precision repeated measurements in computer-guided or robot assisted surgical procedures.
  • IMU-based sensor systems on the market may require wires to connect to the data collection system or have a short battery life.
  • FIG. 1 A is a block diagram of an example sensor system.
  • FIG. 1B is an example physical sensor as depicted in FIG. 1A.
  • FIG. 2 illustrates an example sensor orientation determination.
  • FIG. 3 illustrates a system block diagram to communicate sensor orientation data to remote devices.
  • FIG. 4 illustrates an example sensor orientation implementation in computer-guided surgery.
  • FIG. 5 illustrates an example communication system to communicate sensor orientation data.
  • FIG. 6 illustrates a gyroscope's uncorrected errors: gyroscope noise, which causes diffusion errors in orientation, and gyroscope offset, which causes drift errors in orientation.
  • FIG. 7 illustrates the gyroscope's uncorrected errors as shown in FIG. 6, during a warm-up period in an at-rest condition.
  • FIG. 8A illustrates orientation Euler angles generated by a sensor over a battery life, in an at rest condition.
  • FIG. 8B illustrates magnetometer’s distortion in orientation Euler angles caused by magnetic noise during a warm up period in an at rest condition.
  • FIG. 9 illustrates a frequency-dependent β used in a fusion algorithm to make the sensor's transition from its dynamic state to its semi-static state more responsive (less sluggish), and to improve drift characteristics in the semi-static state.
  • FIG. 10 is an example mapping of a sensor’s local frame to a World frame.
  • FIG. 11 is an example fusion algorithm flow chart for a sensor orientation measurement.
  • Systems can provide inertial measurement sensing of functional movement and range of motion analysis.
  • the system includes wireless sensors with long battery life and negligible drift, making them suitable for repeatable measurements in clinical and sports applications.
  • the sensors can be used for computer-guided surgical applications in total knee, partial knee, and total hip replacements for traditional and/or robotic procedures.
  • Other example implementations can include, but are not limited to, applications in Cerebral Palsy, Muscular Dystrophy, and stroke rehabilitation.
  • the system includes inertial measurement unit (IMU) sensors, e.g., electronic devices that measure and report a body's specific force, angular rate, and/or the magnetic field surrounding the body, using a combination of accelerometers and gyroscopes, and/or magnetometers.
  • An example IMU is described in commonly owned U.S. Patent Publication No. US2017/0024024A1, entitled "Systems and Methods for Three Dimensional Control of Mobile Applications," the description of which is incorporated by reference herein in its entirety.
  • a fusion algorithm and IMU sensors can be used to derive the relative movement of limbs, without requiring any cameras to track ongoing motion.
  • the system can be used inside and/or outside the operating room, e.g., before the operation to determine mobility, during the operation to aid with the surgery, and after the operation to aid with physical therapy.
  • arthroplasty involves measurement of angles in three dimensions and reporting of static limb positions.
  • the system’s sensors can improve robotic knee surgery, by replacing an otherwise cumbersome optical tracking system with an inertial movement based tracking system using the sensors.
  • the sensors are single use.
  • the sensors for this application can be disposable components used as part of the surgery and billable as disposable medical equipment (DME).
  • the sensors can be integrated into a robot's software, which integrates motion tracking data with patient CT scans and other clinical information so that the robot may move around the patient's knee.
  • the sensors may simplify surgical workflow, reduce risk of injury, and reduce the training costs of adopting robotic surgery.
  • Surgical robots assist the surgeon in making the cut into the bone at a proper angle, reducing variability in results due to human error, and ultimately improving outcomes.
  • the robotic field includes Stryker’s Mako robot.
  • in traditional (non-robotic) knee surgery, the field includes products that enable physicians to objectively assess knee joint function before and after the procedure, and to fit the implant during surgery.
  • the application of the IMU sensors may help reduce malpractice risk, improve patient outcomes, and make traditional knee surgery more competitive with robotic technology at a fraction of the cost.
  • the system includes communicating sensor data, which may be used to ensure that patients properly do their exercises at home, and may enable physicians to remotely monitor patient compliance. Poor compliance with exercise regimens can be a significant contributor to sub-optimal outcomes following surgery.
  • FIG. 1A is a block diagram of an example sensor system 100.
  • the sensor system 100 includes an IMU 110, a microcontroller unit (MCU) 120, and a transceiver radio 106.
  • the IMU 110 may include a three-axis accelerometer 112, a three-axis gyroscope 114, and a three-axis magnetometer 116.
  • the MCU 120 includes a processor circuit that includes at least an on-chip timer 122, samples data periodically from the IMU 110, reads factory pre-installed calibration parameter 126 from a persistent memory, and executes a fusion algorithm 124 to process orientation data of the sensor 102.
  • a protocol handler 128 formats processed orientation data from the MCU 120 to be transmitted wirelessly by an on-chip radio 106 through an antenna 108.
  • FIG. 1B is an example physical sensor as depicted in FIG. 1A.
  • the sensor 102 may measure no more than 1.1 inches in width by 1.1 inches in length, and weigh no more than six grams.
  • the sensor 102 is small enough to be attached externally onto a body part, or implanted within a body part in a surgical procedure to monitor alignment of a prosthetic apparatus, or motion trajectory of the body part after an alignment procedure.
  • the sensor 102 may include an onboard rechargeable or non-rechargeable Li-ion battery 105 having a continuous operating lifetime of no less than seven hours to power the IMU 110, MCU 120, and the onboard radio 106.
  • On such hardware, floating-point computation must be run in a software environment, which is extremely inefficient (slow).
  • the MCU 120 used in the disclosed sensor 102 is, in one example, so low powered that it does not even have a hardware Floating Point Processor.
  • the big challenge of using such a small MCU is the need to run a demanding fusion algorithm on it.
  • the challenge may be tackled by implementing the fusion algorithm in fixed-point arithmetic, using the Texas Instruments IQmath fixed-point arithmetic library.
  • this library has an additional advantage that it is more efficient (faster) than performing floating point operations via a hardware Floating Point Processor.
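The fixed-point approach can be illustrated with a small Q-format sketch. This is a conceptual Python illustration of the idea behind fixed-point libraries such as IQmath (the real library is C, and the Q24 format and helper names here are assumptions for illustration):

```python
# Conceptual Q24 fixed-point arithmetic: real numbers are stored as
# integers scaled by 2**24, so multiply/divide reduce to integer ops
# plus shifts. Illustrative only; not TI's actual IQmath API.

Q = 24                      # number of fractional bits (Q24 format)
ONE = 1 << Q                # fixed-point representation of 1.0

def to_fixed(x: float) -> int:
    """Convert a float to Q24 fixed point."""
    return int(round(x * ONE))

def to_float(x: int) -> float:
    """Convert Q24 fixed point back to a float."""
    return x / ONE

def fmul(a: int, b: int) -> int:
    """Multiply two Q24 numbers: the wide product is shifted back to Q24."""
    return (a * b) >> Q

def fdiv(a: int, b: int) -> int:
    """Divide two Q24 numbers: pre-shift the dividend to keep precision."""
    return (a << Q) // b

# Example: 0.5 * 0.25 in fixed point
p = fmul(to_fixed(0.5), to_fixed(0.25))
print(to_float(p))          # 0.125
```

On an MCU without a floating-point unit, every operation above compiles to plain integer arithmetic, which is why this style can outperform software-emulated (and sometimes even hardware) floating point.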
  • the radio 106 may include a patch antenna 108 on a printed wire board (PWB).
  • an arrow mark 107 may be printed onto the sensor 102 to indicate a reference Y’ axis direction within a local frame reference of the IMU 110 having coordinate axes of X’, Y’, and Z’.
  • the local frame in the IMU 110 forms a right handed X’, Y’, and Z’ coordinate system with respect to a common origin O’, and directions of X’ and Y’ axes are pre-oriented on the IMU as shown by the arrow 107 in FIG. 2.
  • Each sensor may be identified by a unique serial number printed thereon during manufacturing.
  • a mechanical switch 109 may be used to turn on the power of the sensor when used and a light emitting diode (LED) 103 may be used to indicate an on/off power state of the sensor 102.
  • the IMU 110 and the MCU 120 may operate as a pair.
  • IMU 110 may be physically separated from the MCU 120 by a certain distance and connected by physical wires.
  • the IMU 110 and the MCU 120 may be integrated within a single package as shown in FIG. 1B.
  • respective IMU raw data a, b, c may be generated in digital format by the three-axis accelerometer 112, a three-axis gyroscope 114, and a three-axis magnetometer 116, respectively, and sent to the MCU 120 to generate corresponding orientation data samples.
  • the MCU 120 may first sample the IMU raw data a, b, c at a periodic time interval Δt.
  • the sampled raw data a, b, c may afterwards be corrected by the MCU 120 using the IMU-110-specific calibration parameters 126 (factory pre-installed as firmware) read from a persistent memory, so that the corrected data stay within a pre-defined range of accuracy and/or drift variation.
  • the MCU 120 may then execute a fusion algorithm 124 in fixed point computation (versus CPU floating point computation) to transform the corrected sampled raw data a, b, c into corresponding plurality of IMU orientation data samples, through rotating the local frame coordinate axes of X’, Y’, and Z’ of each corrected plurality of IMU data samples a’, b’, c’ by an amount until the local frame of the IMU 110 is aligned with or matched to a world frame (see FIG. 2).
  • the world frame forms a right-handed X, Y, and Z coordinate system with respect to a common origin O, with an X axis that points in the magnetic North direction, a Z axis that points upward with respect to the ground, and a Y axis that is perpendicular to the XZ plane.
  • the transformed corresponding plurality of sensor's orientation data samples q may then be transmitted by the radio 106 to a remote station 140.
  • the MCU 120 in sensor 102 may calculate this orientation (quaternion) periodically (using Madgwick's fusion algorithm modified by a frequency-dependent β) at a sampling rate of 190 Hz, and emit this information in wireless packets at a transmission rate of, e.g., 120 Hz through the radio 106 using the ANT wireless protocol, without using the Bluetooth Low Energy (BLE) protocol for wireless communication.
  • FIG. 3 illustrates a system block diagram 300 to communicate sensor orientation data to remote devices.
  • the remote station 140 may be a transceiver station including a radio 142 that receives the sensor’s orientation data samples q (in quaternions).
  • the remote station 140 may re-format the sensor's orientation data samples q from a wireless protocol format to the industry-standard Universal Serial Bus (USB) format, which may readily be powered by a power supply and communicated to another user device 150, such as a computer or a peripheral device, through USB cable connections.
  • the user device 150 may function as a WebSocket server 152 that directly streams messages of the sensor’s orientation data samples q on a web browser through a WebSocket protocol.
  • the Websocket Server 152 and User Application 154 need not be located on the same device.
  • the User Application 154 may reside on another device, such as another PC/Mac/desktop or a mobile device (iPad, iPhone, Android phone/tablet) and communicate with the Websocket Server 152 via Websockets.
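A sketch of how one orientation sample might be serialized as a WebSocket text frame for such streaming. The JSON field names below are illustrative assumptions, not the system's actual wire format:

```python
import json
import time

def make_orientation_message(sensor_id: str, q: tuple) -> str:
    """Serialize one orientation sample (a unit quaternion) as a JSON text
    frame suitable for streaming over a WebSocket connection. Field names
    are illustrative assumptions only."""
    w, x, y, z = q
    return json.dumps({
        "sensor": sensor_id,
        "timestamp": time.time(),
        "quaternion": {"w": w, "x": x, "y": y, "z": z},
    })

# Identity orientation from a hypothetical sensor "IMU-01"
msg = make_orientation_message("IMU-01", (1.0, 0.0, 0.0, 0.0))
print(json.loads(msg)["quaternion"]["w"])   # 1.0
```

Text frames of this kind can be consumed directly by a browser's `WebSocket` API, which is what lets the User Application live on a different device than the server.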
  • FIG. 4 illustrates an example sensor orientation implementation in computer- guided surgery.
  • a sensor system 400 may include a plurality of sensors 402, 404.
  • Orientation data sent out by sensors 402, 404 to a monitor 450 may enable a physician 460 to custom fit a knee implant 406 to a femur bone 410, and custom fit a knee plate 408 to a tibia 414 of a patient in the operating room.
  • the orientation data sent out by sensors 402, 404 improves precision in the alignment and angular movement of the patient's knee during the procedure and afterward in the clinic, without using an expensive surgical robot.
  • the system 400 may improve on surgical robot surgeries, e.g., by providing small, lightweight, and/or wireless sensors, with long battery life that do not rely on line of sight to operate (e.g., see FIGs. 4-5).
  • the sensors 402, 404 may be attached to the bones 410, 414 and/or implant of the user/patient and/or be wearable by the user/patient.
  • Wireless communication may include sending the orientation data to the monitor 450.
  • FIG. 5 illustrates an example communication system 500 to communicate the sensors’ 502-510 motion and/or orientation data.
  • orientation and motion information 516 from sensors 502-510 may be wirelessly transmitted to a remote station 540, which may be re-formatted into USB serial data 516 to be re-transmitted to a mobile device 550 of the user/patient for processing and/or storage.
  • the mobile device 550 of the user may utilize a WebSocket protocol to process the received orientation data 516 for viewing on a browser, and may forward the data 520 in WebSocket protocol to remote processing and/or data storage servers, e.g., the cloud 570.
  • data from the cloud 570 may be accessed by other entities, e.g., a device of a doctor and/or physical therapist, e.g., via a portal, application, or application programming interface (API).
  • the device 580 of the doctor and/or physical therapist can be used to send additional information to the mobile device 550 of the user/patient, e.g., for viewing by the user/patient 540.
  • the mobile device 550 may send instant visual and/or audio feedback 518 to the user/patient based on the sensors’ transmitted orientation data.
  • data 516 from the sensors 502-510 may integrate with electronic medical records systems and with other application-specific software that assists the clinician to customize knee implants.
  • cloud-based software can be used to remotely monitor patient compliance.
  • a software as a service (SaaS) model for example, may allow clinics to receive ongoing services and amortize the cost of the hardware over time.
  • SaaS software as a service
  • users, doctors, and physical therapists are insulated from the intricacies of serial communications and from managing raw data packets originating from the remote station 540.
  • the sensors 502-510 wirelessly transmit orientation data in the form of quaternions to a remote station 540.
  • the remote station 540 may be plugged into a desktop as a server via USB communication.
  • the IMU 110 used in the sensor 102 may be a low-cost MEMS 9-axis IMU made up of a 3-axis magnetometer 116 that measures the X', Y', and Z' coordinates of the magnetic field's direction vector (North pole) of the IMU in the local frame, a 3-axis accelerometer 112 that measures the X', Y', and Z' coordinates of an acceleration of the IMU in the local frame, and a 3-axis gyroscope 114 that measures the X', Y', and Z' coordinates of an angular velocity of the IMU in the local frame.
  • the 9-axis IMU is sometimes also known as a MARG (Magnetic, Angular Rate, and Gravity) sensor.
  • in an absence of any magnetic or ferromagnetic materials in its environment, the 3-axis magnetometer 116 would measure the three components (X', Y', and Z' in the local frame) of the Earth's magnetic field vector. It is worth noting that this vector is not parallel to the ground; rather, it has an inclination (for example, -64° in Columbus, Ohio) with respect to the ground, in a world frame with the X axis pointing to the North pole, the Z axis pointing up, and the Y axis pointing out of the page (West direction). In practice, an improved steepest-descent method modified from Madgwick's fusion algorithm is used to make the orientation immune to the effects of the inclination angle.
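The inclination (dip) angle can be recovered from the world-frame field components. A minimal sketch, assuming X points to magnetic North and Z points up as described above:

```python
from math import atan2, degrees, hypot, cos, sin, radians

def inclination_deg(bx: float, by: float, bz: float) -> float:
    """Magnetic inclination (dip) from world-frame field components, with
    X pointing to magnetic North and Z pointing up: negative when the
    field dips below the horizontal, as in the Northern Hemisphere."""
    horizontal = hypot(bx, by)
    return degrees(atan2(bz, horizontal))

# A field with a -64 degree dip (roughly the Columbus, Ohio example)
b = (cos(radians(64)), 0.0, -sin(radians(64)))
print(round(inclination_deg(*b), 1))   # -64.0
```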
  • the environment in which the IMU 110 is placed may have magnetic and ferromagnetic materials which will modify the Earth’s magnetic field at the location of the IMU 110. Therefore, the measured North pole direction will not coincide with the Earth’s North pole direction. This will not pose any problem for the sensors if two conditions are met: (i) the resultant magnetic field is uniform (the strength of the magnetic field has no detrimental effect on the sensors as long as it is uniform), (ii) all the sensors being used together are in the same magnetic environment, and therefore they all see the same magnetic field (i.e. the same World frame).
  • the accelerometer 112 measures the three components (in X’, Y’, and Z’ in local frame) of the linear acceleration experienced by the IMU 110.
  • the accelerometer may be used to indicate an UP direction, i.e., the Z axis of the world frame.
  • when the accelerometer is at rest (i.e., static to semi-static condition), it only experiences the acceleration due to gravity, which points downward, and the calibration parameter 126 to correct the UP direction may be determined by taking the negative of the measured acceleration.
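A minimal sketch of this calibration idea, following the document's convention that the measured acceleration at rest points downward (function name and units are illustrative):

```python
import math

def up_direction(accel_at_rest):
    """At rest the accelerometer reading reflects only gravity, which
    points down, so the UP direction is the negated, normalized reading
    (a sketch of the calibration idea described above)."""
    ax, ay, az = accel_at_rest
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    return (-ax / norm, -ay / norm, -az / norm)

# Sensor lying flat: gravity appears along the negative local Z' axis
u = up_direction((0.0, 0.0, -9.81))
print(u[2])   # 1.0, i.e., UP is the +Z direction
```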
  • FIG. 6 illustrates the gyroscope's uncorrected errors at initial power-up in an at-rest condition: noise, which causes diffusion in the sensor's orientation, and offset, which causes drift in the sensor's orientation.
  • off-the-shelf accelerometers 112 suffer from offset and scale errors. These errors are reduced via the one-time factory calibration parameters 126; the user need not be concerned about these errors.
  • a three axis gyroscope 114 may measure the three components (X’, Y’, and Z’ in local frame) of an angular velocity w of the IMU 110.
  • the gyroscope 114 may also suffer from fixed offset errors in the measured values of angular velocities. As shown, this offset 604 may lead to drift errors in orientation, errors which increase linearly with time. At rest, the gyroscope 114 should ideally measure the angular velocity components as zero; however, the measured results of the IMU 110 may show a constant offset error of -3 degrees.
  • the noise (which causes diffusion in the sensor's orientation, 606) and the offset (which causes drift in the sensor's orientation) are individually measured at the factory for each specific IMU, and may be stored as calibration parameters 126 in a persistent memory of the MCU to correct the orientation data to exhibit close to zero noise and close to zero offset, as shown in line 604, over a time period.
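Applying such a stored calibration amounts to a simple per-axis correction. A sketch with illustrative values (the actual calibration format is not specified here):

```python
def correct_gyro(raw, offset):
    """Subtract the factory-measured constant offset (stored as
    calibration parameters in persistent memory) from one raw gyroscope
    sample. Values below are illustrative, not factory data."""
    return tuple(r - o for r, o in zip(raw, offset))

# Suppose factory calibration found a constant -3 deg/s offset per axis
offset = (-3.0, -3.0, -3.0)
# Sensor at rest: raw readings show the offset plus a little noise
raw = (-2.9, -3.1, -3.0)
print(correct_gyro(raw, offset))   # close to (0.1, -0.1, 0.0)
```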
  • FIG. 7 illustrates the gyroscope's uncorrected errors as shown in FIG. 6, during a warm-up period in an at-rest condition.
  • the gyroscope 114 may need a warm-up time on the order of 10 to 15 minutes before its cold offset value 702 settles to a steady-state warm offset value 704.
  • FIG. 7 shows the two different offset values (701 and 702) that would be obtained if the offset were calculated when the sensor was cold (-3.2 degrees, lower circle-dashed line 702) and when the sensor had warmed up for 15 minutes (-3.0 degrees, upper asterisk-dashed line 701).
  • FIG. 8A illustrates Euler angles θx, θy, and θz obtained from the sensor's orientation quaternion q over a battery life, in an at-rest condition.
  • the sensor 102 is fastened to a fixed position at least two feet away from any ferromagnetic and magnetic materials.
  • the drift test was started after a 15-minute warm-up, with the orientation quaternions periodically read and corrected using the calibration firmware 126 stored in the persistent memory.
  • the calculated Euler angles θx, θy, and θz remained constant, except for noise fluctuations, over more than 7.5 hours of continuous operation until the battery started to deplete. A total drift of less than 0.25 degrees was shown in all three Euler angles θx, θy, and θz, which translates to less than 0.035 degrees/hour of drift.
  • FIG. 8B illustrates Euler angles θx, θy, and θz obtained from the sensor's orientation quaternion in the presence of ferromagnetic interference, to demonstrate the magnetometer's distortion caused by magnetic noise after a warm-up period, in an at-rest condition.
  • Graphs 810 and 830 show an improvement in magnetic noise in the y Euler data and x Euler data after compensation.
  • Graph 820 exhibits jumps in the z Euler data due to magnetic noise caused by the presence of a ferromagnetic material within two feet.
  • the magnetometer 116 in the sensor 102 should therefore be kept away from ferromagnetic and magnetic materials by a predefined minimum distance of at least two feet.
  • FIG. 9 illustrates using a frequency-dependent β for drift and diffusion corrections to calibrate a sensor 102 in the semi-static state and the dynamic state.
  • the frequency-dependent β may use either β1 or β2 depending on whether or not the sensor's angular frequency ω is greater than a cross-over angular frequency ωc, as shown in FIG. 9, wherein β1 > β2. More specifically, when the sensor's ω < ωc in a semi-static state (i.e., stationary to slow motion), β1 may be used to compensate for gyroscope drift and diffusion contributions.
  • in the semi-static state, the sensor 102 (having the magnetometer 116) should be kept at a predefined minimum distance away from ferromagnetic and magnetic materials.
  • when the sensor's ω > ωc in a dynamic motion state, β2 may be used to compensate for both magnetometer noise and accelerometer noise contributions in the sensor 102.
  • the sensor 102 would need to be kept at a predefined distance (e.g., at least two feet away) away from magnetic and ferromagnetic materials to avoid magnetic noise.
  • the smaller β may make the sensors' motion appear smooth while they are in motion.
  • the smoothness may be because of less accelerometer noise, and because of less overall noise in the orientation data.
  • the correct solution for the orientation is reached when the orientation quaternion is at a minimum of an objective function f, to be explained later; with too small a β, the orientation quaternion might move away from this minimum. Compared with using a single β, the two-β approach lets the sensor 102 be more responsive and more agile in the orientation calculation, making the orientation readings snappier and smoother, while still handling the semi-static correction of divergent gyro errors due to diffusion and drift.
  • the same two β values may be used for all the sensors 102; the two β values in the calibration parameters are not unique or specific to a given sensor.
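The β selection rule above can be sketched in a few lines. The numeric values are illustrative assumptions, not taken from the patent:

```python
def select_beta(omega, omega_c, beta1, beta2):
    """Frequency-dependent beta: the larger beta1 is used below the
    cross-over angular frequency (semi-static state, suppressing gyro
    drift/diffusion); the smaller beta2 is used above it (dynamic state,
    suppressing accelerometer/magnetometer noise). beta1 > beta2."""
    return beta1 if omega < omega_c else beta2

# Illustrative values: crossover at 0.5 rad/s
beta1, beta2, omega_c = 0.2, 0.02, 0.5
print(select_beta(0.1, omega_c, beta1, beta2))  # semi-static: 0.2
print(select_beta(3.0, omega_c, beta1, beta2))  # dynamic: 0.02
```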
  • FIG. 10 is an example mapping of the sensor’s frame (local frame) to a World frame.
  • FIG. 11 is an example fusion algorithm flow chart to measure orientation in a sensor 102.
  • the MCU 120 of the sensor 102 may modify Madgwick's fusion algorithm using a frequency-dependent β to calculate the orientation quaternion in fixed point.
  • Madgwick's terminology and symbols may be adopted to show the orientation calculations.
  • the World frame may be denoted by the letter E (Earth) and the sensor's local frame by the symbol S.
  • the Earth's frame is fixed to the earth and has coordinates Ex, Ey, and Ez.
  • the orientation of the sensor 102 is defined as the quaternion q̂, which is a rotation that aligns the sensor's frame with the Earth's frame.
  • the hat (^) symbol is used to denote a unit quaternion.
  • the job of the fusion algorithm is to periodically calculate the orientation q̂ after each time step Δt.
  • the value of Δt is dictated by the sampling rate at which raw data can be read from the IMU's sensors (accelerometer 112, gyroscope 114, and magnetometer 116).
  • the orientation q̂ can be calculated in two alternate ways: (i) by using the gyroscope 114 raw data alone, or (ii) by using the raw data obtained from the magnetometer 116 and accelerometer 112 (the (M, A) pair).
  • the orientation q̂ calculation may be obtained from steps (i) or (ii) by modifying Madgwick's fusion algorithm with a frequency-dependent β value, which can be summarized by the flow chart of FIG. 11 as follows:
  • Step 1102: start from a fully calibrated sensor by reading the calibration parameters 126 from persistent memory.
  • Steps 1104-1106: calculate the initial orientation using the (M, A) pair, then perform the following operations periodically after each time step (or time interval) Δt: sample raw data from the magnetometer 116, accelerometer 112, and gyroscope 114 at each time interval Δt, and correct the raw data using the calibration parameters 126.
  • Step 1106: calculate the change in orientation by using the gyroscope raw data only. Step 1108: wait for the next sampled raw data within the time interval Δt. Steps 1106-1108 will lead to small diffusion errors (explained earlier) in orientation due to gyro noise, and to some drift due to gyro offset, even in a calibrated gyroscope 114.
  • Steps 1110-1112: correct the orientation for these gyro errors by calculating the change in orientation using the (M, A) pair.
  • the amount of this (M, A) change is weighted by at least two β parameters (explained earlier).
  • the errors introduced by using the gyroscope 114 raw data alone from step (i) are small in the time step, Δt, so a small value of β ≪ 1 may be enough to correct for the gyro errors. Keeping the value of β small has the advantage of reducing the dependence of orientation on accelerometer and magnetic noise.
  • steps 1108-1112 may be explained in more detail in the following sections.
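The loop of steps 1102-1112 can be outlined as follows (a minimal Python sketch; `read_imu`, `gyro_update`, and `ma_correction` are hypothetical placeholders for the stages described above, not actual firmware routines):

```python
def fusion_loop_sketch(read_imu, gyro_update, ma_correction, q0, dt, steps):
    """Outline of steps 1102-1112 (FIG. 11), with the stages supplied as callables.

    read_imu():             returns one calibrated (m, a, g) sample       (step 1104)
    gyro_update(q, g, dt):  change in orientation from gyro data alone    (step 1106)
    ma_correction(q, m, a): beta-weighted (M, A) pair correction          (steps 1110-1112)
    """
    q = q0                           # initial orientation from the (M, A) pair (step 1104)
    for _ in range(steps):
        m, a, g = read_imu()         # sample and calibration-correct raw data (step 1104)
        q = gyro_update(q, g, dt)    # integrate gyro rate over dt (step 1106)
        # step 1108: wait for the next sample within dt (omitted in this sketch)
        q = ma_correction(q, m, a)   # correct gyro diffusion and drift (steps 1110-1112)
    return q
```

With identity callables this simply returns the initial orientation; the sections below fill in what the gyro and (M, A) stages actually compute.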
  • a gyroscope 114 generates three components of the angular velocity, ωx, ωy, and ωz.
  • the corresponding angular velocity quaternion, ω, may be generated as in equation (2): ω = [0, ωx, ωy, ωz] (2), from which a rate of change of orientation can be obtained with respect to time (⊗ indicates quaternion multiplication) as shown in equation (3): q̇ = ½ q̂ ⊗ ω. (3)
  • the subscript ω indicates that (in this section) the orientation may be obtained from gyro data only, and the subscript t indicates time.
  • To evaluate the orientation, q̂ω,t, the angular velocity quaternion ωt is sampled periodically at times t, t + Δt, t + 2Δt, ... (henceforth, t, t + 1, t + 2, ... as short-hand for t, t + Δt, t + 2Δt, ...).
  • the integration is performed by estimating the orientation, q̂est,ω,t, at time t by using the previously estimated value of the orientation, q̂est,ω,t−1, at time t − 1.
  • new terms are added to older terms and a sequence of sums is built up to perform the integration numerically through equations (4)-(5): q̇ω,t = ½ q̂est,ω,t−1 ⊗ ωt (4); q̂est,ω,t = q̂est,ω,t−1 + q̇ω,t Δt. (5)
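Equations (2)-(5) translate directly into code; below is a minimal floating-point sketch with quaternions as [w, x, y, z] lists (illustrative only; the firmware performs this arithmetic in fixed point):

```python
def quat_mult(p, q):
    # Hamilton product p ⊗ q, with quaternions as [w, x, y, z]
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return [pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw]

def gyro_step(q_prev, omega, dt):
    """One gyro-only integration step: build the angular velocity quaternion (eq. 2),
    form the rate of change 0.5 * q_prev ⊗ omega (eqs. 3-4), and add it over dt (eq. 5)."""
    w = [0.0] + list(omega)                             # eq. (2)
    q_dot = [0.5 * c for c in quat_mult(q_prev, w)]     # eqs. (3)-(4)
    return [a + b * dt for a, b in zip(q_prev, q_dot)]  # eq. (5)
```

Zero angular velocity leaves the orientation unchanged; a nonzero ωz rotates the estimate about the Z' axis a little each step (the result should be re-normalized periodically in a full implementation).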
  • Diffusion errors: For simplicity, assume that the gyroscope data has noise but no offset. If the sensor is at rest (i.e., semi-static or static state), then ideally the orientation will not change with time since ωt would be zero at all times. However, noise in the gyroscope will make the orientation move randomly and diverge away from the initial position in a random manner. This random motion is like diffusion (or random walk) present in many physical phenomena.
  • noise in the gyroscope may cause the orientation of the gyroscope 114 to diffuse and diverge away from its starting value, even if the gyroscope 114 is at rest. Therefore, gyro noise will cause the orientation to diverge away from the true orientation of the sensor.
  • the (M, A) pair and an appropriate value of β will be used in the fusion algorithm to correct for diffusion, which is caused by noise in the gyroscope 114.
  • the gyroscope 114 does not know the true orientation; it only knows the change of orientation with respect to time. Therefore, a gyroscope 114 has no way of correcting for diffusion on its own; it needs the help of the (M, A) pair to correct for diffusion. Diffusion (caused by noise in the gyroscope 114) is also known as divergent because it takes the orientation of the sensor 102 away from the correct value.
  • Drift error: If the gyroscope data has a constant offset error, c, then the integration discussed above results in an error in orientation that grows linearly with time: ∫ c dt = c·t. (6)
  • This drift error may be corrected by calibrating the gyroscope to remove the offset from the orientation data as described above. If the drift is small, then a small value of β can be used to handle it. In general, using large values of β to remove drift is best avoided, since that makes the orientation more susceptible to accelerometer and magnetometer noise. Note that even though the sensor is motionless (semi-static or static state), the calculated orientation increases linearly with time. Drift makes stationary objects appear to rotate at a constant angular velocity, equal to the slope of the drift line.
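The two divergent error mechanisms can be reproduced in a one-axis simulation (illustrative values only, loosely based on the ±0.5 deg/s noise and constant-offset figures discussed with FIG. 6):

```python
import random

def integrate_angle(rates, dt):
    """Integrate a one-axis angular rate signal, as a gyroscope-only estimate would."""
    angle = 0.0
    for w in rates:
        angle += w * dt
    return angle

random.seed(0)
dt, n = 0.01, 10_000                                  # 100 s at a 100 Hz sample rate
noisy = [random.gauss(0.0, 0.5) for _ in range(n)]    # zero-mean noise, ~0.5 deg/s
offset = [3.0] * n                                    # constant 3 deg/s offset, no noise

diffusion = integrate_angle(noisy, dt)   # random walk away from 0 (grows like sqrt(t))
drift = integrate_angle(offset, dt)      # linear growth: c * t = 3.0 * 100 = 300 deg
```

The noisy signal wanders a fraction of a degree away from zero, while the offset signal produces exactly the c·t ramp of eq. (6), illustrating why offset must be calibrated out rather than merely averaged.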
  • the subscript ε indicates that data from the (M, A) pair was used to determine the rate of change of orientation, q̇ε,t = −∇f/‖∇f‖. (7)
  • f is a function of the orientation quaternion, q̂ε,t, known as the objective function; ∇f is the gradient of f, and ‖∇f‖ is its magnitude. The objective function f is a fairly complicated function explained fully in Madgwick’s paper, where it is shown that minimizing this function with respect to the orientation quaternion yields the orientation indicated by the (M, A) pair.
  • ∇f has a direction which points in the direction of the minimum of the objective function. Because orientation is being calculated by using two sensors, the (M, A) pair, ∇f is composed of two directions. One direction is caused by changes in magnetometer data, and the other caused by changes in accelerometer data. To reach the minimum of f in the most optimal way, these two directions should be close to being orthogonal to each other, so that the minimum can be found (by the steepest descent method) in the least number of steps. [0071] It turns out that in Madgwick’s original paper, the way ∇f is defined, these two directions are not orthogonal, which can lead to slow dynamic response, i.e., the system can take some time to reach the minimum of f. Madgwick’s non-orthogonal method is called the “original” steepest descent method.
  • the modification to the fusion algorithm includes fusing the gyroscope and (M, A) pair data. The fusion of the gyro and (M, A) data is obtained by combining the rate of change of orientation obtained from the gyroscope, q̇ω,t, eq. (4), with the rate of change of orientation obtained from the (M, A) pair, q̇ε,t, eq. (7), as follows: q̂est,t = q̂est,t−1 + (q̇ω,t + β q̇ε,t) Δt. (8)
  • This equation (8) is a numerical integration performed in the main loop of the fusion algorithm.
  • the frequency dependent β is an adjustable parameter which glues the two results, i.e., the gyroscope’s orientation data and the orientation obtained from the (M, A) pair, together.
  • the fusion algorithm may calculate the initial condition (see steps 1102, 1104 in FIG. 11) for the integration in eq. (8) by using the multiple step steepest descent method (using M, A pair data), until convergence is obtained. Then periodically calculate the change in orientation by using (8).
  • the term q̇ω,t Δt in eq. (8) will lead to diffusion errors due to gyro noise, and may also contain a small amount of drift due to gyro offset, even in a calibrated gyroscope (gyro calibration is discussed later); both of these errors are divergent, as explained earlier.
  • a large value of β also has the disadvantage of increasing the noise in the calculated orientation. This can be explained as follows: After the initial full steepest descent calculation, the system comes to the minimum of the objective function. For the rest of the integration loop, the answer remains near this minimum. The divergent terms drive the system away from the minimum, while the convergent terms bring it in the direction of the minimum. However, the convergent term is scaled by β, and depending on the value of β the system will overshoot the minimum by some amount. Thus, the calculated orientation will “rattle around” the minimum with the passage of time. This “rattling around” because of overshoot will appear as noise in the calculated orientation data. Therefore, larger values of β lead to increased noise in orientation.
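The balance expressed by equation (8) can be seen in a one-axis toy model (illustrative only; the real algorithm operates on quaternions with the full objective function, and the β value and unit descent direction here are hypothetical):

```python
def fused_step(q_est, q_dot_gyro, q_dot_eps, beta, dt):
    """Eq. (8): new estimate = old estimate + (gyro rate + beta * (M, A) correction) * dt."""
    return q_est + (q_dot_gyro + beta * q_dot_eps) * dt

# One-axis illustration: a stationary sensor whose gyroscope has a +3 deg/s offset.
# The (M, A) term supplies a unit steepest-descent direction toward the true angle
# (0 deg); beta must outweigh the offset for the estimate to stay bounded.
angle, beta, dt = 0.0, 5.0, 0.01
for _ in range(10_000):
    eps_dir = -1.0 if angle > 0 else 1.0   # normalized descent direction (toy stand-in)
    angle = fused_step(angle, 3.0, eps_dir, beta, dt)
```

With β = 5 the estimate stays bounded but “rattles around” zero with an amplitude on the order of β·Δt, matching the overshoot noise described above; with β smaller than the offset magnitude the estimate would drift without bound.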
  • the improvement in drift performance on low grade IMUs can be achieved by performing the following sequence as shown in FIG. 11: warming up the IMU 110, calibrating the gyroscope raw data using factory pre-installed IMU specific calibration data, and afterwards executing a fast converging fusion algorithm to calculate orientation information from the raw data sampled from the IMU 110. It is shown that Madgwick’s fusion algorithm, which is used to calculate orientation in quaternions from the corrected raw data of the IMU, may be improved to respond with more agility and to converge the orientation information faster, minimizing drift errors.
  • the improved fusion algorithm uses a frequency dependent β to glue the gyroscope’s calibrated raw data and the (M, A) pair calibrated raw data together in orthogonal directions to perform a steepest descent method with fast convergence to a minimum solution.
  • the disclosed sensor 102 is able to achieve a long battery life through the use of an ultra-low power MCU 120 (e.g., MSP430 microcontroller by Texas Instruments of Dallas, Texas, U.S.A.) by carrying out fixed point calculations on the fast convergence fusion algorithm.
  • the sensor 102 is formed by pairing a lower grade IMU 110 with an ultra-low power MCU 120 (e.g., an MSP430 microcontroller by Texas Instruments of Dallas, Texas, U.S.A.).

Abstract

The embodiments described herein relate to systems, methods, and devices for high precision inertial measurement sensing of functional movement and range of motion analysis with close to zero drift in sensor orientation readings. The system calibrates IMU raw data samples after warm up, and uses a fast convergent fusion algorithm to calculate highly accurate, almost drift-free orientation information. In some examples, the systems, methods, and devices are used in computer-guided or robotic surgery, to aid in evaluation before, during, and after a surgical operation.

Description

SYSTEM AND METHOD FOR MOTION BASED ALIGNMENT
OF BODY PARTS
INVENTORS:
Furrukh Khan
Xiaoxi Zhao
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This Application claims the benefit of U.S. Provisional Patent Application Serial
No. 62/779,047, titled “System And Method For Motion Based Alignment Of Body Parts,” filed on December 13, 2018, and incorporates by reference in its entirety commonly owned U.S. Patent Application Serial No. 15/200,880, titled “Systems And Methods For Three Dimensional Control Of Mobile Applications,” filed on July 1, 2016, which claims the benefit of U.S. Provisional Patent Application No. 62/187,426, filed on July 1, 2015, and claims the benefit of U.S. Provisional Patent Application No. 62/221,170, filed on September 21, 2015, all of which are incorporated by reference herein in their entirety.
BACKGROUND
[0002] Motion capture devices frequently require the use of body suits or harnesses, which can diminish user experience. Moreover, even with the complexity of such motion capture devices, the motion capture devices frequently generate excessive amounts of sensor drift, i.e., the sensor erroneously detects motion when the user is stationary.
[0003] Motion capture devices may enable a surgical robot to assist a physician to properly align a cut and fit a prosthesis into a bone. These motion capture systems rely on camera-based optical tracking with a marker pinned to the bone, increasing the risk of injury during the procedure and requiring camera line of sight. Any disruption or bumping of the marker or occlusion of the line of sight during a measurement can be problematic.
[0004] Inertial measurement unit (IMU)-based sensor systems are known to suffer from drift, making them unsuitable for high-precision repeated measurements in computer-guided or robot assisted surgical procedures. In addition, IMU-based sensor systems on the market may require wires to connect to the data collection system or have a short battery life.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] The embodiments set forth in the drawings are illustrative in nature, and are not intended to limit the subject matter defined by the claims. The following detailed description of the illustrative embodiments can be understood when read in conjunction with the following drawings, where like structure is indicated with like reference numerals, and in which:
[0006] FIG. 1A is a block diagram of an example sensor system.
[0007] FIG. 1B is an example physical sensor as depicted in FIG. 1A.
[0008] FIG. 2 illustrates an example sensor orientation determination.
[0009] FIG. 3 illustrates a system block diagram to communicate sensor orientation data to remote devices.
[0010] FIG. 4 illustrates an example sensor orientation implementation in computer-guided surgery.
[0011] FIG. 5 illustrates an example communication system to communicate sensor orientation data.
[0012] FIG. 6 illustrates a gyroscope’s uncorrected errors: gyroscope noise, which causes diffusion errors in orientation, and gyroscope offset, which causes drift errors in orientation.
[0013] FIG. 7 illustrates the gyroscope’s uncorrected errors as shown in FIG. 6, during a warm up period in an at rest condition.
[0014] FIG. 8A illustrates orientation Euler angles generated by a sensor over a battery life, in an at rest condition.
[0015] FIG. 8B illustrates magnetometer’s distortion in orientation Euler angles caused by magnetic noise during a warm up period in an at rest condition.
[0016] FIG. 9 illustrates a frequency dependent beta (β) used in a fusion algorithm to improve the responsiveness (reduce sluggishness) of the sensor’s transition from its dynamic state to its semi-static state, and to improve sensor drift characteristics in the semi-static state. [0017] FIG. 10 is an example mapping of a sensor’s local frame to a World frame.
[0018] FIG. 11 is an example fusion algorithm flow chart for a sensor orientation measurement.
SUMMARY
[0019] Systems can provide inertial measurement sensing of functional movement and range of motion analysis. In some examples, the system includes wireless sensors with long battery life and negligible drift, making them suitable for repeatable measurements in clinical and sports applications. In some examples, the sensors can be used for computer-guided surgical applications in total knee, partial knee, and total hip replacements for traditional and/or robotic procedures. Other example implementations can include, but are not limited to, applications in Cerebral Palsy, Muscular Dystrophy, and stroke rehabilitation.
[0020] In some examples, the system includes inertial measurement unit (IMU) sensors, e.g., electronic devices that measure and report a body's specific force, angular rate, and/or the magnetic field surrounding the body, using a combination of accelerometers, gyroscopes, and/or magnetometers. An example IMU is described in commonly owned U.S. Patent Publication No. US2017/0024024A1, entitled “Systems and Methods for Three Dimensional Control of Mobile Applications,” the description of which is incorporated by reference herein in its entirety. For use in motion capture, a fusion algorithm and IMU sensors can be used to derive the relative movement of limbs, without requiring any cameras to track ongoing motion.
[0021] In the field of arthroplasty (knee surgery), the system can be used inside and/or outside the operating room, e.g., before the operation to determine mobility, during the operation to aid with the surgery, and after the operation to aid with physical therapy. For example, arthroplasty involves measurement of angles in three dimensions and reporting of static limb positions. In some examples, the system’s sensors can improve robotic knee surgery, by replacing an otherwise cumbersome optical tracking system with an inertial movement based tracking system using the sensors. In some examples, the sensors are single use. For example, to avoid having to clean and sterilize the sensors between procedures and to avoid problems caused by battery charging, the sensors for this application can be disposable components used as part of the surgery and billable as disposable medical equipment (DME). [0022] The sensors can be integrated into a robot’s software, which integrates motion tracking data with patient CT scans and other clinical information so that the robot may move around the patient’s knee. The sensors may simplify surgical workflow, reduce risk of injury, and reduce the training costs of adopting robotic surgery. Surgical robots assist the surgeon in making the cut into the bone at a proper angle, reducing variability in results due to human error, and ultimately improving outcomes. The robotic field includes Stryker’s Mako robot. In traditional (non-robotic) knee surgery, the field includes products that enable physicians to objectively assess knee joint function before and after the procedure, and to fit the implant during surgery. The application of the IMU sensors may help reduce malpractice risk, improve patient outcomes, and make traditional knee surgery more competitive with robotic technology at a fraction of the cost.
[0023] For the rehabilitation market, the system includes communicating sensor data, which may be used to ensure that patients properly do their exercises at home, and may enable physicians to remotely monitor patient compliance. Poor compliance with exercise regimens can be a significant contributor to sub-optimal outcomes following surgery.
[0024] While particular embodiments have been illustrated and described herein, various other changes and modifications may be made without departing from the spirit and scope of the claimed subject matter. Moreover, although various aspects of the claimed subject matter have been described herein, such aspects need not be utilized in combination. The appended claims are intended to cover all such changes and modifications that are within the scope of the claimed subject matter.
DETAILED DESCRIPTION
[0025] FIG. 1A is a block diagram of an example sensor system 100. The sensor system
100 includes an IMU 110, a microcontroller unit (MCU) 120, and a transceiver radio 106. The IMU 110 may include a three-axis accelerometer 112, a three-axis gyroscope 114, and a three-axis magnetometer 116. The MCU 120 includes a processor circuit that includes at least an on-chip timer 122, samples data periodically from the IMU 110, reads factory pre-installed calibration parameters 126 from a persistent memory, and executes a fusion algorithm 124 to process orientation data of the sensor 102. A protocol handler 128 formats processed orientation data from the MCU 120 to be transmitted wirelessly by an on-chip radio 106 through an antenna 108. In an example, the IMU 110 may be physically separated from the MCU 120 by a certain distance. In another example, the IMU 110 and the MCU 120 may be integrated within a single package as shown in FIG. 1B.
[0026] FIG. 1B is an example physical sensor as depicted in FIG. 1A. In an example, the sensor 102 may measure no more than 1.1 inches wide by 1.1 inches long, weighing no more than six grams. The sensor 102 is small enough to be attached externally onto a body part, or implanted within a body part in a surgical procedure to monitor alignment of a prosthetic apparatus, or motion trajectory of the body part after an alignment procedure.
[0027] The sensor 102 may include an onboard rechargeable or non-rechargeable Li-ion battery 105 having a continuous operating lifetime of no less than seven hours to power the IMU 110, MCU 120, and the onboard radio 106. Without a hardware floating point unit, floating-point computation must be emulated in software, which is extremely inefficient (slow). To increase battery life, the MCU 120 used in the disclosed sensor 102 is, in one example, so low powered that it does not even have a hardware Floating Point Processor. The big challenge of using such a small MCU is the need to run a demanding fusion algorithm on it. In this example, the challenge may be tackled by implementing the fusion algorithm in fixed point arithmetic, by using the Texas Instruments IQmath fixed point arithmetic library. Using this library has an additional advantage that it is more efficient (faster) than performing floating point operations via a hardware Floating Point Processor.
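The idea behind fixed-point arithmetic is to represent fractional values as scaled integers so that only integer multiplies and shifts are needed. A rough sketch in Q16.16 format (for illustration only; TI’s IQmath library provides its own IQ formats and optimized routines):

```python
Q = 16                      # Q16.16: 16 integer bits, 16 fractional bits
ONE = 1 << Q

def to_fixed(x: float) -> int:
    """Convert a float to a Q16.16 scaled integer."""
    return int(round(x * ONE))

def to_float(a: int) -> float:
    """Convert a Q16.16 scaled integer back to a float."""
    return a / ONE

def fixed_mul(a: int, b: int) -> int:
    """Fixed-point multiply: full integer product, then shift back to Q16.16."""
    return (a * b) >> Q

# e.g. 0.5 * 0.01 using only integer operations:
product = to_float(fixed_mul(to_fixed(0.5), to_fixed(0.01)))  # ~0.005, within Q16.16 resolution
```

The trade-off is range and precision: every value must fit the chosen Q format, which is why the fusion algorithm must be scaled carefully before it can run entirely in integer arithmetic.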
[0028] The radio 106 may include a patch antenna 108 on a printed wire board (PWB). As shown in FIG. 1B, an arrow mark 107 may be printed onto the sensor 102 to indicate a reference Y’ axis direction within a local frame reference of the IMU 110 having coordinate axes of X’, Y’, and Z’. The local frame in the IMU 110 forms a right-handed X’, Y’, and Z’ coordinate system with respect to a common origin O’, and directions of X’ and Y’ axes are pre-oriented on the IMU as shown by the arrow 107 in FIG. 2. Each sensor may be identified by a unique serial number printed thereon during manufacturing. A mechanical switch 109 may be used to turn on the power of the sensor when used and a light emitting diode (LED) 103 may be used to indicate an on/off power state of the sensor 102.
[0029] The IMU 110 and the MCU 120 may operate as a pair. In an example, the IMU
110 may be physically separated from the MCU 120 by a certain distance and connected by physical wires. In another example, the IMU 110 and the MCU 120 may be integrated within a single package as shown in FIG. 1B. In operation, respective IMU raw data a, b, c may be generated in digital format by the three-axis accelerometer 112, the three-axis gyroscope 114, and the three-axis magnetometer 116, respectively, and sent to the MCU 120 to generate corresponding orientation data samples. The MCU 120 may first sample the IMU raw data a, b, c within a periodic time interval Δt. The sampled raw data a, b, c may afterwards be corrected by the MCU 120 by the IMU 110 specific calibration parameters 126 (factory pre-installed as firmware) read from a persistent memory, to correct the sampled raw data a, b, c to stay within a pre-defined range of accuracy and/or drift variations.
[0030] After calibration and raw data corrections, the MCU 120 may then execute a fusion algorithm 124 in fixed point computation (versus CPU floating point computation) to transform the corrected sampled raw data a, b, c into corresponding plurality of IMU orientation data samples, through rotating the local frame coordinate axes of X’, Y’, and Z’ of each corrected plurality of IMU data samples a’, b’, c’ by an amount until the local frame of the IMU 110 is aligned with or matched to a world frame (see FIG. 2). The world frame forms a right-handed X, Y, and Z coordinate system with respect to a common origin O, with an x-axis that points in a magnetic North direction, a z-axis that points upward with respect to the earth ground, and a y- axis that is perpendicular to an XZ plane. The transformed corresponding plurality of sensor's orientation data samples q may then be transmitted by the radio 106 to a remote station 140.
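On the receiving side, the transmitted quaternion q can be used to map a local-frame vector into the world frame by the standard conjugation v_world = q ⊗ v ⊗ q*; a minimal sketch (standard quaternion rotation, not firmware code):

```python
def qmul(p, q):
    # Hamilton product of quaternions given as (w, x, y, z) tuples
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotate_to_world(q, v_local):
    """Rotate a local-frame vector into the world frame: v_world = q ⊗ v ⊗ q*."""
    v = (0.0,) + tuple(v_local)                  # embed the vector as a pure quaternion
    q_conj = (q[0], -q[1], -q[2], -q[3])         # conjugate of a unit quaternion = inverse
    return qmul(qmul(q, v), q_conj)[1:]          # drop the (zero) scalar part
```

For example, a unit quaternion representing a 90° rotation about the Z axis maps the local X’ axis onto the world Y axis.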
[0031] Since the alignment of the local frame to the world frame is an orientation rotation, mathematically such orientation may be represented by a rotation matrix. Euler angles may be derived from this matrix. In practice, dealing directly with Euler angles may lead to mathematical singularities related to gimbal lock. Rotations can also be represented by mathematical entities known as quaternions, which do not suffer from gimbal lock singularities. Strictly speaking, quaternions are a four-dimensional generalization of two-dimensional complex numbers. Quaternions may be understood as vectors or arrays having four real elements that can represent rotations (orientation). The MCU 120 in sensor 102 may calculate this orientation (quaternion) periodically (using Madgwick’s fusion algorithm modified by a frequency dependent β) at a sampling rate of 190 Hz and emit this information in wireless packets at a transmission rate such as 120 Hz through the radio 106 using the ANT protocol, without using the Bluetooth Low Energy (BLE) protocol, for wireless communication. [0032] FIG. 3 illustrates a system block diagram 300 to communicate sensor orientation data to remote devices. In an implementation, the remote station 140 may be a transceiver station including a radio 142 that receives the sensor’s orientation data samples q (in quaternions). The remote station 140 may re-format the sensor’s orientation data samples q from a wireless protocol format to the industry-standard Universal Serial Bus (USB) format, which may readily be powered by a power supply and communicated to another user device 150, such as a computer or a peripheral device, through USB cable connections. The user device 150 may function as a WebSocket server 152 that directly streams messages of the sensor’s orientation data samples q on a web browser through a WebSocket protocol. The WebSocket Server 152 and User Application 154 need not be located on the same device. 
For example, the User Application 154 may reside on another device, such as another PC/Mac/desktop or a mobile device (iPad, iPhone, Android phone/tablet) and communicate with the Websocket Server 152 via Websockets.
[0033] FIG. 4 illustrates an example sensor orientation implementation in computer-guided surgery. In this example, a sensor system 400 may include a plurality of sensors 402, 404. Orientation data sent out by sensors 402, 404 to a monitor 450 may enable a physician 460 to custom fit a knee implant 406 to a femur bone 410, and custom fit a knee plate 408 to a tibia 414 of a patient in the operating room. The orientation data sent out by sensors 402, 404 improves precision in the alignment and angular movement of the patient’s knee during the procedure and after the procedure in the clinic, without using an expensive surgical robot. In other examples, the system 400 may improve on surgical robot surgeries, e.g., by providing small, lightweight, and/or wireless sensors, with long battery life, that do not rely on line of sight to operate (e.g., see FIGs. 4-5). The sensors 402, 404 may be attached to the bones 410, 414 and/or implant of the user/patient and/or be wearable by the user/patient. Wireless communication may include sending the orientation data to the monitor 450.
[0034] FIG. 5 illustrates an example communication system 500 to communicate the sensors’ 502-510 motion and/or orientation data. In some examples, orientation and motion information 516 from sensors 502-510 may be wirelessly transmitted to a remote station 540, where it may be re-formatted into USB serial data 516 to be re-transmitted to a mobile device 550 of the user/patient for processing and/or storage. The mobile device 550 of the user may utilize a WebSocket protocol to process the received orientation data 516 for viewing on a browser, and may forward the data 520 in WebSocket protocol to remote processing and/or data storage servers, e.g., the cloud 570. In some examples, data from the cloud 570 may be accessed by other entities, e.g., a device of a doctor and/or physical therapist, e.g., via a portal, application, or application programming interface (API). The device 580 of the doctor and/or physical therapist can be used to send additional information to the mobile device 550 of the user/patient, e.g., for viewing by the user/patient. The mobile device 550 may send instant visual and/or audio feedback 518 to the user/patient based on the sensors’ transmitted orientation data.
[0035] In some examples, data 516 from the sensors 502-510 may integrate with electronic medical records systems and with other application-specific software that assists the clinician to customize knee implants. In the rehabilitation application, cloud-based software can be used to remotely monitor patient compliance. A software as a service (SaaS) model, for example, may allow clinics to receive ongoing services and amortize the cost of the hardware over time. As a result, users, doctors, and physical therapists are insulated from the intricacies of serial communications and managing raw data packets originating from the remote station 540.
[0036] Better outcomes can lead to happier patients, increased word of mouth referrals, fewer revisions, and less drain on health care dollars. Having objective numbers can also allow for concrete conversations with patients. It also gives patients data on where they started, how far they have improved, and the numbers to achieve for a full recovery. Accurate information may also allow research data to be repeatable and comparable.
[0037] In another example, the sensors 502-510 wirelessly transmit orientation data in the form of quaternions to a remote station 540. The remote station 540 may be plugged into a desktop as a server via USB communication.
[0038] Referring to the IMU 110 used in the sensor 102, it may be a low-cost MEMS
(Micro-Electro-Mechanical Systems) 9-axis IMU, made up of a 3-axis magnetometer 116 that measures the X’, Y’, and Z’ coordinates according to a magnetic field’s direction vector (North pole) of the IMU in the local frame, a 3-axis accelerometer 112 that measures the X’, Y’, and Z’ coordinates according to an acceleration of the IMU in the local frame, and a 3-axis gyroscope 114 that measures the X’, Y’, and Z’ coordinates of an angular velocity of the IMU in the local frame. Note that the 9-axis IMU is sometimes also known as a MARG (Magnetic, Angular Rate, and Gravity) sensor. [0039] Regarding the 3-axis magnetometer 116, in an absence of any magnetic or ferromagnetic materials in its environment, the 3-axis magnetometer 116 would measure the three components (in X’, Y’, and Z’ in the local frame) of the Earth’s magnetic field’s vector. It is worth noting that this vector is not parallel to the ground. Rather, it has an inclination (for example, -64° in Columbus, Ohio) with respect to the ground having the X axis pointing to the north pole, the Z axis pointing up, and the Y axis pointing out of the page (West direction) with reference to the world frame. In practice, an improved steepest descent method modified from Madgwick’s fusion algorithm will be used to be immune to the effects of the inclination angle.
[0040] In practice, the environment in which the IMU 110 is placed may have magnetic and ferromagnetic materials which will modify the Earth’s magnetic field at the location of the IMU 110. Therefore, the measured North pole direction will not coincide with the Earth’s North pole direction. This will not pose any problem for the sensors if two conditions are met: (i) the resultant magnetic field is uniform (the strength of the magnetic field has no detrimental effect on the sensors as long as it is uniform), (ii) all the sensors being used together are in the same magnetic environment, and therefore they all see the same magnetic field (i.e. the same World frame).
[0041] However, even if the above two conditions (i) and (ii) are met, it may happen that the magnetic field varies with time because of the environment. These variations may be caused by the switching of electrical machinery (such as motors), by people moving around in the vicinity of strong magnets, or by large ferromagnets placed very close to the magnetometer 116. Effects like these will produce variations in the measured magnetic field with respect to time. Such variations are known as magnetic noise. It is not a defect in the magnetometer; it is simply noise picked up from the environment. One of the goals of the factory preset IMU specific calibration parameters 126 is to minimize the effect of magnetic noise.
[0042] Presence of magnetic and ferromagnetic materials can distort the Earth’s magnetic field to become non-uniform, which causes erroneous readings on the magnetometer 116. Therefore, magnetic and ferromagnetic materials should be kept at least two feet away from the magnetometer 116 when in use. Furthermore, there may be distortions caused by metals fixed on the magnetometer 116 in close proximity to the IMU 110, like soldering material, and electronic components on the PCB such as the ANT radio 106, MCU 120, and LDO (Low Dropout Regulator). The attached Li-ion battery 105 may also have ferromagnetic components. Thus, the factory calibration parameters 126 may correct the magnetometer 116 readings for these distortions.
[0043] The accelerometer 112 measures the three components (in X’, Y’, and Z’ in the local frame) of the linear acceleration experienced by the IMU 110. The accelerometer may be used to indicate an UP direction, i.e., the Z axis of the world frame. When the accelerometer is at rest (i.e., static to semi-static condition), it only experiences the acceleration due to gravity, which points downward, and the UP direction may be determined by taking the negative of the measured acceleration.
[0044] When the accelerometer 112 undergoes dynamic translational changes or rotation, it experiences additional linear and centripetal accelerations on top of the acceleration due to gravity, and the total acceleration measured by the accelerometer no longer points in the downward direction. Thus, under dynamic conditions, the negative of the measured acceleration can give a false UP direction. This error in the UP direction may be termed accelerometer noise.
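The at-rest rule described above (UP equals the negative of the measured acceleration) and the dynamic-case error can be illustrated with hypothetical readings in units of g:

```python
import math

def up_from_accel(a):
    """Estimate UP as the negative of the measured acceleration, normalized to unit length."""
    n = math.sqrt(sum(c * c for c in a))
    return tuple(-c / n for c in a)

at_rest = (0.0, 0.0, -1.0)   # gravity only: the estimate is the true UP, (0, 0, 1)
moving = (0.3, 0.0, -1.0)    # extra 0.3 g of linear acceleration on X' tilts the estimate
up_static = up_from_accel(at_rest)
up_dynamic = up_from_accel(moving)   # no longer (0, 0, 1): "accelerometer noise"
```

This is why the fusion algorithm weights the (M, A) correction by a small β and leans on the gyroscope during dynamic motion.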
[0045] FIG. 6 illustrates the gyroscope's uncorrected errors at initial power-up in an at-rest condition: noise, which causes diffusion in the sensor's orientation, and offset, which causes drift in the sensor's orientation. Typically, off-the-shelf accelerometers 112 suffer from offset and scale errors. These errors are reduced via the one-time factory calibration parameters 126; the user need not be concerned about them. A three-axis gyroscope 114 may measure the three components (X', Y', and Z' in the local frame) of the angular velocity ω of the IMU 110. Several factors given below need to be considered to handle data coming from the gyroscope 114:
[0046] (i) Noise, which causes diffusion in the sensor's orientation, 606 (see FIG. 6): Off-the-shelf gyroscopes suffer from noise which causes diffusion in the sensor's orientation. As shown in FIG. 6, the gyroscope 114 may exhibit a noise magnitude of around ±0.5 degrees/second. The effect of this noise 606 needs to be corrected via the calibration parameters 126; otherwise, the calculated sensor orientation data would diffuse (random walk) away from the correct orientation value with time. The calibration parameters 126 may correct for this diffusive behavior at run time in the fusion algorithm 124, using a frequency dependent parameter named β as the drift and diffusion correction factor.
[0047] (ii) Offset, which causes drift in the sensor's orientation, error 614 (see FIG. 6): The gyroscope 114 may also suffer from fixed offset errors in the measured values of angular velocities. As shown, this offset 604 may lead to drift errors in orientation, errors which increase linearly with time. The gyroscope 114 should ideally measure the angular velocity components to be zero at rest. However, the measured results of the IMU 110 may show a constant offset error of −3 degrees.
[0048] The noise (which causes diffusion in the sensor's orientation) 606 and the offset (which causes drift in the sensor's orientation) are individually measured at the factory for each specific IMU, and may be stored as calibration parameters 126 in a persistent memory of the MCU to correct the orientation data to exhibit close to zero noise and close to zero offset over a time period, as shown in line 604.
[0049] FIG. 7 illustrates the gyroscope's uncorrected errors as shown in FIG. 6, during a warm-up period in an at-rest condition. The gyroscope 114 may need a warm-up time on the order of 10 to 15 minutes before its cold offset value 702 settles to a steady-state warm offset value 704. Typically, it may be difficult to see this warm-up behavior since it is often masked by noise; therefore, some filtering of the data is required to observe the effect. Note how the noise masks the warm-up behavior in the top graph 700 in FIG. 7, and how the warm-up is revealed in the bottom graph 710 (where a two-second averaging filter is applied to the data), which clearly shows the initial warm-up time 702 before the results settle down to value 704. As discussed later, this warm-up behavior dictates that the factory calibration for offset should be done after warm-up and not when the gyroscope 114 is cold; otherwise, significant offset error could remain in the gyro data after warm-up, which would lead to appreciable drift. FIG. 7 shows the two different values (701 and 702) of offset that would be obtained if the offset were calculated when the sensor was cold (−3.2 degrees, lower circle-dashed line 702) and when the sensor was warmed up for 15 minutes (−3.0 degrees, upper asterisk-dashed line 701).
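The two-second averaging that reveals the warm-up trend can be sketched as a simple sliding-window filter (a hypothetical illustration; the sampling rate and function names are assumptions, not values from the disclosure):

```python
import numpy as np

def moving_average(samples, rate_hz, window_s=2.0):
    """Apply a sliding-window average to a stream of gyro readings.

    A window of rate_hz * window_s samples (two seconds in the text's
    example) suppresses the noise that otherwise masks the slow
    settling of the cold offset toward its warm value.
    """
    n = max(1, int(rate_hz * window_s))
    kernel = np.ones(n) / n
    # mode="valid" keeps only fully-overlapping windows.
    return np.convolve(np.asarray(samples, dtype=float), kernel, mode="valid")
```

Feeding raw at-rest gyro samples through such a filter would expose the slow drift of the offset from its cold value toward its warm value.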
[0050] Examples of Drift and Magnetic Noise: Since drift plays such an important role in evaluating the performance of an orientation sensor, drift is defined and illustrated with some real-world examples obtained from the sensors. Drift calibration may be achieved in a fusion algorithm 124. Drift may be caused by the offset error in the gyroscope (if appropriate calibration procedures are not followed). [0051] FIG. 8A illustrates the Euler angles θx, θy, and θz obtained from the sensor's orientation quaternion q over a battery life, in an at-rest condition. The sensor 102 is fastened to a fixed position at least two feet away from any ferromagnetic and magnetic materials. The drift test was started after a 15-minute warm-up, with the orientation quaternions being read periodically and corrected with the calibration firmware 126 stored in the persistent memory. As shown in graphs 802, 804 and 806 of FIG. 8A, the calculated Euler angles θx, θy, and θz remained constant except for noise fluctuations over more than 7.5 hours of continuous operation, until the battery began to deplete. A total drift of less than 0.25 degrees was observed in all three Euler angles θx, θy, and θz, which translates to less than 0.035 degrees/hour of drift.
[0052] FIG. 8B illustrates the Euler angles θx, θy, and θz obtained from the sensor's orientation quaternion in the presence of ferromagnetic interference, to demonstrate the magnetometer's distortion caused by magnetic noise after a warm-up period, in an at-rest condition. Graphs 810 and 830 show an improvement in magnetic noise in the y Euler data and x Euler data after compensation. Graph 820 exhibits jumps in the z Euler data due to magnetic noise caused by the presence of a ferromagnetic material within two feet. Thus, the magnetometer 116 in the sensor 102 should be kept away from ferromagnetic and magnetic materials by a predefined minimum distance of at least two feet.
[0053] FIG. 9 illustrates using a frequency dependent beta β for drift and diffusion corrections to calibrate a sensor 102 in a semi-static state and a dynamic state. In an example, the frequency dependent β may take the value β1 or β2 depending on whether or not the sensor's angular frequency ω is greater than a cross-over angular frequency ωc, as shown in FIG. 9, wherein β1 > β2. More specifically, when the sensor's ω < ωc in a semi-static state (i.e., stationary to slow motion), β1 may be used to compensate for gyroscope drift and diffusion contributions. In the semi-static state, the sensor 102 (having the magnetometer 116) should be kept a predefined minimum distance away from ferromagnetic and magnetic materials. When the sensor's ω > ωc in a dynamic motion state, β2 may be used to compensate for both the magnetometer noise and the accelerometer noise contributions in the sensor 102.
[0054] The use of at least two βs in a frequency dependent sensor motion scheme is to make the sensor 102 "snappier." In an example, β1 = 1 for low frequencies (ω < ωc) and β2 = 0.1 for higher frequencies (ω > ωc), where ωc = 0.143 rad/sec is the cross-over frequency. The reason for this approach can be summarized as follows: (a) in the case ω < ωc, which may be a semi-static condition, accelerometer noise should be negligible; therefore a larger value of β (e.g., β1 = 1) can better take care of the divergent gyro errors (diffusion and drift). However, for larger values of β, the sensor 102 would need to be kept a predefined distance (e.g., at least two feet) away from magnetic and ferromagnetic materials to avoid magnetic noise. To keep magnetic noise low, β should remain less than or equal to 1; (b) in the case ω > ωc, which may be a dynamic state condition, a smaller β (e.g., β2 = 0.1) may be used to avoid the higher-frequency accelerometer noise and overall noise in the orientation.
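The two-β selection rule can be sketched directly from the numbers in this paragraph (function and variable names are illustrative only):

```python
OMEGA_C = 0.143  # cross-over angular frequency, rad/s (from the text)
BETA_1 = 1.0     # semi-static gain: strong correction of gyro drift/diffusion
BETA_2 = 0.1     # dynamic gain: suppress accelerometer/magnetometer noise

def select_beta(omega_rad_s):
    """Choose the fusion gain from the magnitude of the angular rate.

    Rates at or below OMEGA_C are treated as semi-static (use BETA_1);
    anything faster is treated as dynamic motion (use BETA_2).
    """
    return BETA_1 if abs(omega_rad_s) <= OMEGA_C else BETA_2
```

As soon as motion stops and the measured rate falls below the cross-over, the gain snaps from β2 back to β1, which is what gives the sensor its fast recovery to the objective-function minimum.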
[0055] The smaller β may make the sensor motion appear smooth while the sensors are in motion. The smoothness results from less accelerometer noise and less overall noise in the orientation data. However, if the sensor 102 is moved too vigorously, then the orientation quaternion might move away from a minimum of the objective function f (to be explained later); the correct solution for the orientation is found when the orientation quaternion is at a minimum of this objective function. In the case of a single-β solution instead of the proposed at-least-two-β solution, if the sensor 102 stops moving, then the small single value of β would take a longer time to reach the minimum due to the sluggishness of the sensor 102. With the two-β scheme, as soon as the sensor 102 stops moving, the sensor's β transitions from the smaller β (β2 = 0.1) to the larger β (β1 = 1) and the sensor's orientation quickly snaps back to the right answer at the minimum of the objective function. Therefore, the two-β scheme makes the sensor 102 more responsive and more agile in the orientation calculation, making the orientation readings snappier and smoother while still handling the semi-static correction of divergent gyro errors due to diffusion and drift. Note that the same two β values may be used for all the sensors 102; the two β values in the calibration parameters are not unique or specific to a given sensor.
[0056] FIG. 10 is an example mapping of the sensor's frame (local frame) to a World frame. FIG. 11 is an example fusion algorithm flow chart to measure orientation in a sensor 102. The MCU 120 of the sensor 102 may modify Madgwick's fusion algorithm using a frequency dependent β to calculate the orientation quaternion in fixed point. For the sake of convenience, Madgwick's terminology and symbols may be adopted to show the orientation calculations. The World Frame is denoted by the letter E (Earth) and the Sensor's local frame by the symbol S. [0057] The Earth's Frame is fixed to the earth and has coordinates Ex, Ey, Ez. The orientation of the sensor 102 is defined as the quaternion ᴱₛq̂ (hereafter written simply as q̂), which is a rotation that aligns the Sensor's frame with the Earth's frame:
ᴱₛq̂ = [q₁, q₂, q₃, q₄]    (1)
The hat (^) symbol is used to denote a unit quaternion. The job of the fusion algorithm is to periodically calculate the orientation, q̂, after each time step Δt. The value of Δt is dictated by the sampling rate at which raw data can be read from the IMU's sensors (accelerometer 112, gyroscope 114, and magnetometer 116). The orientation q̂ can be calculated in two alternate ways: (i) by using the gyroscope 114 raw data alone, or (ii) by using the raw data obtained from the magnetometer 116 and accelerometer 112, the (M, A) pair.
[0058] As described with respect to FIGs. 8-9, using the gyroscope alone may lead to diffusion and drift errors, while using the (M, A) pair by itself makes the sensor 102 sensitive to accelerometer and magnetic noise. In this example, the orientation q̂ may be obtained from steps (i) or (ii) by modifying Madgwick's fusion algorithm with a frequency dependent β value, which can be summarized by the flow chart of FIG. 11 as follows:
[0059] Step 1102: start from a fully calibrated sensor by reading the calibration parameters 126 from persistent memory. Steps 1104-1106: calculate the initial orientation using the (M, A) pair, then perform the following operations periodically after each time step (or time interval) Δt. Sample raw data from the magnetometer 116, accelerometer 112, and gyroscope 114 at each time interval Δt, and correct the raw data using the calibration parameters. Step 1106: calculate the change in orientation by using the gyroscope raw data only. Step 1108: wait for the next sampled raw data within the time interval Δt. Steps 1106-1108 will lead to small diffusion errors (explained earlier) in orientation due to gyro noise, and some drift due to gyro offset, even in a calibrated gyroscope 114.
[0060] Steps 1110-1112: correct the orientation for these gyro errors by calculating the change in orientation using the (M, A) pair. The amount of this (M, A) change is weighted by at least two parameters, the βs (explained earlier). For typical sensors, the errors introduced by using the gyroscope 114 raw data alone in step (i) are small within the time step Δt, so a small value of β < 1 may be enough to correct for the gyro errors. Keeping the value of β small has the advantage of reducing the dependence of the orientation on accelerometer and magnetic noise. The weighting parameters (for example, β1 = 1.0, β2 = 0.1) may be determined empirically to give the best noise and accuracy in the calculated orientation. Return to steps 1108-1112 and continue indefinitely to correct gyro errors.
[0061] The operations of steps 1108-1112 are explained in more detail in the following sections. In practice, the sequence is to first compute the orientation from the gyro data (i), then carry out the calculations using the (M, A) pair (ii), and finally fuse the two results from steps (i) and (ii) using one of the two values of β (β1 = 1.0 or β2 = 0.1).
[0062] Regarding step 1108, the orientation calculation from gyroscope data: the gyroscope 114 generates the three components of the angular velocity, ωx, ωy, and ωz. The corresponding angular velocity quaternion, ω, may be formed as in equation (2):

ω = [0, ωx, ωy, ωz]    (2)

from which the rate of change of orientation with respect to time can be obtained (⊗ indicates quaternion multiplication) as shown in equation (3):

q̇_ω = ½ q̂ ⊗ ω    (3)
The subscript ω indicates that (in this section) the orientation is obtained from gyro data only, and the subscript t indicates time. To evaluate the orientation q̂_ω,t from q̇_ω,t it is necessary to numerically integrate equation (3) with respect to (w.r.t.) time t. To accomplish the integration, ω_t is sampled periodically at times t, t + Δt, t + 2Δt, ... (henceforth written t, t + 1, t + 2, ... as short-hand for t, t + Δt, t + 2Δt, ...). The integration is performed by estimating the orientation q̂_est,ω,t at time t by using the previously estimated value of the orientation, q̂_est,t−1, at time t − 1. With the passage of time, new terms are added to older terms and a sequence of sums is built up to perform the integration numerically through equations (4)-(5):
q̇_ω,t = ½ q̂_est,t−1 ⊗ ω_t    (4)

q̂_est,ω,t = q̂_est,t−1 + q̇_ω,t Δt    (5)

Note that an initial value of the orientation, q̂_est,t=0, is needed to get the iterative sum started. However, the gyroscope has no way of determining this initial orientation since it can only give changes in the orientation with respect to time. The (M, A) pair is used to determine the initial orientation to get the above integration started. How the (M, A) pair calculates orientation will be explained below.
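The gyro-only update of equations (2)-(5) amounts to one quaternion multiply and one Euler integration step per sample. A minimal sketch (Hamilton convention [w, x, y, z]; names are illustrative, and a renormalization is added to keep the quaternion unit-length):

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product p x q for quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def integrate_gyro(q_prev, omega, dt):
    """Advance the orientation by one gyro sample, per eqs. (2)-(5).

    q_prev: previous orientation estimate (unit quaternion).
    omega:  angular velocity (wx, wy, wz) in rad/s, local frame.
    """
    omega_q = np.array([0.0, *omega])        # eq. (2)
    q_dot = 0.5 * quat_mul(q_prev, omega_q)  # eqs. (3)-(4)
    q = q_prev + q_dot * dt                  # eq. (5)
    return q / np.linalg.norm(q)             # keep it a unit quaternion
```

Calling this in a loop at the sampling period Δt reproduces the iterative sum described above; without the (M, A) correction, noise and offset in `omega` produce exactly the diffusion and drift errors discussed next.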
[0063] If the gyroscope data had no noise or offset errors, then the above integration algorithm would be all that is required to get an accurate sensor orientation (provided the time interval Δt is chosen appropriately small). However, the presence of noise and offset in the gyroscope data causes errors in the orientation q̂_ω,t in the form of diffusion and drift.
[0064] Diffusion errors: For simplicity, assume that the gyroscope data has noise but no offset. If the sensor is at rest (i.e., in a semi-static or static state), then ideally the orientation will not change with time since ω_t would be zero at all times. However, noise in the gyroscope will make the orientation move randomly and diverge away from the initial position in a random manner. This random motion is like the diffusion (or random walk) present in many physical phenomena.
[0065] Similarly, noise in the gyroscope may cause the orientation of the gyroscope 114 to diffuse and diverge away from its starting value, even if the gyroscope 114 is at rest. Therefore, gyro noise will cause the orientation to diverge away from the true orientation of the sensor; the larger the gyro noise, the larger the extent of the diffusion. The (M, A) pair and an appropriate value of β are used in the fusion algorithm to correct for the diffusion caused by noise in the gyroscope 114. Note that the gyroscope 114 does not know the true orientation, only the change of orientation with respect to time; therefore the gyroscope 114 has no way of correcting for diffusion on its own, and needs the help of the (M, A) pair. Diffusion (caused by noise in the gyroscope 114) is also called divergent because it takes the orientation of the sensor 102 away from the correct value.
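A toy simulation (illustrative only; the ±0.5 degrees/second noise magnitude is taken from FIG. 6, while the sample period and step count are assumptions) shows how integrating zero-mean gyro noise makes a resting sensor's angle random-walk away from zero:

```python
import random

def diffuse(noise_deg_s=0.5, dt=0.01, steps=60000, seed=1):
    """Integrate uniform zero-mean rate noise; return the final angle (deg).

    Even though the mean rate is zero, the integrated angle wanders away
    from zero like a random walk: the diffusion error described above.
    """
    rng = random.Random(seed)
    angle = 0.0
    for _ in range(steps):
        angle += rng.uniform(-noise_deg_s, noise_deg_s) * dt
    return angle
```

Running this with different seeds produces different nonzero final angles whose typical magnitude grows with the square root of the elapsed time, which is the signature of diffusion rather than drift.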
[0066] Drift error: If the gyroscope data has a constant offset error, then the integration discussed above results in an error in the orientation q̂_ω,t which increases linearly with time. This error is known as drift. As a one-dimensional example, integration over a constant offset error c leads to ct, a term linear in time t, as in equation (6):

∫₀ᵗ c dt = ct    (6)

This drift error may be corrected by calibrating the gyroscope to remove the offset from the orientation data as described above. If the drift is small, then a small value of β can be used to handle it. In general, using large values of β to remove drift is best avoided, since that makes the orientation more susceptible to accelerometer and magnetometer noise. Note that even though the sensor is motionless (semi-static or static state), the calculated orientation increases linearly with time. Drift makes stationary objects appear to rotate at a constant angular velocity equal to the slope of the drift line.
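Equation (6)'s point can be checked numerically: summing a constant rate offset over time yields an angle that grows linearly with time (a hypothetical sketch using the −3 degrees/second order of offset mentioned earlier; names are illustrative):

```python
def integrate_offset(offset_deg_s, dt, steps):
    """Numerically integrate a constant rate offset: angle(t) = c*t, eq. (6)."""
    angle = 0.0
    for _ in range(steps):
        angle += offset_deg_s * dt
    return angle

# One second of a -3 deg/s offset accumulates about -3 degrees of drift;
# ten seconds accumulates about -30 degrees: growth is linear in time,
# unlike the square-root growth of diffusion.
```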
[0067] Orientation calculation from the (M, A) pair data: This section describes how the data from the magnetometer and accelerometer pair can alternately be used to calculate the orientation. Madgwick's paper provides that the rate of change of orientation with respect to time, q̇_ε,t, may be expressed as equation (7) below:

q̇_ε,t = −∇f / ‖∇f‖    (7)
[0069] Here, ε indicates that data from the (M, A) pair was used to determine the rate of change of orientation. f is a function of the orientation quaternion q̂_ε,t, known as the objective function, ∇f is the gradient of f, and ‖∇f‖ denotes the magnitude (length) of ∇f. The objective function f is a fairly complicated function explained fully in Madgwick's paper, where it is shown that minimizing this function with respect to q̂_ε,t determines the orientation of the sensor. Since ∇ is the gradient operator, −∇f/‖∇f‖ points in the direction of the minimum and vanishes at the minimum, at which point q̂_ε,t becomes the correct value of the orientation. So, determining the value of the orientation reduces to the problem of optimizing f (i.e., finding its minimum), which can be done by using −∇f/‖∇f‖ in conjunction with the steepest descent method.
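As an illustration of the normalized-gradient (steepest descent) step, the sketch below uses only the accelerometer (gravity-alignment) part of Madgwick's objective; the full f described in this disclosure also involves the magnetometer, and all function names here are illustrative assumptions:

```python
import numpy as np

def objective(q, a):
    """Accelerometer residual of Madgwick's objective function.

    Difference between gravity rotated into the sensor frame by
    q = [q1, q2, q3, q4] and the normalized accelerometer reading a.
    """
    q1, q2, q3, q4 = q
    return np.array([
        2.0 * (q2 * q4 - q1 * q3) - a[0],
        2.0 * (q1 * q2 + q3 * q4) - a[1],
        2.0 * (0.5 - q2 ** 2 - q3 ** 2) - a[2],
    ])

def descent_step(q, a, mu=0.1):
    """One steepest descent step: q <- q - mu * grad(f) / ||grad(f)||."""
    q1, q2, q3, q4 = q
    J = np.array([  # Jacobian of the residual with respect to q
        [-2 * q3, 2 * q4, -2 * q1, 2 * q2],
        [2 * q2, 2 * q1, 2 * q4, 2 * q3],
        [0.0, -4 * q2, -4 * q3, 0.0],
    ])
    grad = J.T @ objective(q, a)
    q = q - mu * grad / np.linalg.norm(grad)
    return q / np.linalg.norm(q)
```

Iterating `descent_step` from an arbitrary starting quaternion reduces the residual toward zero; note that because the step size is fixed, the estimate "rattles around" the minimum once it gets close, which is exactly the overshoot behavior discussed in paragraph [0077].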
[0070] Recall that −∇f points in the direction of the minimum of the objective function. Because the orientation is being calculated using two sensors, the (M, A) pair, ∇f is composed of two directions: one caused by changes in the magnetometer data, and the other caused by changes in the accelerometer data. To reach the minimum of f in the most optimal way, these two directions should be close to orthogonal to each other, so that the minimum can be found (by the steepest descent method) in the least number of steps. [0071] It turns out that in Madgwick's original paper, the way ∇f is defined, these two directions are not orthogonal, which can lead to slow dynamic response, i.e., the system can take some time to reach the minimum of f. Madgwick's non-orthogonal method is called the "original" steepest descent method.
[0072] An improvement is made to Madgwick's steepest descent method by using orthogonal directions. More specifically, the magnetometer and accelerometer directions of ∇f are made orthogonal to each other. This can lead to a significant reduction in the number of steps required to reach the minimum, and therefore to better dynamic response. This method is called the "improved" steepest descent method.
[0073] In practice, a full steepest descent minimization using the (M, A) pair is performed only at the beginning of the algorithm, to set the right initial conditions for the integrations in the algorithm. After this step, one could simply (numerically) integrate q̇_ε,t using the equation given above and determine the orientation at the sampling times t, t + 1, t + 2, ... However, these results would be overly susceptible to magnetic and accelerometer noise. Instead, as described in the next section, the orientation results obtained from the gyroscope 114 are fused with those obtained from the (M, A) pair.
[0074] The modification to the fusion algorithm includes fusing the gyroscope and (M, A) pair data by using a frequency dependent value of β. The fusion of the gyro and (M, A) data is obtained by combining the rate of change of orientation obtained from the gyroscope, q̇_ω,t, eq. (4), with the rate of change of orientation obtained from the (M, A) pair, q̇_ε,t, eq. (7), as follows:

q̂_est,t = q̂_est,t−1 + (q̇_ω,t + β q̇_ε,t) Δt    (8)
This equation (8) is the numerical integration performed in the main loop of the fusion algorithm. The frequency dependent β is an adjustable parameter which glues the two results, i.e., the gyroscope's orientation data and the orientation obtained from the (M, A) pair, together. When β = 0, pure gyro results are obtained, eq. (5); when β is large (β1 = 1.0), the second term in the brackets of eq. (8) dominates and eq. (8) reduces to a numerical integration of eq. (7).
[0075] As shown in FIG. 11, starting from a fully calibrated sensor 102, the fusion algorithm may calculate the initial condition (see steps 1102, 1104 in FIG. 11) for the integration in eq. (8) by using the multiple-step steepest descent method (using the M, A pair data) until convergence is obtained, and then periodically calculate the change in orientation by using eq. (8). The integration of the term q̇_ω,t Δt in eq. (8) will lead to diffusion errors due to gyro noise, and may also contain a small amount of drift due to gyro offset, even in a calibrated gyroscope (gyro calibration is discussed later); both of these errors are divergent, as explained earlier. The term β q̇_ε,t Δt in eq. (8) is convergent, i.e., it adjusts the orientation in the correct direction, since it is proportional to −∇f/‖∇f‖, eq. (7), and is directed towards the minimum of the objective function f, i.e., the correct orientation.
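One iteration of the main loop, eq. (8), can be sketched as follows; `q_dot_MA` stands for the (M, A)-pair rate q̇_ε,t of eq. (7), assumed to be computed elsewhere, and all names are illustrative:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product p x q for quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def fuse_step(q_prev, omega, q_dot_MA, beta, dt):
    """Eq. (8): q_est,t = q_est,t-1 + (q_dot_gyro + beta * q_dot_MA) * dt.

    With beta = 0 this reduces to pure gyro integration (eq. (5));
    with a large beta the (M, A) correction term dominates.
    """
    q_dot_gyro = 0.5 * quat_mul(q_prev, np.array([0.0, *omega]))  # eq. (4)
    q = q_prev + (q_dot_gyro + beta * np.asarray(q_dot_MA)) * dt
    return q / np.linalg.norm(q)
```

In the two-β scheme, `beta` would be chosen per sample from the measured angular rate, switching between β1 and β2 at the cross-over frequency ωc.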
[0076] Note that the frequency dependent β directly controls the size of the convergent correction. Keeping β small (β2 = 0.1) has the advantage of reducing the dependence of the orientation on accelerometer and magnetic noise. On the other hand, keeping β large has the advantage of correcting the divergent errors better.
[0077] A large value of β (β1 = 1.0) also has the disadvantage of increasing the noise in the calculated orientation. This can be explained as follows: after the initial full steepest descent calculation, the system comes to the minimum of the objective function. For the rest of the integration loop, the answer remains near this minimum. The divergent terms drive the system away from the minimum, while the convergent term brings it back in the direction of the minimum. However, the convergent term is scaled by β, and depending on the value of β the system will overshoot the minimum by some amount. Thus, the calculated orientation will "rattle around" the minimum with the passage of time. This rattling around due to overshoot appears as noise in the calculated orientation data. Therefore, larger values of β lead to increased noise in the orientation.
[0078] It is shown through the above described methods and systems that the sensor 102 (e.g., using 9-axis inertial modules by STMicroelectronics of Geneva, Switzerland) can achieve high performance with almost zero drift over its useful battery life, finding applications that are typically reserved for higher grade IMUs.
[0079] The improvement in drift performance on low grade IMUs can be achieved by performing the following sequence, as shown in FIG. 11: warming up the IMU 110, calibrating the gyroscope raw data using the factory pre-installed IMU specific calibration data, and afterwards executing a fast-converging fusion algorithm to calculate orientation information from the raw data sampled from the IMU 110. It is shown that Madgwick's fusion algorithm, which is used to calculate orientation in quaternions from the corrected raw data of the IMU, may be improved to respond with more agility and to converge the orientation information faster with minimum drift errors. More specifically, the improved fusion algorithm uses a frequency dependent β to glue the gyroscope's calibrated raw data and the (M, A) pair's calibrated raw data together in orthogonal directions, performing a steepest descent method with fast convergence to a minimum solution.
[0080] Furthermore, it is shown that the disclosed sensor 102 is able to achieve a long battery life through the use of an ultra-low power MCU 120 (e.g., an MSP430 microcontroller by Texas Instruments of Dallas, Texas, U.S.A.) by carrying out fixed point calculations in the fast-convergence fusion algorithm.
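Fixed point arithmetic on an MCU without a floating point unit can be sketched as follows (a generic Q16.16 illustration; the actual format used by the disclosed firmware is not specified here, and all names are assumptions):

```python
Q = 16  # Q16.16 fixed point: 16 integer bits, 16 fractional bits

def to_fix(x):
    """Convert a float to a Q16.16 integer."""
    return int(round(x * (1 << Q)))

def to_float(v):
    """Convert a Q16.16 integer back to a float."""
    return v / (1 << Q)

def fix_mul(a, b):
    """Multiply two Q16.16 values; the wide product is shifted back down.

    Python's >> on integers floors toward negative infinity, matching
    the arithmetic right shift typically used on MCUs.
    """
    return (a * b) >> Q
```

Performing the quaternion multiplies and β-weighted sums of the fusion loop in such a format replaces expensive software floating point with integer multiplies and shifts, which is what enables an ultra-low power MCU to keep up with the sampling rate.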
[0081] The sensor 102, formed by pairing a lower grade IMU 110 with an ultra-low power MCU 120, finds many high-valued applications such as knee arthroplasty, prosthesis apparatuses, robotics, computer-guided surgical procedures, posture corrections in rehabilitation and physical therapy, augmented reality, and possibly remote spacewalk repairs, which opens up a much larger "total addressable market."
[0082] Other systems, methods, features, and advantages will be, or will become, apparent to one with skill in the art upon examination of the figures and detailed description. It is intended that all such additional systems, methods, features, and advantages be included within this description, be within the scope of the disclosure, and be protected by the following claims.

CLAIMS

What is claimed is:
1. A method for measuring orientations in a sensor, the method comprising:
sampling within a periodic time interval At by an inertial measurement unit (IMU), a plurality of IMU data samples generated by a magnetometer, an accelerometer, and a gyroscope comprised in the IMU, wherein each IMU data sample is oriented according to a local frame in the IMU; sending the plurality of IMU data samples to a microcontroller unit (MCU) to generate a corresponding plurality of sensor’s orientation data samples through operations, comprising: reading from an on-board persistent memory, calibration parameters to correct each of the plurality of IMU data samples to stay within a pre-defined range of orientation accuracy and/or drift variations; executing an on-board fusion algorithm in a fixed point to rotate by an amount, to align the local frame of each corrected plurality of IMU data samples to match a world frame, and to transform the corrected plurality of IMU data samples to the corresponding plurality of sensor’s orientation data samples; and transmitting the corresponding plurality of sensor’s orientation data samples to a remote station.
2. The method according to claim 1, wherein the local frame in the IMU forms a right handed X’, Y’, and Z’ coordinate system with respect to a common origin O’, and directions of X’ and Y’ axes are pre-oriented on the IMU, and wherein the world frame forms a right-handed X, Y, and Z coordinate system with respect to a common origin O, with an x-axis that points in a magnetic North direction, a z-axis that points upward with respect to the earth ground, and a y-axis that is perpendicular to an XZ plane.
3. The method according to claim 1, wherein the magnetometer is a 3-axis magnetometer that measures the X’, Y’, and Z’ coordinates according to a magnetic field’s direction vector (North pole) of the IMU in the local frame, wherein the accelerometer is a 3-axis accelerometer that measures the X’, Y’, and Z’ coordinates according to an acceleration of the IMU in the local frame, and wherein the gyroscope is a 3-axis gyroscope that measures the X’, Y’, and Z’ coordinates of an angular velocity of the IMU in the local frame.
4. The method according to claim 1, wherein the calibration parameters are IMU specific, are determined at factory and stored in a persistent memory as firmware to be read by the MCU to correct the IMU data samples generated by the magnetometer, the accelerometer, and the gyroscope, prior to the alignment of the local frame to the world frame.
5. The method according to claim 1, wherein the rotation of the local frame to align with the world frame are calculated in quaternions, and the corresponding plurality of sensor's orientation data samples are transmitted as quaternions to avoid gimbal lock.
6. The method according to claim 1, wherein the sensor is an integrated package device to include at least the IMU, the MCU, the persistent memory, a transceiver, and a battery.
7. The method according to claim 1, comprising using two drift and diffusion correction factors β1 and β2 in the calibration parameters, wherein β1 > β2, and β1 is used to compensate for gyroscope drift and diffusion contributions in a semi-static state where the IMU is to be kept away from ferromagnetic and magnetic materials at a predefined minimum distance, and where a measured angular frequency ω is less than or equal to a cross-over angular frequency ωc, and β2 is used to compensate for both magnetometer noise and accelerometer noise contribution in a dynamic motion state where the measured angular frequency ω is greater than the cross-over angular frequency ωc.
8. The method according to claim 1, wherein orientation accuracy is improved by performing a steepest descent operation to find a minimum of an orientation function f through determining a rate of change of orientation quaternion by comparing a current IMU data sample’s orientation quaternion at time t with a previous IMU data sample’s orientation quaternion at time t-1 in order to determine an amount of rotation required to offset the orientation quaternion to generate the sensor’s orientation data sample.
9. The method according to claim 8, wherein the steepest descent operation is iteratively performed on all subsequent IMU data samples’ orientation quaternions within the time interval At, to generate corresponding subsequent sensor’s orientation data samples.
10. The method according to claim 1, wherein the IMU’s gyroscope offset used for drift correction is determined after a predetermined warm up time.
11. A system for measuring orientations, the system comprises:
an inertial measurement unit (IMU) which samples within a periodic time interval At, a plurality of IMU data samples generated by a magnetometer, an accelerometer, and a gyroscope comprised in the IMU, wherein each IMU data sample is oriented according to a local frame in the IMU; a microcontroller unit (MCU) which receives the plurality of IMU data samples to generate a corresponding plurality of sensor’s orientation data samples, wherein: the MCU reads from an on-board persistent memory, calibration parameters to correct each of the plurality of IMU data samples to stay within a pre-defined range of orientation accuracy and/or drift variations; the MCU executes an on-board fusion algorithm in a fixed point to rotate by an amount, to align the local frame of each corrected plurality of IMU data samples to match a world frame, and to transform the corrected plurality of IMU data samples to the corresponding plurality of sensor’s orientation data samples; and a transceiver that transmits the corresponding plurality of sensor's orientation data samples to a remote station.
12. The system according to claim 11, wherein the local frame in the IMU forms a right handed X’, Y’, and Z’ coordinate system with respect to a common origin O’, and directions of X’ and Y’ axes are pre-oriented on the IMU, and wherein the world frame forms a right-handed X, Y, and Z coordinate system with respect to a common origin O, with an x-axis that points in a magnetic North direction, a z-axis that points upward with respect to the earth ground, and a y-axis that is perpendicular to an XZ plane.
13. The system according to claim 11, wherein the magnetometer is a 3-axis magnetometer that measures the X’, Y’, and Z’ coordinates according to a magnetic field’s direction vector (North pole) of the IMU in the local frame, wherein the accelerometer is a 3-axis accelerometer that measures the X’, Y’, and Z’ coordinates according to an acceleration of the IMU in the local frame, and wherein the gyroscope is a 3-axis gyroscope that measures the X’, Y’, and Z’ coordinates of an angular velocity of the IMU in the local frame.
14. The system according to claim 11, wherein the calibration parameters are IMU-specific, are determined at the factory and stored in a persistent memory as firmware to be read by the MCU to correct the IMU data samples generated by the magnetometer, the accelerometer, and the gyroscope, prior to the alignment of the local frame to the world frame.
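Claim 14 does not specify the form of the stored calibration parameters; a common factory-calibration model (assumed here, not taken from the claims) is a per-axis bias plus a scale/misalignment matrix applied to each raw sample before frame alignment:

```python
import numpy as np

def apply_calibration(raw, bias, scale):
    """Correct one raw 3-axis IMU sample with factory calibration
    parameters, as claim 14's stored values would be used by the MCU.
    The bias-then-scale model is illustrative, not claimed."""
    return scale @ (np.asarray(raw, dtype=float) - bias)
```

An uncalibrated device would store a zero bias and identity scale, making the correction a no-op.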
15. The system according to claim 11, wherein the rotation of the local frame to align with the world frame is calculated in quaternions, and the corresponding plurality of sensor's orientation data samples are transmitted as quaternions to avoid gimbal lock.
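The quaternion rotation that claim 15 refers to is the standard conjugation v' = q ⊗ v ⊗ q⁻¹; a minimal sketch (function names are illustrative) using the Hamilton product:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(v, q):
    """Rotate vector v by unit quaternion q: v' = q * [0,v] * conj(q)."""
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]
```

Because orientation is carried as a single unit quaternion rather than as three sequential Euler angles, no axis alignment can collapse a degree of freedom, which is why transmitting quaternions avoids gimbal lock.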
16. The system according to claim 11, wherein the sensor is an integrated package device that includes at least the IMU, the MCU, the persistent memory, the transceiver, and a battery.
17. The system according to claim 11, comprising using two drift and diffusion correction factors β1 and β2 in the calibration parameters, wherein β1 > β2, and β1 is used to compensate for gyroscope drift and diffusion contributions in a semi-static state, where the IMU is to be kept away from ferromagnetic and magnetic materials at a predefined minimum distance and where a measured angular frequency ω is less than or equal to a cross-over angular frequency ωc, and β2 is used to compensate for both magnetometer noise and accelerometer noise contributions in a dynamic motion state where the measured angular frequency ω is greater than the cross-over angular frequency ωc.
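The two-factor scheme of claim 17 amounts to gain scheduling on the measured angular rate: the larger gain in the semi-static regime, the smaller one during dynamic motion. A minimal sketch, assuming ω is taken as the magnitude of the gyroscope reading (the claim does not fix how ω is measured):

```python
import numpy as np

def fusion_gain(gyro, omega_c, beta1, beta2):
    """Select the correction factor per claim 17: beta1 (the larger
    gain) when the measured angular frequency is at or below the
    cross-over frequency omega_c, beta2 above it. Treating ω as the
    gyro vector's norm is an assumption for illustration."""
    omega = np.linalg.norm(gyro)
    return beta1 if omega <= omega_c else beta2
```

The cross-over frequency ωc thus partitions operation into a regime where gyroscope drift dominates the error budget and one where magnetometer/accelerometer noise dominates.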
18. The system according to claim 11, wherein orientation accuracy is improved by performing a steepest descent operation to find a minimum of an orientation function f through determining a rate of change of orientation quaternion by comparing a current IMU data sample’s orientation quaternion at time t with a previous IMU data sample’s orientation quaternion at time t-1 in order to determine an amount of rotation required to offset the orientation quaternion to generate the sensor’s orientation data sample.
19. The system according to claim 18, wherein the steepest descent operation is iteratively performed on all subsequent IMU data samples' orientation quaternions within the time interval Δt, to generate corresponding subsequent sensor's orientation data samples.
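The steepest-descent fusion of claims 18 and 19 is of the kind described in the Madgwick reference cited below: integrate the gyroscope-predicted rate of change of the orientation quaternion, then subtract a normalized gradient step that pulls the predicted gravity direction toward the measured accelerometer. A sketch of one such update (accelerometer-only objective; the patent's full objective, magnetometer term, and fixed-point arithmetic are omitted):

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def steepest_descent_step(q, gyro, accel, beta, dt):
    """One gradient-corrected orientation update, Madgwick-style."""
    w, x, y, z = q
    a = accel / np.linalg.norm(accel)
    # Objective f: gravity as predicted by q minus the measured direction.
    f = np.array([2.0*(x*z - w*y) - a[0],
                  2.0*(w*x + y*z) - a[1],
                  2.0*(0.5 - x*x - y*y) - a[2]])
    # Jacobian of f with respect to q.
    J = np.array([[-2.0*y,  2.0*z, -2.0*w, 2.0*x],
                  [ 2.0*x,  2.0*w,  2.0*z, 2.0*y],
                  [ 0.0,   -4.0*x, -4.0*y, 0.0]])
    grad = J.T @ f
    n = np.linalg.norm(grad)
    step = grad / n if n > 1e-12 else np.zeros(4)   # already at the minimum
    # Gyro integration minus the weighted gradient step, then renormalize.
    q_dot = 0.5 * quat_mul(q, np.concatenate(([0.0], gyro))) - beta * step
    q = q + q_dot * dt
    return q / np.linalg.norm(q)
```

Iterating this update sample-by-sample within Δt, as claim 19 describes, keeps each new quaternion anchored to the previous one while the gradient term bounds the accumulated gyroscope error.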
20. The system according to claim 11, wherein the IMU's gyroscopic drift is offset after a predetermined warm-up time.
PCT/US2019/066303 2018-12-13 2019-12-13 System and method for motion based alignment of body parts WO2020123988A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862779047P 2018-12-13 2018-12-13
US62/779,047 2018-12-13

Publications (1)

Publication Number Publication Date
WO2020123988A1 true WO2020123988A1 (en) 2020-06-18

Family

ID=71077517

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/066303 WO2020123988A1 (en) 2018-12-13 2019-12-13 System and method for motion based alignment of body parts

Country Status (1)

Country Link
WO (1) WO2020123988A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9068843B1 (en) * 2014-09-26 2015-06-30 Amazon Technologies, Inc. Inertial sensor fusion orientation correction
US20160327396A1 (en) * 2015-05-08 2016-11-10 Sharp Laboratories of America (SLA), Inc. System and Method for Determining the Orientation of an Inertial Measurement Unit (IMU)
US20160363460A1 (en) * 2015-06-12 2016-12-15 7725965 Canada Inc. Orientation model for inertial devices
US20170273665A1 (en) * 2016-03-28 2017-09-28 Siemens Medical Solutions Usa, Inc. Pose Recovery of an Ultrasound Transducer
US20170357332A1 (en) * 2016-06-09 2017-12-14 Alexandru Octavian Balan Six dof mixed reality input by fusing inertial handheld controller with hand tracking
US20170363423A1 (en) * 2016-09-09 2017-12-21 Nextnav, Llc Systems and methods for calibrating unstable sensors

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MADGWICK ET AL.: "Estimation of IMU and MARG orientation using a gradient descent algorithm", 2011 IEEE INTERNATIONAL CONFERENCE ON REHABILITATION ROBOTICS (ICORR), 1 July 2011 (2011-07-01), XP032318422, Retrieved from the Internet <URL:https://fardapaper.ir/mohavaha/uploads/2019/08/Fardapaper-Estimation-of-IMU-and-MARG-orientation-using-a-gradient-descent-algorithm.pdf> [retrieved on 20200207] *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111829516A (en) * 2020-07-24 2020-10-27 大连理工大学 Autonomous pedestrian positioning method based on smart phone
CN111829516B (en) * 2020-07-24 2024-04-05 大连理工大学 Autonomous pedestrian positioning method based on smart phone
CN117288187A (en) * 2023-11-23 2023-12-26 北京小米机器人技术有限公司 Robot pose determining method and device, electronic equipment and storage medium
CN117288187B (en) * 2023-11-23 2024-02-23 北京小米机器人技术有限公司 Robot pose determining method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US11660149B2 (en) Method and apparatus for intraoperative measurements of anatomical orientation
US9351782B2 (en) Medical device motion and orientation tracking system
US9757051B2 (en) Muscular-skeletal tracking system and method
US10646157B2 (en) System and method for measuring body joint range of motion
US11481029B2 (en) Method for tracking hand pose and electronic device thereof
US6786877B2 (en) inertial orientation tracker having automatic drift compensation using an at rest sensor for tracking parts of a human body
US20140134586A1 (en) Orthopedic tool position and trajectory gui
Ong et al. Development of an economic wireless human motion analysis device for quantitative assessment of human body joint
Alves et al. Assisting physical (hydro) therapy with wireless sensors networks
Fall et al. Intuitive wireless control of a robotic arm for people living with an upper body disability
US10845195B2 (en) System and method for motion based alignment of body parts
WO2006126350A1 (en) Encapsulated medical device
US10821047B2 (en) Method for automatic alignment of a position and orientation indicator and device for monitoring the movements of a body part
WO2020123988A1 (en) System and method for motion based alignment of body parts
CN109620104A (en) Capsule endoscope and its localization method and system
EP3325916B1 (en) Method and apparatus for unambiguously determining orientation of a human head in 3d geometric modeling
Cotton et al. Wearable monitoring of joint angle and muscle activity
Li et al. Upper body pose estimation using a visual–inertial sensor system with automatic sensor-to-segment calibration
JP2011033489A (en) Marker for motion capture
Taylor et al. Forward kinematics using imu on-body sensor network for mobile analysis of human kinematics
KR20180096289A (en) Multi-dimensional motion analysis device system and the method thereof
Comotti et al. Inertial based hand position tracking for future applications in rehabilitation environments
Hsu et al. Automatically correcting and compensating measurement data of wearable inertial sensors for gait analysis
Mirella et al. Study of a 7-DoF Wearable Low-Cost IMU System Prototype to Measure Upper-Limb Movements
Mielczarek et al. Neck injury diagnostic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 19896081; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase; Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 19896081; Country of ref document: EP; Kind code of ref document: A1