WO2022051252A1 - Planar robots dead-reckoning with optical flow, wheel encoder and inertial measurement unit - Google Patents

Planar robots dead-reckoning with optical flow, wheel encoder and inertial measurement unit Download PDF

Info

Publication number
WO2022051252A1
WO2022051252A1 PCT/US2021/048365 US2021048365W WO2022051252A1 WO 2022051252 A1 WO2022051252 A1 WO 2022051252A1 US 2021048365 W US2021048365 W US 2021048365W WO 2022051252 A1 WO2022051252 A1 WO 2022051252A1
Authority
WO
WIPO (PCT)
Prior art keywords
velocity
robot
feature
optical flow
linear acceleration
Prior art date
Application number
PCT/US2021/048365
Other languages
French (fr)
Inventor
Zhanlue Zhao
Douglas Charles CARLSON
Yun Li
Bryan A. Cook
Original Assignee
Ceva Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ceva Technologies, Inc. filed Critical Ceva Technologies, Inc.
Priority to KR1020237009562A priority Critical patent/KR20230058652A/en
Priority to EP21864971.3A priority patent/EP4208316A4/en
Priority to JP2023514465A priority patent/JP2023540304A/en
Priority to US18/042,945 priority patent/US20230333560A1/en
Publication of WO2022051252A1 publication Critical patent/WO2022051252A1/en

Links

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/008Manipulators for service tasks
    • B25J11/0085Cleaning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0272Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means comprising means for registering the travel distance, e.g. revolutions of wheels
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/20Control system inputs
    • G05D1/24Arrangements for determining position or orientation
    • G05D1/243Means capturing signals occurring naturally from the environment, e.g. ambient optical, acoustic, gravitational or magnetic signals
    • G05D1/2437Extracting relative motion information
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/20Control system inputs
    • G05D1/24Arrangements for determining position or orientation
    • G05D1/245Arrangements for determining position or orientation using dead reckoning
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2105/00Specific applications of the controlled vehicles
    • G05D2105/10Specific applications of the controlled vehicles for cleaning, vacuuming or polishing
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2109/00Types of controlled vehicles
    • G05D2109/10Land vehicles
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2111/00Details of signals used for control of position, course, altitude or attitude of land, water, air or space vehicles
    • G05D2111/50Internal signals, i.e. from sensors located in the vehicle, e.g. from compasses or angular sensors
    • G05D2111/52Internal signals, i.e. from sensors located in the vehicle, e.g. from compasses or angular sensors generated by inertial navigation means, e.g. gyroscopes or accelerometers
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2111/00Details of signals used for control of position, course, altitude or attitude of land, water, air or space vehicles
    • G05D2111/50Internal signals, i.e. from sensors located in the vehicle, e.g. from compasses or angular sensors
    • G05D2111/54Internal signals, i.e. from sensors located in the vehicle, e.g. from compasses or angular sensors for measuring the travel distances, e.g. by counting the revolutions of wheels
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2111/00Details of signals used for control of position, course, altitude or attitude of land, water, air or space vehicles
    • G05D2111/60Combination of two or more signals
    • G05D2111/63Combination of two or more signals of the same type, e.g. stereovision or optical flow
    • G05D2111/65Combination of two or more signals of the same type, e.g. stereovision or optical flow taken successively, e.g. visual odometry or optical flow
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2111/00Details of signals used for control of position, course, altitude or attitude of land, water, air or space vehicles
    • G05D2111/60Combination of two or more signals
    • G05D2111/67Sensor fusion

Definitions

  • Embodiments of the subject matter disclosed herein relate to navigation in planar robots.
  • Planar robots move on planes and are utilized in applications such as robot vacuum cleaners.
  • Dead-reckoning navigation and simultaneous localization and mapping (SLAM) in planar robots utilize an inertial measurement unit (IMU), an optical flow (OF) sensor and wheel encoders (WE).
  • IMU inertial measurement unit
  • OF optical flow
  • WE wheel encoders
  • the IMU which typically contains an accelerometer and a gyroscope, provides the relative heading and linear acceleration of the planar robot.
  • the OF sensor and WE provide the speed of the planar robot.
  • Dead-reckoning is used to compute the trajectory of the planar robot by updating a current position estimate with the current speed and the previous position of the planar robot.
  • the OF sensor and WE can provide the speed of the planar robot, the measurements provided by the OF sensor and WE are prone to errors and failures. For example, an OF sensor can fail completely if the image quality of the OF is too low. In addition, wheel slips are a common cause of measurement errors for WE. For dead-reckoning navigation, accuracy of the speed estimates determines the position error growth rate; therefore, fusing sensors and obtaining accurate speed estimates are essential for navigating across heterogenous surfaces (e.g., rugs, tile, wood). Accordingly, there is still room for improvement in the navigation of planar robots.
  • heterogenous surfaces e.g., rugs, tile, wood
  • Exemplary embodiments are directed to systems and methods that provide for navigation in planar robots.
  • the linear acceleration from IMU and speeds from the OF sensor and WE are fused to achieve a robust and accurate dead-reckoning position estimate.
  • Bad sensor measurements are rejected, or both OF sensor and WE measurements are combined properly with respect to their accuracies.
  • Measurements from the OF sensor and WE may not be consistent with robot motion, and the errors in the OF sensor and WE depend on running conditions.
  • the linear acceleration of the IMU is always consistent with robot motion.
  • the speed derived from the linear acceleration suffers from a quickly growing integration error, this derived speed becomes comparable with the speeds measured by the OF sensor and WE without motion inconsistency errors in a feature domain through signal processing.
  • the weight of sensor measurements can be derived with respect to their accuracies.
  • a method for estimating a trajectory of a robot comprising: fusing a plurality of robot velocity measurements from a plurality of robot sensors located within a robot to generate a fused robot velocity; and applying Kalman filtering to the fused robot velocity and measured linear acceleration from an inertial measurement unit to compute a current robot location.
  • a robot configured for estimating a trajectory of a robot.
  • the robot comprising: at least one processor configured to fuse a plurality of robot velocity measurements from a plurality of robot sensors located within a robot to generate a fused robot velocity; and the at least one processor configured to apply Kalman filtering to the fused robot velocity and measured linear acceleration from an inertial measurement unit to compute a current robot location.
  • FIG. 1 is a schematic representation of a robot according to an embodiment
  • FIG. 2 is flow chart illustrating a method for estimating a location of a robot according to an embodiment
  • FIG. 3 is a graph of DC-block filter frequency responses according to an embodiment
  • FIG. 4 is a representation of velocity fusion and location and trajectory estimation according to an embodiment
  • FIG 5 is a flowchart of a method for estimating a trajectory of a robot according to an embodiment.
  • a variable displayed in bold text denotes a matrix or vector. Whether the variable is a matrix or vector can be inferred from the context.
  • x stands for the quantity of interest, e.g., linear acceleration or velocity.
  • the quantity A is the frame or coordinate in which x is, e.g., user frame or inertial measurement unit (IMU) frame, and the quantity b is the location with which x is associated, e.g., where the IMU or optical flow (OF) senor is located.
  • the quantity c is the sensor by which x is measured, e.g., OF or wheel encoder (WE).
  • IMU body frame Locations of the IMU, OF, and WE sensors on the robot.
  • the user frame The angular velocity of the robot as [ 0 0 ⁇ z ].
  • the velocities at the location of IMU in the frame U measured by OF and W at time k.
  • Exemplary embodiments provide improved navigation of planar robots.
  • Suitable planar robots include, but are not limited to, a robot vacuum cleaner (RVC).
  • RVC robot vacuum cleaner
  • a RVC system contains two wheels, a left wheel and a right wheel, each with their own encoders and driven by two independent motors.
  • a description of how robotic vacuums work is found at https://electronics.howstuffworks.com/gadgets/home/robotic-vacuuml.htm, the entire contents of which are incorporated herein by reference.
  • the robot includes a left wheel 102 containing a left wheel encoder (WE) sensor and a right wheel 104 containing a right wheel (WE) encoder sensor.
  • the robot also includes an optical flow (OF) sensor 106 and an IMU 108.
  • the IMU 108 contains an accelerometer and a gyroscope and measures the acceleration 110 at the location of the IMU, the angular velocity 112, and the angular position 114 or quaternion q k which is the rotation that rotates the IMU frame 116 to the user frame 118 or a fixed frame with z-axis 120 extending in the direction of gravity.
  • I and U stands for the IMU frame and the user frame.
  • the left subscript i indicates the quantity is associated with the location of the IMU 108, and the right superscript i indicates the quantity is measured by the IMU 108.
  • the discrete time is indicated by k.
  • the OF sensor 106 measures the pixel motion in the OF body frame, which is the relative distance that the sensor moves between two consecutive register readings. When sampled with a constant period, the OF sensor 106 can be treated as a velocity sensor. The OF sensor 106, which is calibrated, measures the velocity 122 at the location of the OF sensor 106, which is denoted The displacement from the OF sensor 106 to the IMU 108 in the IMU frame is denoted which is constant in the IMU frame.
  • the left and right WE sensors measure the velocities at the locations of the left wheel 126 and the right wheel 128 in the IMU frame, respectively, which are denoted as On a planar robot, these two wheels can only move forward or backward.
  • the average is the robot velocity 130 at the middle w 132 between two wheel centers, which is denoted as
  • the displacement 134 from the location w to the IMU 108 in the IMU frame is
  • the robot 100 also includes at least one processor 136 for performing the various calculations and implementing instructions associated with the embodiments described herein.
  • An optical flow sensor measures changes in position by optically acquiring sequential surface images (frames) and mathematically determining the speed and the direction of motion, employing a low resolution, high frame rate camera to capture images of a surface. Consecutive images are fed to an image processing algorithm that produces delta X and Y, the change in position during the intervening interval.
  • a familiar application of an optical flow sensor is in a computer mouse, where the image sensor and processing algorithm are incorporated into a single chip.
  • a suitable OF sensor 106 for some embodiments is the PixArt Imaging PAA 5101 that has both light amplification by stimulated emission of radiation (LASER) and light emitting diode (LED) light sources.
  • LASER stimulated emission of radiation
  • LED light emitting diode
  • LASER stimulated emission of radiation
  • the Image Quality value is high when tracking on tiles and is low when tracking on carpets.
  • LED light emitting diode
  • the Image Quality is high when tracking on carpets and is low when tracking on tiles.
  • the sensor operates by measuring optical flow, surfaces which cause the patterns of light intensity to behave differently will result in inaccurate measurements. Therefore, specular reflection from the surface will impact the measurements as described in Ronald P. M.
  • the accuracy of the WE measurements are terrain surface dependent and are subject to many sources of errors as described in "Introduction to Autonomous Mobile Robots", R. Siegwart and I. Nourbakhsh, MIT Press (2004). These sources of errors include limited resolution during integration (time increments, measurement resolution, etc.), misalignment of the wheels (deterministic), unequal wheel diameter (deterministic), variation in the contact point of the wheel, unequal floor contact (slipping, non-planar surface, etc.), and robot collision and bump induced error. These errors adversely affect the reliability of the WE sensor measurements. These deterministic errors can be significantly reduced by dynamic wheel encoder calibrations. For the other errors, accuracy needs to be estimated for sensor fusion.
  • An IMU 108 provides the orientation and linear acceleration of the robot.
  • the heading changes of a robot can be derived from the orientation.
  • the linear acceleration usually contains the zero-gravity-offset (ZGO) and gravity residuals. Consequently, the direct integration of the linear acceleration suffers from severe integration errors.
  • ZGO zero-gravity-offset
  • the accuracy of the IMU's linear acceleration is not subject to variations in terrains and surface textures. In other words, the linear acceleration measurements are always consistent with robot motion.
  • the linear acceleration derived from the IMU 108 is used to detect OF and WE sensor failures and assess their accuracies.
  • a method for estimating the current position of a robot 200 with good accuracy fuses the velocity measurements from a plurality of robot sensors and uses the fused velocity in a Kalman filter to estimate the current position.
  • the current positions define the robot trajectory.
  • Suitable robots include, but are not limited to, planar robots.
  • the robot is an RVC.
  • each robot sensor in the plurality of robot sensors is calibrated.
  • a given robot sensor is calibrated at step 202 using nominal calibration methods provided by the original equipment manufacturer (OEM) of that given robot sensor.
  • OEM original equipment manufacturer
  • a given robot sensor is calibrated using an in-house advanced calibration method.
  • Suitable robot sensors include OF sensors and WE sensors as described above.
  • the calibrated robot sensors are time aligned at step 204.
  • a robot velocity measurement is obtained from each robot sensor at step 206.
  • an OF sensor velocity measurement is obtained, and at least one WE sensor velocity measurement is obtained.
  • two separate WE sensor velocity measurements are obtained, and the two separate WE sensor velocity measurements are averaged to generate a composite WE sensor velocity measurement.
  • the composite WE sensor velocity measurement is associated with the center of the axes between the left and right wheels, most of, which is also a center of the body of the robot.
  • the robot velocity measurements from the plurality of robot sensors are then fused or combined at step 208.
  • the combination is a weighted combination or weighted sum of the plurality of robot velocity measurements.
  • the fused robot velocity measurements are then processed by a Kalman filter at step 210, i.e., subjected to Kalman filtering, to estimate a current robot location at step 212. Collection and fusion of the robot velocity measurements and processing of the fused velocity measurements with Kalman filtering can be repeated iteratively. This generates a history of current robot locations that can be used to determine a robot trajectory at step 214 over a given time period.
  • the fused robot velocity measurements are processed by the Kalman filter in combination with the linear acceleration obtained from the IMU 108 at step 216 and expressed in the user frame of reference and robot dynamics at step 218.
  • the robot velocity measurements are converted, transformed or translated from the robot velocity measurements at the location of each robot sensor to robot velocity measurements at a common location within the robot at step 220.
  • the robot velocity measurements are converted to an arbitrary common location.
  • each robot velocity measurement is converted from the location of each robot sensor to a corresponding robot velocity measurement at the location of the IMU within the robot. Therefore, each robot velocity measurement is translated from a robot sensor location frame of reference to the IMU location frame of reference.
  • the robot velocity measurements i.e., the OF sensor velocity measurement and the WE sensors velocity measurements are converted to the location of the IMU using the rigid body conversion equation.
  • the corresponding velocities at IMU computed from the OF sensor and WE sensors are shown in equations (2) and (3): where and are the velocities from the OF sensor and WE at their own locations in the IMU frame, respectively, and and are the corresponding velocities at the location of IMU.
  • the resulting robot velocity measurements in the IMU frame of reference are then converted to corresponding robot velocity measurements in a user frame of reference at step 222.
  • the user frame of reference is a frame of reference associated with a user of the robot.
  • the quaternion or rotation matrix from the IMU provides the rotation from IMU frame to the user frame or a fixed coordinate frame.
  • the velocities and at the location of IMU is converted into the following user coordinates as shown in equations (4) and (5):
  • measurements from the IMU 108 are translated to the user frame of reference.
  • the linear acceleration from the IMU 108 in the user frame of reference is computed as follows as shown in equation (6):
  • the accuracy of sensor measurements from the plurality of robot sensors are dependent upon the running or operating conditions of the robot 100.
  • the accuracy of one type of robot sensor under given running or operating conditions can be different than another type of robot sensor under the same running or operating conditions. Therefore, certain types of robot sensors will have greater accuracy than other types of robot sensors under common operating conditions. Therefore, the accuracy of each sensor measurement from each robot sensor is determined based on the running conditions at step 224, and this accuracy is used in determining how to combine or to fuse the robot velocity measurements for use in Kalman filtering.
  • IMU linear acceleration measurements in a feature space are used to access or to determine the accuracies of the plurality of robot velocity measurements reliably.
  • Weights to be applied to the robot velocity measurements from each robot sensor, the OF sensor and the WE sensors are computed with respect to the accuracies associated with those velocity measurements.
  • a robot velocity is derived from the linear acceleration of the robot that is obtained from the IMU.
  • Features are defined for the purpose of comparing the velocities obtained from the robot sensors to velocities calculated from the IMU derived linear acceleration.
  • a feature is generated by a DC-Block filter which transforms a velocity signal, wherein the velocity signal is generated by one of the optical flow sensor, the wheel encoder or the inertial measurement unit.
  • Comparable features from the linear acceleration derived robot velocity and the robot velocity measurements from the plurality of robot sensors are computed and compared for different domains, e.g., a velocity domain and a position domain at step 226. Differences between the comparable features are identified between the IMU and each robot sensor at step 228. These differences express the accuracy of the velocity measurements from each robot sensor and are used to weight the contribution of each robot velocity measurement in the fusion of velocity measurements at step 230. Therefore, fusing the robot velocity measurements uses a weighted sum of the robot velocity measurements of each robot sensor based on the accuracy of each robot sensor under current running or operating conditions of the robot.
  • the linear acceleration from the IMU is immune to errors induced by the running conditions that adversely affect the OF sensor and WE sensors
  • the linear acceleration is used to assess the accuracies of and Expressing the true velocity at the location of the IMU as at instant k, yields the following relationships between the velocities obtained from the OF sensor and WE sensors and the true velocity as shown in equation (7): where and represent the sensor noises.
  • the velocity derived from the linear acceleration is expressed by the following equation (8):
  • the calculated velocity is close to the true velocity, thus, and with expressing the effect of sensor errors. Stated another way, the velocity derived from the linear acceleration and are comparable or similar without running condition induced errors. The same goes for and However, usually contains ZGO and gravity residuals. Integration of accumulates these errors quickly, making the calculated velocity far different from the true velocity and invalidating the above assumptions.
  • a DC-block filter is applied before and after each integration operation so that only the signal variation is kept.
  • a DC-block filter is a recursive filter specified by the difference equation (9):
  • the DC- block filter and appropriate conversions are applied to the velocity measurements of the OF sensor, the WE sensors and the IMU.
  • This converts the sensor measurements into transformed signals in a transform domain.
  • the transformed signal is referred to as a feature of the original signal, i.e., the original velocity measurement.
  • the transformed signals, i.e., features are comparable, and the differences between the features are used to assess sensor measurements accuracy.
  • the sensor measurements are converted for multiple transform domains, e.g., velocity domain and position domain.
  • a well-designed DC-block filter is used to compute the features of the original signals, i.e., the original sensor measurements, in the velocity domain and position domain that are used to compare the IMU linear acceleration derived velocity with each robot velocity measurement from each robot sensor.
  • the DC-blocked filter is applied to the linear velocity that is derived from the IMU linear acceleration The resulting feature is shown in equation (10):
  • the innermost removes the DC component in gravity residual.
  • the outside removes the accumulated error due to the integration.
  • the left subscript v in indicates the feature is in velocity domain.
  • the left subscript p in indicates the feature is in position domain.
  • a correlation coefficient between the computed features is calculated and is used to determine the closeness or similarity between the IMU measurements and each robot sensor measurement, i.e., between OF measurements and IMU measurements and between WE measurements and IMU measurements.
  • the similarity metrics are computed between the k 0 + 1 feature samples within the most recent t 0 seconds as shown in equations (19) and (20): where is the sample sequence of feature from to k reshaped as a vector, and is the sample sequence of feature from to k reshaped as a vector .
  • a threshold e.g., 0.5
  • OF sensor measurements suffer from running condition errors, e.g. image glitches, and when is high enough or beyond a threshold, e.g., 0.95, OF sensor measurements have acceptable accuracy.
  • WE measurements suffer from running condition errors, and when is high enough beyond a threshold, e.g., 0.95, WE measurements have acceptable accuracy. Therefore, when is high enough or beyond a threshold, e.g., 0.95, while is low enough or below a threshold, e.g., 0.5, the fused velocity is That is, the weight assigned to the WE sensor velocity measurement is zero, and the weight assigned to the OF sensor velocity measurement is 1.
  • the fused velocity should be That is, the weight assigned to the OF sensor velocity measurement is zero, and the weight assigned to the WE sensor velocity measurement is 1.
  • the fused velocity is calculated as a weighted sum of and with the weights used in the weighted sum determined by the following procedure.
  • n k and n k are the measurement noises and gaussian distributed with distributions of respectively.
  • the likelihood of given is defined as and the likelihood of given is defined as With the initial probabilities of 0-5, the weights by Bayesian formula are shown in equations (23) and (24):
  • the likelihood ' s approximated by the most recent N+l samples, which are obtained within most recent 0.2 or 0.3 seconds, of the linear acceleration samples and the Nth previous measurement as shown below in equation (26):
  • features in the velocity domain are used to compute the weights. As defined in equations (32)-(34):
  • the likelihood is approximately by defining and are most recent N+l features, which are the samples within the most recent 0.2 or
  • the robot is assumed to be running at constant acceleration. Its state is where x k and y k are the position at instant k.
  • Fand H are shown below in equations (40) and (41), respectively: and w k and n k are the dynamic process noise and measurement noise. It is assumed that w k is of Gaussian distribution with zero mean and covariance of Q k and that n k is of Gaussian distribution with zero mean and covariance of R k . T is the sampling period.
  • the computed robot position is a
  • Figure 4 The velocity fusion and position estimation described herein are summarized in Figure 4. Further, Figure 4 can be mapped to Figure 2 for many of the steps, i.e., in some steps, Figure 4 shows additionally details, for example, various equations described herein and specific inputs, for certain steps described in Figure 2. Therefore, the interested reader is encouraged to compare Figure 2 with Figure 4 for ease of understanding.
  • flowchart 400 includes inputs from the optical flow 402, the IMU 404 and the wheel encoder 406.
  • Blocks 408 and 410 represent processing the various inputs shown to provide velocity outputs associated with the WE 406 and the OF 402, respectively.
  • the velocity outputs from blocks 408 and 410 are then merged with information from the IMU 404 as shown in step 412 described as rotate to user frame by R. Integrate to velocity occurs in step 414.
  • the DC-block filter is applied before and after each integration operation so that only the signal variation is kept as shown in step 416.
  • Computing feature difference occurs in step 418, followed by the computing weights in step 420. Fusion of the velocities of the WE 406 and OF 402 occurs in step 424.
  • Block 424 represents inputting the robot dynamics and linear acceleration, with the output being sent to the Kalman filter for filtering as shown in step 426.
  • FIG. 500 there is a flowchart 500 of a method for estimating a trajectory of a robot as shown in Figure 5.
  • the method includes: in step 502, fusing a plurality of robot velocity measurements from a plurality of robot sensors located within a robot to generate a fused robot velocity; and in step 504, applying Kalman filtering to the fused robot velocity and measured linear acceleration from an inertial measurement unit to compute a current robot location.
  • Embodiments describe fusing the linear acceleration from the IMU 404, the velocity measurements from the OF sensor 402 and the measurements from the WE sensor 406 to estimate the robot position.
  • the linear acceleration from the IMU is immune to the running condition induced measurement errors that OF sensor 402 and WE sensor 406 are prone to, e.g., slips of the encoder wheel or low quality caused by the floor texture.
  • the linear acceleration are not directly used to assess the accuracies of the OF sensor 402 and WE sensor 406, after converting their information into feature domains, the similarity metric can be used to reject the bad OF and/or WE measurements and the derived weights can be used to fuse the OF and WE measurements.
  • the robot position is estimated via Kalman filter. The embodiments described herein in a real system exhibit good performance, robustness and are computationally efficient.
  • Systems and methods for processing data according to exemplary embodiments of the present invention can be performed by one or more processors executing sequences of instructions contained in a memory device. Such instructions may be read into the memory device from other computer-readable mediums such as secondary data storage device(s). Execution of the sequences of instructions contained in the memory device causes the processor to operate, for example, as described above. In alternative embodiments, hard-wire circuitry may be used in place of or in combination with software instructions to implement the present invention.
  • Such software may run on a processor which is housed within the device, e.g., a robot or other device, which contains the sensors or the software may run on a processor or computer housed within another device, e.g., a system controller, a game console, a personal computer, etc., which is in communication with the device containing the sensors.
  • data may be transferred via wireline or wirelessly between the device containing the sensors and the device containing the processor which runs the software which performs the bias estimation and compensation as described above.
  • some of the processing described above may be performed in the device containing the sensors, while the remainder of the processing is performed in a second device after receipt of the partially processed data from the device containing the sensors.
  • non-transitory computer-readable storage media include, but are not limited to, a read only memory (ROM), random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs).
  • ROM read only memory
  • RAM random access memory
  • register cache memory
  • semiconductor memory devices magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs).
  • processing platforms, computing systems, controllers, and other devices containing processors are noted. These devices may contain at least one Central Processing Unit ("CPU") and memory.
  • CPU Central Processing Unit
  • FIG. 1 A block diagram illustrating an exemplary computing system
  • FIG. 1 A block diagram illustrating an exemplary computing system
  • FIG. 1 A block diagram illustrating an exemplary computing system
  • FIG. 1 A block diagram illustrating an exemplary computing system
  • FIG. 1 A block diagram illustrating an exemplary computing system
  • processors may contain at least one Central Processing Unit ("CPU") and memory.
  • CPU Central Processing Unit
  • IMU 108 could include a processor.
  • the processor/processing unit could be located where desired within the robot 100 to perform the various calculations, estimations, image processing and the like described herein.
  • an electrical system represents data bits that can cause a resulting transformation or reduction of the electrical signals and the maintenance of data bits at memory locations in a memory system to thereby reconfigure or otherwise alter the CPU's operation, as well as other processing of signals.
  • the memory locations where data bits are maintained are physical locations that have particular electrical, magnetic, optical, or organic properties corresponding to or representative of the data bits. It should be understood that the representative embodiments are not limited to the above-mentioned platforms or CPUs and that other platforms and CPUs may support the provided methods.
  • the data bits may also be maintained on a computer readable medium including magnetic disks, optical disks, and any other volatile (e.g., Random Access Memory (“RAM”)) or non-volatile (e.g., Read-Only Memory (“ROM”)) mass storage system readable by the CPU.
  • the computer readable medium may include cooperating or interconnected computer readable medium, which exist exclusively on the processing system or are distributed among multiple interconnected processing systems that may be local or remote to the processing system. It is understood that the representative embodiments are not limited to the above-mentioned memories and that other platforms and memories may support the described methods.
  • any of the operations, processes, etc. described herein may be implemented as computer-readable instructions stored on a computer-readable medium.
  • the computer-readable instructions may be executed by a processor of a mobile unit, a network element, and/or any other computing device.
  • Suitable processors include, by way of example, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs); Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), and/or a state machine.
  • DSP digital signal processor
  • ASICs Application Specific Integrated Circuits
  • ASSPs Application Specific Standard Products
  • FPGAs Field Programmable Gate Arrays
  • ASICs Application Specific Integrated Circuits
  • FPGAs Field Programmable Gate Arrays
  • DSPs digital signal processors
  • a signal bearing medium examples include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a CD, a DVD, a digital tape, a computer memory, etc., and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
  • a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a CD, a DVD, a digital tape, a computer memory, etc.
  • a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
  • any two components so associated may also be viewed as being “operably connected”, or “operably coupled”, to each other to achieve the desired functionality, and any two components capable of being so associated may also be viewed as being “operably couplable” to each other to achieve the desired functionality.
  • operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
  • the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”
  • the terms “any of” followed by a listing of a plurality of items and/or a plurality of categories of items, as used herein, are intended to include “any of,” “any combination of,” “any multiple of,” and/or “any combination of multiples of” the items and/or the categories of items, individually or in conjunction with other items and/or other categories of items.
  • the term “set” or “group” is intended to include any number of items, including zero.
  • the term “number” is intended to include any number, including zero.
  • a range includes each individual member.
  • a group having 1-3 cells refers to groups having 1, 2, or 3 cells.
  • a group having 1-5 cells refers to groups having 1, 2, 3, 4, or 5 cells, and so forth.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Fuzzy Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)
  • Navigation (AREA)
  • Electric Vacuum Cleaner (AREA)
  • Traffic Control Systems (AREA)

Abstract

Systems and methods provide for estimating a trajectory of a robot by fusing a plurality of robot velocity measurements from a plurality of robot sensors located within a robot to generate a fused robot velocity based on the accuracy of those robot velocity measurements and applying Kalman filtering to the fused robot velocity to compute a current robot location.

Description

PLANAR ROBOTS DEAD-RECKONING WITH OPTICAL FLOW, WHEEL ENCODER AND INERTIAL MEASUREMENT UNIT
RELATED APPLICATION
[0001] The present application relates to, and claims priority from, U.S. Provisional Patent
Application No. 63/073,311 filed September 1, 2020, entitled "PLANAR ROBOTS DEADRECKONING WITH OPTICAL FLOW, WHEEL ENCODER AND INERTIAL MEASUREMENT UNIT", the disclosure of which is incorporated here by reference.
TECHNICAL FIELD
[0002] Embodiments of the subject matter disclosed herein relate to navigation in planar robots.
BACKGROUND
[0003] Planar robots move on planes and are utilized in applications such as robot vacuum cleaners. Dead-reckoning navigation and simultaneous localization and mapping (SLAM) in planar robots utilize an inertial measurement unit (IMU), an optical flow (OF) sensor and wheel encoders (WE). The IMU, which typically contains an accelerometer and a gyroscope, provides the relative heading and linear acceleration of the planar robot. The OF sensor and WE provide the speed of the planar robot. Dead-reckoning is used to compute the trajectory of the planar robot by updating a current position estimate with the current speed and the previous position of the planar robot.
[0004] Although the OF sensor and WE can provide the speed of the planar robot, the measurements provided by the OF sensor and WE are prone to errors and failures. For example, an OF sensor can fail completely if the image quality of the OF is too low. In addition, wheel slips are a common cause of measurement errors for WE. For dead-reckoning navigation, accuracy of the speed estimates determines the position error growth rate; therefore, fusing sensors and obtaining accurate speed estimates are essential for navigating across heterogenous surfaces (e.g., rugs, tile, wood). Accordingly, there is still room for improvement in the navigation of planar robots.
SUMMARY
[0005] Exemplary embodiments are directed to systems and methods that provide for navigation in planar robots. The linear acceleration from IMU and speeds from the OF sensor and WE are fused to achieve a robust and accurate dead-reckoning position estimate. Bad sensor measurements are rejected, or both OF sensor and WE measurements are combined properly with respect to their accuracies. Measurements from the OF sensor and WE may not be consistent with robot motion, and the errors in the OF sensor and WE depend on running conditions. However, the linear acceleration of the IMU is always consistent with robot motion. In addition, although the speed derived from the linear acceleration suffers from a quickly growing integration error, this derived speed becomes comparable with the speeds measured by the OF sensor and WE without motion inconsistency errors in a feature domain through signal processing. Moreover, the weight of sensor measurements can be derived with respect to their accuracies.
[0006] According to an embodiment there is a method for estimating a trajectory of a robot. The method comprising: fusing a plurality of robot velocity measurements from a plurality of robot sensors located within a robot to generate a fused robot velocity; and applying Kalman filtering to the fused robot velocity and measured linear acceleration from an inertial measurement unit to compute a current robot location.
[0007] According to an embodiment there is a robot configured for estimating a trajectory of a robot. The robot comprising: at least one processor configured to fuse a plurality of robot velocity measurements from a plurality of robot sensors located within a robot to generate a fused robot velocity; and the at least one processor configured to apply Kalman filtering to the fused robot velocity and measured linear acceleration from an inertial measurement unit to compute a current robot location.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The accompanying drawings illustrate exemplary embodiments, wherein:
[0009] FIG. 1 is a schematic representation of a robot according to an embodiment;
[0010] FIG. 2 is flow chart illustrating a method for estimating a location of a robot according to an embodiment;
[0011] FIG. 3 is a graph of DC-block filter frequency responses according to an embodiment;
[0012] FIG. 4 is a representation of velocity fusion and location and trajectory estimation according to an embodiment; and
[0013] FIG 5 is a flowchart of a method for estimating a trajectory of a robot according to an embodiment.
DETAILED DESCRIPTION
[0014] The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims.
[0015] As used herein, a variable displayed in bold text denotes a matrix or vector. Whether the variable is a matrix or vector can be inferred from the context. For a variable jjx , x stands for the quantity of interest, e.g., linear acceleration or velocity. The quantity A is the frame or coordinate in which x is, e.g., user frame or inertial measurement unit (IMU) frame, and the quantity b is the location with which x is associated, e.g., where the IMU or optical flow (OF) senor is located. The quantity c is the sensor by which x is measured, e.g., OF or wheel encoder (WE).
[0016] In general, the following symbols and variables are used throughout this specification: IMU body frame. Locations of the IMU, OF, and WE sensors on the robot. The user frame. The angular velocity of the robot as [ 0 0 ωz].
The displacement from OF to IMU in the IMU frame.
The displacement from WE to IMU in the IMU frame.
The acceleration at the location of IMU in the user frame U, measured by IMU at time k.
The velocities at the location of IMU in the frame U, measured by OF and W at time k. The true velocity at the location of IMU in the user frame U at time k.
The fused velocity at the location of IMU in the user frame U at time
K. The features computed from in velocity domain.
Figure imgf000006_0002
The Gaussian distribution of x with mean m and covariance Φ.
Figure imgf000006_0001
[0017] Exemplary embodiments provide improved navigation of planar robots. Suitable planar robots include, but are not limited to, a robot vacuum cleaner (RVC). In general, a RVC system contains two wheels, a left wheel and a right wheel, each with their own encoders and driven by two independent motors. A description of how robotic vacuums work is found at https://electronics.howstuffworks.com/gadgets/home/robotic-vacuuml.htm, the entire contents of which are incorporated herein by reference.
[0018] Referring initially to Fig. 1, an embodiment of a robot 100 is illustrated. The robot includes a left wheel 102 containing a left wheel encoder (WE) sensor and a right wheel 104 containing a right wheel (WE) encoder sensor. The robot also includes an optical flow (OF) sensor 106 and an IMU 108. The IMU 108 contains an accelerometer and a gyroscope and measures the acceleration 110 at the location of the IMU, the angular velocity 112, and the angular
Figure imgf000007_0001
position 114 or quaternion qk which is the rotation that rotates the IMU frame 116 to the user
Figure imgf000007_0002
frame 118 or a fixed frame with z-axis 120 extending in the direction of gravity. For a planar robot, it is also the angular position of the robot. In and I and U stands for the IMU frame and
Figure imgf000007_0003
Figure imgf000007_0004
the user frame. The left subscript i indicates the quantity is associated with the location of the IMU 108, and the right superscript i indicates the quantity is measured by the IMU 108. The discrete time is indicated by k.
[0019] The OF sensor 106 measures the pixel motion in the OF body frame, which is the relative distance that the sensor moves between two consecutive register readings. When sampled with a constant period, the OF sensor 106 can be treated as a velocity sensor. The OF sensor 106, which is calibrated, measures the velocity 122 at the location of the OF sensor 106, which is denoted The displacement from the OF sensor 106 to the IMU 108 in the IMU frame is denoted
Figure imgf000007_0005
Figure imgf000007_0006
which is constant in the IMU frame.
[0020] After calibration, the left and right WE sensors measure the velocities at the locations of the left wheel 126 and the right wheel 128 in the IMU frame, respectively, which are denoted as On a planar robot, these two wheels can only move forward or backward. The average
Figure imgf000007_0007
is the robot velocity 130 at the middle w 132 between two wheel centers, which is
Figure imgf000007_0008
denoted as The displacement 134 from the location w to the IMU 108 in the IMU frame is
Figure imgf000008_0001
The robot 100 also includes at least one processor 136 for performing the various calculations
Figure imgf000008_0002
and implementing instructions associated with the embodiments described herein.
[0021] An optical flow sensor measures changes in position by optically acquiring sequential surface images (frames) and mathematically determining the speed and the direction of motion, employing a low resolution, high frame rate camera to capture images of a surface. Consecutive images are fed to an image processing algorithm that produces delta X and Y, the change in position during the intervening interval. A familiar application of an optical flow sensor is in a computer mouse, where the image sensor and processing algorithm are incorporated into a single chip.
[0022] The accuracy of the OF measurements are light source and surface texture dependent. A suitable OF sensor 106 for some embodiments is the PixArt Imaging PAA 5101 that has both light amplification by stimulated emission of radiation (LASER) and light emitting diode (LED) light sources. When LASER is chosen for illumination, the Image Quality value is high when tracking on tiles and is low when tracking on carpets. On the contrary, when LED is chosen, the Image Quality is high when tracking on carpets and is low when tracking on tiles. Because the sensor operates by measuring optical flow, surfaces which cause the patterns of light intensity to behave differently will result in inaccurate measurements. Therefore, specular reflection from the surface will impact the measurements as described in Ronald P. M. Chan et al., "Characterization of Low-cost Optical Flow Sensors", Proceeding of Australasian Conference on Robotics and Automation, Brisbane, Australia, (2010) and "Outdoor downward-facing optical flow odometry with commodity sensors." In: Howard A, lagnemma K, Kelly A (eds) Field and service robotics, Springer tracts in advanced robotics, vol 62, Springer, Berlin, p 183-193. In order to provide for the fusion of measurements from different sensors, the failures of the OF sensor need to be reliably detected, and the accuracy of the OF sensor needs to be assessed when the OF sensor is running on different surfaces. [0023] Regarding the accuracy of the WE, a robot is usually equipped with wheel odometry to measure its moving distance. The accuracy of the WE measurements are terrain surface dependent and are subject to many sources of errors as described in "Introduction to Autonomous Mobile Robots", R. Siegwart and I. Nourbakhsh, MIT Press (2004). These sources of errors include limited resolution during integration (time increments, measurement resolution, etc.), misalignment of the wheels (deterministic), unequal wheel diameter (deterministic), variation in the contact point of the wheel, unequal floor contact (slipping, non-planar surface, etc.), and robot collision and bump induced error. These errors adversely affect the reliability of the WE sensor measurements. These deterministic errors can be significantly reduced by dynamic wheel encoder calibrations. For the other errors, accuracy needs to be estimated for sensor fusion.
[0024] An IMU 108 provides the orientation and linear acceleration of the robot. The heading changes of a robot can be derived from the orientation. The linear acceleration usually contains the zero-gravity-offset (ZGO) and gravity residuals. Consequently, the direct integration of the linear acceleration suffers from severe integration errors. However, the accuracy of the IMU's linear acceleration is not subject to variations in terrains and surface textures. In other words, the linear acceleration measurements are always consistent with robot motion. After filtering and signal processing, the linear acceleration derived from the IMU 108 is used to detect OF and WE sensor failures and assess their accuracies.
[0025] Referring to Fig. 2, an exemplary embodiment of a method for estimating the current position of a robot 200 with good accuracy is illustrated. The method fuses the velocity measurements from a plurality of robot sensors and uses the fused velocity in a Kalman filter to estimate the current position. When performed over time, the current positions define the robot trajectory. Suitable robots include, but are not limited to, planar robots. In one embodiment, the robot is an RVC. Initially, each robot sensor in the plurality of robot sensors is calibrated. In one embodiment, a given robot sensor is calibrated at step 202 using nominal calibration methods provided by the original equipment manufacturer (OEM) of that given robot sensor. Alternatively, a given robot sensor is calibrated using an in-house advanced calibration method. Suitable robot sensors include OF sensors and WE sensors as described above. The calibrated robot sensors are time aligned at step 204. A robot velocity measurement is obtained from each robot sensor at step 206. In one embodiment, an OF sensor velocity measurement is obtained, and at least one WE sensor velocity measurement is obtained. In one embodiment, two separate WE sensor velocity measurements are obtained, and the two separate WE sensor velocity measurements are averaged to generate a composite WE sensor velocity measurement. The composite WE sensor velocity measurement is associated with the center of the axes between the left and right wheels, most of, which is also a center of the body of the robot.
[0026] The robot velocity measurements from the plurality of robot sensors are then fused or combined at step 208. In one embodiment, the combination is a weighted combination or weighted sum of the plurality of robot velocity measurements. The fused robot velocity measurements are then processed by a Kalman filter at step 210, i.e., subjected to Kalman filtering, to estimate a current robot location at step 212. Collection and fusion of the robot velocity measurements and processing of the fused velocity measurements with Kalman filtering can be repeated iteratively. This generates a history of current robot locations that can be used to determine a robot trajectory at step 214 over a given time period. In one embodiment the fused robot velocity measurements are processed by the Kalman filter in combination with the linear acceleration obtained from the IMU 108 at step 216 and expressed in the user frame of reference and robot dynamics at step 218.
[0027] Since each robot sensor measures quantities at the physical location of that robot sensor and the IMU 108 measures the linear acceleration at the location of the IMU, in one embodiment, the robot velocity measurements, e.g., the OF sensor velocity measurement and the WE sensor velocity measurements, are converted, transformed or translated from the robot velocity measurements at the location of each robot sensor to robot velocity measurements at a common location within the robot at step 220. In one embodiment, the robot velocity measurements are converted to an arbitrary common location. In one embodiment, each robot velocity measurement is converted from the location of each robot sensor to a corresponding robot velocity measurement at the location of the IMU within the robot. Therefore, each robot velocity measurement is translated from a robot sensor location frame of reference to the IMU location frame of reference.
[0028] Different conversion choices for converting the robot velocity measurements will introduce different conversion errors. In one embodiment, all quantities from the robot velocity measurements are converted to the location of IMU, i.e., the OF sensor velocity measurement and the WE sensors velocity measurements are converted to the location of IMU. On a rigid body, the velocity at a second location, b, is computed from the velocity at a first location, a, using the rigid body angular velocity and displacement from a to b. This is expressed in the following rigid body conversion equation (1):
Figure imgf000011_0001
where and are the velocities at the location b and location a, respectively, ω is the angular
Figure imgf000011_0002
velocity of the rigid body, and is the displacement from a to b in the body frame. The velocity
Figure imgf000011_0004
conversion equation holds in the user frame.
[0029] In one embodiment, the robot velocity measurements, i.e., the OF sensor velocity measurement and the WE sensors velocity measurements are converted to the location of the IMU using the rigid body conversion equation. The corresponding velocities at IMU computed from the OF sensor and WE sensors are shown in equations (2) and (3):
Figure imgf000011_0003
where and are the velocities from the OF sensor and WE at their own locations in the IMU frame, respectively, and and are the corresponding velocities at the location of IMU.
Figure imgf000011_0005
Figure imgf000011_0006
[0030] The resulting robot velocity measurements in the IMU frame of reference are then converted to corresponding robot velocity measurements in a user frame of reference at step 222. The user frame of reference is a frame of reference associated with a user of the robot. In one embodiment, the quaternion or rotation matrix from the IMU provides the rotation from IMU frame to the user frame or a fixed coordinate frame. With such a quaternion q, the velocities and
Figure imgf000012_0003
Figure imgf000012_0004
at the location of IMU is converted into the following user coordinates as shown in equations (4) and (5):
Figure imgf000012_0001
[0031] In addition, measurements from the IMU 108 are translated to the user frame of reference. In one embodiment, the linear acceleration from the IMU 108 in the user frame of reference is computed as follows as shown in equation (6):
Figure imgf000012_0002
[0032] The accuracy of sensor measurements from the plurality of robot sensors, e.g., OF sensors and WE sensors, are dependent upon the running or operating conditions of the robot 100. The accuracy of one type of robot sensor under given running or operating conditions can be different than another type of robot sensor under the same running or operating conditions. Therefore, certain types of robot sensors will have greater accuracy than other types of robot sensors under common operating conditions. Therefore, the accuracy of each sensor measurement from each robot sensor is determined based on the running conditions at step 224, and this accuracy is used in determining how to combine or to fuse the robot velocity measurements for use in Kalman filtering. [0033] Given that IMU measurements are immune or agnostic to the running conditions of robot, in one embodiment, IMU linear acceleration measurements in a feature space are used to access or to determine the accuracies of the plurality of robot velocity measurements reliably.
Weights to be applied to the robot velocity measurements from each robot sensor, the OF sensor and the WE sensors, are computed with respect to the accuracies associated with those velocity measurements. In addition to the robot velocity measurements obtained from the robot sensors, a robot velocity is derived from the linear acceleration of the robot that is obtained from the IMU. Features are defined for the purpose of comparing the velocities obtained from the robot sensors to velocities calculated from the IMU derived linear acceleration. According to some embodiments, a feature is generated by a DC-Block filter which transforms a velocity signal, wherein the velocity signal is generated by one of the optical flow sensor, the wheel encoder or the inertial measurement unit.
[0034] Comparable features from the linear acceleration derived robot velocity and the robot velocity measurements from the plurality of robot sensors are computed and compared for different domains, e.g., a velocity domain and a position domain at step 226. Differences between the comparable features are identified between the IMU and each robot sensor at step 228. These differences express the accuracy of the velocity measurements from each robot sensor and are used to weight the contribution of each robot velocity measurement in the fusion of velocity measurements at step 230. Therefore, fusing the robot velocity measurements uses a weighted sum of the robot velocity measurements of each robot sensor based on the accuracy of each robot sensor under current running or operating conditions of the robot.
[0035] Since the linear acceleration from the IMU is immune to errors induced by the running conditions that adversely affect the OF sensor and WE sensors, the linear acceleration is used to assess the accuracies of and Expressing the true velocity at the location of the
Figure imgf000013_0002
Figure imgf000013_0003
IMU as at instant k, yields the following relationships between the velocities obtained from the
Figure imgf000013_0004
OF sensor and WE sensors and the true velocity as shown in equation (7):
Figure imgf000013_0001
where and represent the sensor noises. Moreover, if is ideal without errors, the velocity
Figure imgf000014_0001
derived from the linear acceleration is expressed by the following equation (8):
Figure imgf000014_0002
The calculated velocity, is close to the true velocity, Thus, and
Figure imgf000014_0003
Figure imgf000014_0004
Figure imgf000014_0005
with expressing the effect of sensor errors. Stated another way, the velocity derived from the
Figure imgf000014_0011
linear acceleration
Figure imgf000014_0006
and
Figure imgf000014_0007
are comparable or similar without running condition induced errors. The same goes for and However, usually contains ZGO and gravity residuals.
Figure imgf000014_0008
Figure imgf000014_0009
Figure imgf000014_0010
Integration of accumulates these errors quickly, making the calculated velocity far different from the true velocity and invalidating the above assumptions.
[0036] To overcome the integration divergence due to the ZGO and gravity residual contained in the acceleration, a DC-block filter is applied before and after each integration operation so that only the signal variation is kept. A DC-block filter is a recursive filter specified by the difference equation (9):
Figure imgf000014_0012
[0037] Referring to Fig. 3, a graph of frequency responses 300 when a = 0.5 306, a = 0.8 304, and a = 0.95 302 is illustrated.
[0038] To obtain the comparable features between the velocity derived from the IMU linear acceleration, the OF sensor velocity measurement and the WE sensor velocity measurement, the DC- block filter and appropriate conversions are applied to the velocity measurements of the OF sensor, the WE sensors and the IMU. This converts the sensor measurements into transformed signals in a transform domain. The transformed signal is referred to as a feature of the original signal, i.e., the original velocity measurement. The transformed signals, i.e., features, are comparable, and the differences between the features are used to assess sensor measurements accuracy. In one embodiment, the sensor measurements are converted for multiple transform domains, e.g., velocity domain and position domain.
[0039] A well-designed DC-block filter is used to compute the features of the
Figure imgf000015_0005
original signals, i.e., the original sensor measurements, in the velocity domain and position domain that are used to compare the IMU linear acceleration derived velocity with each robot velocity measurement from each robot sensor. First, the DC-blocked filter is applied to the linear velocity that is derived from the IMU linear acceleration The resulting feature is shown in equation
Figure imgf000015_0006
Figure imgf000015_0007
(10):
Figure imgf000015_0001
[0040] The innermost removes the DC component in gravity residual. The
Figure imgf000015_0011
Figure imgf000015_0003
outside removes the accumulated error due to the integration. The left subscript v in
Figure imgf000015_0004
Figure imgf000015_0010
indicates the feature is in velocity domain.
[0041] The same DC-block filter is applied to and yielding equation (11):
Figure imgf000015_0002
[0042] After applying the double DC-block filter, the DC component and the first order variation component embedded in the original signal are filtered out, and only the higher order signal variations are kept. The error caused by the ZGO and gravity residual are reduced significantly. The following relations hold for real data as shown in equations (12) and (13):
Figure imgf000015_0008
[0043] These relations bridge the gap between theoretical formulation and practical implementation. To further reduce high frequency sensor noise, the velocities are also converted in the position domain and the DC-block filter is applied as shown in equations (14) - (16):
Figure imgf000015_0009
Figure imgf000016_0001
Then, as shown in equations (17) and (18):
Figure imgf000016_0002
The left subscript p in indicates the feature is in position domain.
Figure imgf000016_0003
[0044] In assessing the feature differences, a correlation coefficient between the computed features is calculated and is used to determine the closeness or similarity between the IMU measurements and each robot sensor measurement, i.e., between OF measurements and IMU measurements and between WE measurements and IMU measurements. The similarity metrics are computed between the k0 + 1 feature samples within the most recent t0 seconds as shown in equations (19) and (20): where is the sample sequence of feature from to k reshaped as a vector, and is the sample sequence of feature from to k reshaped as a vector
Figure imgf000016_0009
.
[0045] When is below a threshold, e.g., 0.5, OF sensor measurements suffer from
Figure imgf000016_0008
running condition errors, e.g. image glitches, and when is high enough or beyond a threshold,
Figure imgf000016_0004
e.g., 0.95, OF sensor measurements have acceptable accuracy. Similarly, when is low enough
Figure imgf000016_0005
below a threshold, e.g., 0.5, WE measurements suffer from running condition errors, and when is
Figure imgf000016_0006
high enough beyond a threshold, e.g., 0.95, WE measurements have acceptable accuracy. Therefore, when is high enough or beyond a threshold, e.g., 0.95, while is low enough or below a
Figure imgf000016_0007
threshold, e.g., 0.5, the fused velocity is That is, the weight assigned to the WE sensor velocity
Figure imgf000017_0001
measurement is zero, and the weight assigned to the OF sensor velocity measurement is 1. When
Figure imgf000017_0002
is high enough or beyond a threshold, e.g., 0.95, while is low enough or below a threshold, e.g.,
Figure imgf000017_0003
0.5, the fused velocity should be That is, the weight assigned to the OF sensor velocity
Figure imgf000017_0004
measurement is zero, and the weight assigned to the WE sensor velocity measurement is 1.
Otherwise, the fused velocity is calculated as a weighted sum of and with the weights used
Figure imgf000017_0005
Figure imgf000017_0006
in the weighted sum determined by the following procedure.
[0046] To derive the weights of the velocities associated with the OF sensor and the WE sensors, the linear acceleration is assumed to be accurate without ZGO or gravity residual, and
Figure imgf000017_0018
and express only sensor measurements. Therefore, as shown in equations (21) and (22),
Figure imgf000017_0008
Figure imgf000017_0009
Figure imgf000017_0007
[0047] Assume nk and nk are the measurement noises and gaussian distributed with distributions of respectively. The likelihood of given
Figure imgf000017_0015
Figure imgf000017_0024
is defined as and the likelihood of given is defined as
Figure imgf000017_0013
Figure imgf000017_0014
Figure imgf000017_0016
Figure imgf000017_0017
With the initial probabilities of 0-5, the weights by Bayesian
Figure imgf000017_0012
Figure imgf000017_0025
formula are shown in equations (23) and (24):
Figure imgf000017_0010
Then the fused velocity is shown in equation (25):
Figure imgf000017_0011
[0048] Given the initial robot velocity and the linear acceleration sequence of
Figure imgf000017_0020
Figure imgf000017_0019
the likelihood of and and are computed.
Figure imgf000017_0022
Figure imgf000017_0023
Figure imgf000017_0021
However, due to the ZGO and gravity residual, the formulas (1) and (2) are not valid for real data. For example, an IMU 108 with 20 milligravity offset error causes about 0.196 m/s error for the speed after one second and about 1.96 m/s error for the speed after 10 seconds. Therefore, two embodiments are used to approximate the likelihoods.
[0049] In one embodiment, the likelihood 's approximated by the most
Figure imgf000018_0001
recent N+l samples, which are obtained within most recent 0.2 or 0.3 seconds, of the linear acceleration samples and the Nth previous measurement as shown below in
Figure imgf000018_0010
Figure imgf000018_0011
equation (26):
Figure imgf000018_0002
[0050] Since in a short period of time, the following approximation holds even with gravity residual and ZGO, as shown in equation (27):
Figure imgf000018_0003
the OF velocity residual is computed from as shown in equation (28):
Figure imgf000018_0004
Assuming that the OF sensor has Gaussian noise with distribution of , then the
Figure imgf000018_0009
likelihood of the residual is given and
Figure imgf000018_0005
Figure imgf000018_0006
Figure imgf000018_0007
Figure imgf000018_0008
[0051] Similarly, the WE velocity residual is computed as shown in equation (29):
Its likelihood is given and Therefore, as shown in equations
Figure imgf000018_0012
(30) and (31):
Figure imgf000018_0013
Both likelihood functions mainly depend on the common data
Figure imgf000019_0013
are assumed approximately equal or their difference is ignored.
[0052] In another embodiment, features in the velocity domain are used to compute the weights. As defined in equations (32)-(34):
Figure imgf000019_0001
[0053] If both the OF sensor and the WE sensor work properly,
Figure imgf000019_0012
Figure imgf000019_0009
It is known that where is the true velocity at IMU in the user
Figure imgf000019_0010
Figure imgf000019_0011
frame. Since, as shown in equation (35),
Figure imgf000019_0002
where nk and nk are the sensor noises, then, as shown in equation (36),
Figure imgf000019_0003
[0054] Assuming are Gaussian distributed with distributions of
Figure imgf000019_0008
respectively, given the likelihood of and the likelihood of Therefore, as shown in equation (37):
Figure imgf000019_0007
Figure imgf000019_0004
[0055] In this embodiment, the likelihood is approximately by defining
Figure imgf000019_0006
Figure imgf000019_0005
and are most recent N+l features, which are the samples within the most recent 0.2 or
Figure imgf000020_0001
0.3 seconds. The most recent N+l samples are used to make assessment more stable. Then the weights are shown below in equations (38) and (39):
Figure imgf000020_0002
Therefore, the fused robot velocity is
Figure imgf000020_0003
[0056] Having fused the velocities of the OF sensor and WE, the robot position is estimated.
For the dynamics of the robot, the robot is assumed to be running at constant acceleration. Its state is
Figure imgf000020_0004
where xk and yk are the position at instant k.
[0057] The dynamics of the robot are given by where
Figure imgf000020_0006
Fand H are shown below in equations (40) and (41), respectively:
Figure imgf000020_0005
and wk and nk are the dynamic process noise and measurement noise. It is assumed that wk is of Gaussian distribution with zero mean and covariance of Qk and that nk is of Gaussian distribution with zero mean and covariance of Rk. T is the sampling period.
[0058] The measurements are The Kalman
Figure imgf000020_0007
filter is applied to estimate position of the robot as shown in equations (43)-(48):
Figure imgf000020_0008
Figure imgf000021_0001
The computed robot position is
Figure imgf000021_0002
[0059] The velocity fusion and position estimation described herein are summarized in Figure 4. Further, Figure 4 can be mapped to Figure 2 for many of the steps, i.e., in some steps, Figure 4 shows additionally details, for example, various equations described herein and specific inputs, for certain steps described in Figure 2. Therefore, the interested reader is encouraged to compare Figure 2 with Figure 4 for ease of understanding.
[0060] Initially, flowchart 400 includes inputs from the optical flow 402, the IMU 404 and the wheel encoder 406. Blocks 408 and 410 represent processing the various inputs shown to provide velocity outputs associated with the WE 406 and the OF 402, respectively. The velocity outputs from blocks 408 and 410 are then merged with information from the IMU 404 as shown in step 412 described as rotate to user frame by R. Integrate to velocity occurs in step 414. Then the DC-block filter is applied before and after each integration operation so that only the signal variation is kept as shown in step 416. Computing feature difference occurs in step 418, followed by the computing weights in step 420. Fusion of the velocities of the WE 406 and OF 402 occurs in step 424. Block 424 represents inputting the robot dynamics and linear acceleration, with the output being sent to the Kalman filter for filtering as shown in step 426.
[0061] According to an embodiment, there is a flowchart 500 of a method for estimating a trajectory of a robot as shown in Figure 5. The method includes: in step 502, fusing a plurality of robot velocity measurements from a plurality of robot sensors located within a robot to generate a fused robot velocity; and in step 504, applying Kalman filtering to the fused robot velocity and measured linear acceleration from an inertial measurement unit to compute a current robot location. [0062] Embodiments describe fusing the linear acceleration from the IMU 404, the velocity measurements from the OF sensor 402 and the measurements from the WE sensor 406 to estimate the robot position. First, the linear acceleration from the IMU is immune to the running condition induced measurement errors that OF sensor 402 and WE sensor 406 are prone to, e.g., slips of the encoder wheel or low quality caused by the floor texture. Second, although the linear acceleration are not directly used to assess the accuracies of the OF sensor 402 and WE sensor 406, after converting their information into feature domains, the similarity metric can be used to reject the bad OF and/or WE measurements and the derived weights can be used to fuse the OF and WE measurements. Third, the robot position is estimated via Kalman filter. The embodiments described herein in a real system exhibit good performance, robustness and are computationally efficient.
[0063] Systems and methods for processing data according to exemplary embodiments of the present invention can be performed by one or more processors executing sequences of instructions contained in a memory device. Such instructions may be read into the memory device from other computer-readable mediums such as secondary data storage device(s). Execution of the sequences of instructions contained in the memory device causes the processor to operate, for example, as described above. In alternative embodiments, hard-wire circuitry may be used in place of or in combination with software instructions to implement the present invention. Such software may run on a processor which is housed within the device, e.g., a robot or other device, which contains the sensors or the software may run on a processor or computer housed within another device, e.g., a system controller, a game console, a personal computer, etc., which is in communication with the device containing the sensors. In such a case, data may be transferred via wireline or wirelessly between the device containing the sensors and the device containing the processor which runs the software which performs the bias estimation and compensation as described above. According to other exemplary embodiments, some of the processing described above may be performed in the device containing the sensors, while the remainder of the processing is performed in a second device after receipt of the partially processed data from the device containing the sensors.
[0064] The above-described exemplary embodiments are intended to be illustrative in all respects, rather than restrictive, of the present invention. Thus, the present invention is capable of many variations in detailed implementation that can be derived from the description contained herein by a person skilled in the art. For example, although the foregoing exemplary embodiments describe, among other things, the use of inertial sensors to detect movement of a device, other types of sensors (e.g., ultrasound, magnetic or optical) can be used instead of, or in addition to, inertial sensors in conjunction with the afore-described signal processing. All such variations and modifications are considered to be within the scope and spirit of the present invention as defined by the following claims. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items.
[0065] Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer readable medium for execution by a computer or processor. Examples of non-transitory computer-readable storage media include, but are not limited to, a read only memory (ROM), random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs).
[0066] Moreover, in the embodiments described above, processing platforms, computing systems, controllers, and other devices containing processors are noted. These devices may contain at least one Central Processing Unit ("CPU") and memory. In accordance with the practices of persons skilled in the art of computer programming, reference to acts and symbolic representations of operations or instructions may be performed by the various CPUs and memories. Such acts and operations or instructions may be referred to as being "executed," "computer executed" or "CPU executed." For example, the IMU 108 could include a processor. Alternatively, the processor/processing unit could be located where desired within the robot 100 to perform the various calculations, estimations, image processing and the like described herein.
[0067] One of ordinary skill in the art will appreciate that the acts and symbolically represented operations or instructions include the manipulation of electrical signals by the CPU. An electrical system represents data bits that can cause a resulting transformation or reduction of the electrical signals and the maintenance of data bits at memory locations in a memory system to thereby reconfigure or otherwise alter the CPU's operation, as well as other processing of signals. The memory locations where data bits are maintained are physical locations that have particular electrical, magnetic, optical, or organic properties corresponding to or representative of the data bits. It should be understood that the representative embodiments are not limited to the above-mentioned platforms or CPUs and that other platforms and CPUs may support the provided methods.
[0068] The data bits may also be maintained on a computer readable medium including magnetic disks, optical disks, and any other volatile (e.g., Random Access Memory ("RAM")) or non-volatile (e.g., Read-Only Memory ("ROM")) mass storage system readable by the CPU. The computer readable medium may include cooperating or interconnected computer readable medium, which exist exclusively on the processing system or are distributed among multiple interconnected processing systems that may be local or remote to the processing system. It is understood that the representative embodiments are not limited to the above-mentioned memories and that other platforms and memories may support the described methods. [0069] In an illustrative embodiment, any of the operations, processes, etc. described herein may be implemented as computer-readable instructions stored on a computer-readable medium. The computer-readable instructions may be executed by a processor of a mobile unit, a network element, and/or any other computing device.
[0070] There is little distinction left between hardware and software implementations of aspects of systems. The use of hardware or software is generally (but not always, in that in certain contexts the choice between hardware and software may become significant) a design choice representing cost vs. efficiency tradeoffs. There may be various vehicles by which processes and/or systems and/or other technologies described herein may be effected (e.g., hardware, software, and/or firmware), and the preferred vehicle may vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle. If flexibility is paramount, the implementer may opt for a mainly software implementation. Alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
[0071] The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples may be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. Suitable processors include, by way of example, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs); Field Programmable Gate Arrays (FPGAs) circuits, any other type of integrated circuit (IC), and/or a state machine. [0072] The present disclosure is not to be limited in terms of the particular embodiments described in this application, which are intended as illustrations of various aspects. Many modifications and variations may be made without departing from its spirit and scope, as will be apparent to those skilled in the art. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly provided as such. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated herein, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and variations are intended to fall within the scope of the appended claims. The present disclosure is to be limited only by the terms of the appended claims, along with the full scope of equivalents to which such claims are entitled. It is to be understood that this disclosure is not limited to particular methods or systems.
[0073] In certain representative embodiments, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), and/or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, may be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and or firmware would be well within the skill of one of skill in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein may be distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a CD, a DVD, a digital tape, a computer memory, etc., and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
[0074] The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely examples, and that in fact many other architectures may be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality may be achieved. Hence, any two components herein combined to achieve a particular functionality may be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermediate components. Likewise, any two components so associated may also be viewed as being "operably connected", or "operably coupled", to each other to achieve the desired functionality, and any two components capable of being so associated may also be viewed as being "operably couplable" to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
[0075] With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
[0076] It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, where only one item is intended, the term "single" or similar language may be used. As an aid to understanding, the following appended claims and/or the descriptions herein may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should be interpreted to mean "at least one" or "one or more"). The same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and
B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" will be understood to include the possibilities of "A" or "B" or "A and B." Further, the terms "any of" followed by a listing of a plurality of items and/or a plurality of categories of items, as used herein, are intended to include "any of," "any combination of," "any multiple of," and/or "any combination of multiples of" the items and/or the categories of items, individually or in conjunction with other items and/or other categories of items. Moreover, as used herein, the term "set" or “group” is intended to include any number of items, including zero. Additionally, as used herein, the term "number" is intended to include any number, including zero. [0077] In addition, where features or aspects of the disclosure are described in terms of Markush groups, those skilled in the art will recognize that the disclosure is also thereby described in terms of any individual member or subgroup of members of the Markush group.
[0078] As will be understood by one skilled in the art, for any and all purposes, such as in terms of providing a written description, all ranges disclosed herein also encompass any and all possible subranges and combinations of subranges thereof. Any listed range can be easily recognized as sufficiently describing and enabling the same range being broken down into at least equal halves, thirds, quarters, fifths, tenths, etc. As a non-limiting example, each range discussed herein may be readily broken down into a lower third, middle third and upper third, etc. As will also be understood by one skilled in the art all language such as "up to," "at least," "greater than," "less than," and the like includes the number recited and refers to ranges which can be subsequently broken down into subranges as discussed above. Finally, as will be understood by one skilled in the art, a range includes each individual member. Thus, for example, a group having 1-3 cells refers to groups having 1, 2, or 3 cells. Similarly, a group having 1-5 cells refers to groups having 1, 2, 3, 4, or 5 cells, and so forth. [0079] Moreover, the claims should not be read as limited to the provided order or elements unless stated to that effect. In addition, use of the terms "means for" in any claim is intended to invoke 35 U.S.C. §112, 6 or means-plus-function claim format, and any claim without the terms
Figure imgf000030_0001
"means for" is not so intended.

Claims

WHAT IS CLAIMED IS:
1. A method for estimating a trajectory of a robot, the method comprising: fusing a plurality of robot velocity measurements from a plurality of robot sensors located within a robot to generate a fused robot velocity; and applying Kalman filtering to the fused robot velocity and measured linear acceleration from an inertial measurement unit to compute a current robot location.
2. The method of claim 1, wherein fusing the plurality of robot velocity measurements comprises fusing an optical flow sensor velocity measurement and a wheel encoder sensor velocity measurement.
3. The method of claim 2, wherein fusing the optical flow sensor velocity measurement and the wheel encoder sensor velocity measurement comprises computing a weighted sum of the optical flow sensor velocity measurement and the wheel encoder sensor velocity measurement.
4. The method of claim 3, wherein fusing the optical flow sensor velocity measurement and the wheel encoder sensor velocity measurement further comprises using the inertial measurement unit linear acceleration derived velocity to determine an optical flow sensor velocity measurement weight and a wheel encoder sensor velocity measurement weight.
5. The method of claim 4, wherein using the inertial measurement unit linear acceleration derived velocity comprises: converting the optical flow sensor velocity measurement, the wheel encoder sensor velocity measurement, and the inertial measurement unit linear acceleration derived velocity into transformed signals in a transform domain, each transformed signal comprising a feature; identifying differences between an inertial measurement unit linear acceleration derived velocity feature and an optical flow sensor velocity measurement feature and the inertial measurement unit linear acceleration derived velocity feature and a wheel encoder sensor velocity measurement feature; and using the identified differences to determine an optical flow sensor velocity measurement weight and a wheel encoder sensor velocity measurement weight.
6. The method of claim 5, wherein using the identified differences comprises: assigning a value of 1 to the optical flow sensor velocity measurement weight and a value of 0 to the wheel encoder sensor velocity measurement weight when a similarity between the inertial measurement unit linear acceleration derived velocity feature and the optical flow sensor velocity measurement feature is at least 0.95 and a similarity between the inertial measurement unit linear acceleration derived velocity feature and the wheel encoder sensor velocity measurement feature is less than 0.5; and assigning a value of 0 to the optical flow sensor velocity measurement weight and a value of 1 to the wheel encoder sensor velocity measurement weight when the similarity between the inertial measurement unit linear acceleration derived velocity feature and the optical flow sensor velocity measurement feature is less than 0.5 and the similarity between the inertial measurement unit linear acceleration derived velocity feature and the wheel encoder sensor velocity measurement feature is at least 0.95.
7. The method of claim 5, wherein using the identified differences comprises using recent samples of linear acceleration extending from a given time and a previous velocity measurement at the given time to assign weights.
8. The method of claim 5, wherein using the identified differences comprises using an optical flow sensor velocity measurement feature, a wheel encoder sensor velocity measurement feature and an inertial measurement unit linear acceleration derived velocity feature in a velocity domain to assign weights.
9. The method of claim 1, wherein applying Kalman filtering further comprises using the inertial measurement unit linear acceleration, the weighted combination of the optical flow velocity and wheel encoder measurements, and robot dynamics in the Kalman filtering.
10. The method of claim 1, wherein the robot is a robot vacuum cleaner.
11. A robot configured for estimating a trajectory of a robot, the robot comprising: at least one processor configured to fuse a plurality of robot velocity measurements from a plurality of robot sensors located within a robot to generate a fused robot velocity; and the at least one processor configured to apply Kalman filtering to the fused robot velocity and measured linear acceleration from an inertial measurement unit to compute a current robot location.
12. The robot of claim 11, wherein to fuse the plurality of robot velocity measurements comprises the at least one processor fusing an optical flow sensor velocity measurement and a wheel encoder sensor velocity measurement.
13. The robot of claim 12, wherein fusing the optical flow sensor velocity measurement and the wheel encoder sensor velocity measurement comprises the at least one processor computing a weighted sum of the optical flow sensor velocity measurement and the wheel encoder sensor velocity measurement.
14. The robot of claim 13, wherein fusing the optical flow sensor velocity measurement and the wheel encoder sensor velocity measurement further comprises the at least one processor using the inertial measurement unit linear acceleration derived velocity to determine an optical flow sensor velocity measurement weight and a wheel encoder sensor velocity measurement weight.
15. The robot of claim 14, wherein using the inertial measurement unit linear acceleration derived velocity comprises: the at least one processor converting the optical flow sensor velocity measurement, the wheel encoder sensor velocity measurement, and the inertial measurement unit linear acceleration derived velocity into transformed signals in a transform domain, each transformed signal comprising a feature; the at least one processor identifying differences between an inertial measurement unit linear acceleration derived velocity feature and an optical flow sensor velocity measurement feature and the inertial measurement unit linear acceleration derived velocity feature and a wheel encoder sensor velocity measurement feature; and the at least one processor using the identified differences to determine an optical flow sensor velocity measurement weight and a wheel encoder sensor velocity measurement weight.
16. The robot of claim 15, wherein using the identified differences comprises: the at least one processor assigning a value of 1 to the optical flow sensor velocity measurement weight and a value of 0 to the wheel encoder sensor velocity measurement weight when a similarity between the inertial measurement unit linear acceleration derived velocity feature and the optical flow sensor velocity measurement feature is at least 0.95 and a similarity between the inertial measurement unit linear acceleration derived velocity feature and the wheel encoder sensor velocity measurement feature is less than 0.5; and the at least one processor assigning a value of 0 to the optical flow sensor velocity measurement weight and a value of 1 to the wheel encoder sensor velocity measurement weight when the similarity between the inertial measurement unit linear acceleration derived velocity feature and the optical flow sensor velocity measurement feature is less than 0.5 and the similarity between the inertial measurement unit linear acceleration derived velocity feature and the wheel encoder sensor velocity measurement feature is at least 0.95.
17. The robot of claim 15, wherein using the identified differences comprises the at least one processor using recent samples of linear acceleration extending from a given time and a previous velocity measurement at the given time to assign weights.
18. The robot of claim 15, wherein using the identified differences comprises the at least one processor using an optical flow sensor velocity measurement feature, a wheel encoder sensor velocity measurement feature and an inertial measurement unit linear acceleration derived velocity feature in a velocity domain to assign weights.
19. The robot of claim 11, wherein applying Kalman filtering further comprises using the inertial measurement unit linear acceleration, the weighted combination of the optical flow velocity and wheel encoder measurements, and robot dynamics in the Kalman filtering.
20. The robot of claim 11, wherein the robot is a robot vacuum cleaner.
21. The method or robot of any of claims 5, 6, 8, 15, 16, or 18, wherein the feature is generated by a DC-Block filter which transforms a velocity signal, wherein the velocity signal is generated by one or more of the optical flow sensor, the wheel encoder or the inertial measurement unit.
PCT/US2021/048365 2020-09-01 2021-08-31 Planar robots dead-reckoning with optical flow, wheel encoder and inertial measurement unit WO2022051252A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020237009562A KR20230058652A (en) 2020-09-01 2021-08-31 Planar robot dead reckoning with optical flow, wheel encoder and inertial measurement unit
EP21864971.3A EP4208316A4 (en) 2020-09-01 2021-08-31 Planar robots dead-reckoning with optical flow, wheel encoder and inertial measurement unit
JP2023514465A JP2023540304A (en) 2020-09-01 2021-08-31 Dead Reckoning of Planar Robots with Optical Flow, Wheel Encoders and Inertial Measurement Devices
US18/042,945 US20230333560A1 (en) 2020-09-01 2021-08-31 Planar robots dead-reckoning with optical flow, wheel encoder and inertial measurement unit

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063073311P 2020-09-01 2020-09-01
US63/073,311 2020-09-01

Publications (1)

Publication Number Publication Date
WO2022051252A1 true WO2022051252A1 (en) 2022-03-10

Family

ID=80492105

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/048365 WO2022051252A1 (en) 2020-09-01 2021-08-31 Planar robots dead-reckoning with optical flow, wheel encoder and inertial measurement unit

Country Status (5)

Country Link
US (1) US20230333560A1 (en)
EP (1) EP4208316A4 (en)
JP (1) JP2023540304A (en)
KR (1) KR20230058652A (en)
WO (1) WO2022051252A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024045489A1 (en) * 2022-08-30 2024-03-07 海尔智家股份有限公司 User similarity calculation method and device based on sweeping robot, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230089452A1 (en) * 2021-09-22 2023-03-23 Hand Held Products, Inc. Apparatuses, computer-implemented methods, and computer program products for improved object pathing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080004796A1 (en) * 2006-06-30 2008-01-03 Wolfgang Hans Schott Apparatus and method for measuring the accurate position of moving objects in an indoor environment
US20120219207A1 (en) * 2009-10-30 2012-08-30 Yujin Robot Co., Ltd. Slip detection apparatus and method for a mobile robot
US20140195185A1 (en) * 2013-01-07 2014-07-10 Kionix, Inc. Angular Velocity Estimation Using a Magnetometer and Accelerometer
US8972057B1 (en) * 2013-01-09 2015-03-03 The Boeing Company Systems and methods for generating a robotic path plan in a confined configuration space
US20160052136A1 (en) * 2014-08-25 2016-02-25 Google Inc. Natural Pitch and Roll

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9427867B2 (en) * 2014-04-02 2016-08-30 The Boeing Company Localization within an environment using sensor fusion
JP7275553B2 (en) * 2018-12-10 2023-05-18 ソニーグループ株式会社 MOBILE BODY, MOBILE BODY CONTROL METHOD AND PROGRAM

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080004796A1 (en) * 2006-06-30 2008-01-03 Wolfgang Hans Schott Apparatus and method for measuring the accurate position of moving objects in an indoor environment
US20120219207A1 (en) * 2009-10-30 2012-08-30 Yujin Robot Co., Ltd. Slip detection apparatus and method for a mobile robot
US20140195185A1 (en) * 2013-01-07 2014-07-10 Kionix, Inc. Angular Velocity Estimation Using a Magnetometer and Accelerometer
US8972057B1 (en) * 2013-01-09 2015-03-03 The Boeing Company Systems and methods for generating a robotic path plan in a confined configuration space
US20160052136A1 (en) * 2014-08-25 2016-02-25 Google Inc. Natural Pitch and Roll

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
See also references of EP4208316A4 *
YI ET AL.: "Kinematic modeling and analysis of skid-steered mobile robots with applications to low-cost inertial-measurement-unit-based motion estimation", IEEE TRANSACTIONS ON ROBOTICS, vol. 25, no. 5, 28 July 2009 (2009-07-28), pages 1087 - 1097, XP011332916, Retrieved from the Internet <URL:https://ieeexplore.ieee.org/abstract/document/5175357> [retrieved on 20211019], DOI: 10.1109/TRO.2009.2026506 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024045489A1 (en) * 2022-08-30 2024-03-07 海尔智家股份有限公司 User similarity calculation method and device based on sweeping robot, and storage medium

Also Published As

Publication number Publication date
KR20230058652A (en) 2023-05-03
JP2023540304A (en) 2023-09-22
EP4208316A4 (en) 2024-04-03
EP4208316A1 (en) 2023-07-12
US20230333560A1 (en) 2023-10-19

Similar Documents

Publication Publication Date Title
JP4967062B2 (en) A method to estimate the appropriate motion of an object using optical flow, kinematics and depth information
US20230333560A1 (en) Planar robots dead-reckoning with optical flow, wheel encoder and inertial measurement unit
KR100886340B1 (en) Apparatus and method for calibrating gyro-sensor of mobile robot
Chong et al. Accurate odometry and error modelling for a mobile robot
JP5219467B2 (en) Method, apparatus, and medium for posture estimation of mobile robot based on particle filter
US8855819B2 (en) Method and apparatus for simultaneous localization and mapping of robot
Kümmerle et al. Simultaneous calibration, localization, and mapping
Kümmerle et al. Simultaneous parameter calibration, localization, and mapping
JP2017215193A (en) Information processing device and method therefor
JP2011209203A (en) Self-position estimating device and self-position estimating method
US20220299321A1 (en) Methods and apparatus for dynamic optical flow sensor calibration by imu in planar robot systems
US20220299543A1 (en) Gyroscope and optical flow sensor scale calibration
US11874666B2 (en) Self-location estimation method
Nobre et al. Drift-correcting self-calibration for visual-inertial SLAM
WO2023273311A1 (en) Method and apparatus for scale calibration and optimization of a monocular visual-inertial localization system
KR20110081701A (en) Calibration apparatus for gyro sensor
KR101619365B1 (en) Apparatus and method for simultaneous localization and mapping of robot
Yu et al. Vins-motion: tightly-coupled fusion of vins and motion constraint
WO2023273310A1 (en) Method and apparatus for optimization of a monocular visual-inertial localization system
KR102406240B1 (en) Robust stereo visual inertial navigation apparatus and method
KR102187311B1 (en) Apparatus and Method for Estimating Initial Position of Mobile Robot
Qin et al. A novel foot contact probability estimator for biped robot state estimation
Zhang et al. A robust regression model for simultaneous localization and mapping in autonomous mobile robot
Kanhere Measurement consensus metric aided lidar localization and mapping
KR101855870B1 (en) Method and apparatus for estimation of rotation in 3 dimensional space

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21864971

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023514465

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20237009562

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021864971

Country of ref document: EP

Effective date: 20230403