WO2016196717A2 - Mobile localization using sparse time-of-flight ranges and dead reckoning
- Publication number
- WO2016196717A2 (PCT/US2016/035396)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- dead reckoning
- sensor
- data
- localization
- positional
- Prior art date
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/86—Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/02—Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
- G01S13/06—Systems determining position data of a target
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/40—Means for monitoring or calibrating
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/497—Means for monitoring or calibrating
Definitions
- Embodiments of the present invention relate, in general, to estimation of an object's position and, more particularly, to mobile localization using sparse time-of-flight ranges and dead reckoning.
- positioning services (i.e., services that identify the position of an object, wireless terminal, or the like)
- a service provider may also use positioning services to provide position-sensitive information such as driving directions, local information on traffic, gas stations, restaurants, hotels, and so on.
- Other applications that may be provided using positioning services include asset tracking services, asset monitoring and recovery services, fleet and resource management, personal-positioning services, autonomous vehicle guidance, conflict avoidance, and so on. These various applications typically require the position of each affected device be monitored by a system or that the device be able to continually update its position and modify its behavior based on its understanding of its position.
- Various systems may be used to determine the position of a device.
- One system uses a map network stored in a database to calculate current vehicle positions. These systems send distance and heading information, derived from either GPS or dead reckoning, to perform map matching. In other versions, Light Detection and Ranging (LiDAR) data and Simultaneous Localization and Mapping (SLAM) are used to identify features surrounding an object using lasers or optics. Map matching calculates the current position based, in one instance, on the network of characteristics stored in a database. Other maps can also be used, such as topographical maps that provide terrain data.
- map matching has inherent inaccuracies because map matching must look back in time and match historical data to observed characteristics of a position. As such, map matching can only calibrate the sensors or serve as a position-determining means when a position is identified on the map. If a unique set of characteristics that matches the sensor's position cannot be found in an existing database, the position derived from this method is ambiguous. Accordingly, on a long straight stretch of highway or in a region with minimal distinguishing geologic or structural features, sensor calibration or position determination using map matching may not occur for a significant period of time, if at all.
- Dead reckoning is another means by which to determine the position of a device.
- dead reckoning is based on knowing an object's starting position and its direction and distance of travel thereafter.
- Current land-based dead reckoning systems use an object's speed sensors, rate gyros, reverse gear hookups, and wheel sensors to "dead reckon" the object position from a previously known position.
- Dead reckoning is susceptible to sensor error and to cumulative errors from aggregation of inaccuracies inherent in time-distance-direction measurements.
- systems that use odometers and reverse gear hookups lack portability due to the required connections.
- the systems are hard to install in different objects due to differing odometer configurations, and odometer data varies with temperature, load, weight, tire pressure and speed.
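- To make the cumulative-error behavior concrete, a minimal two-dimensional dead-reckoning update is sketched below (an illustration only: the function names are hypothetical and a 1% odometry scale error is assumed for the example):

```python
import math

def dead_reckon(x, y, speed_mps, heading_rad, dt_s):
    """Advance a 2D position estimate one step from speed and heading.

    Any bias in the speed or heading inputs (e.g., an odometry scale
    error caused by tire pressure or load) compounds on every update.
    """
    x += speed_mps * dt_s * math.cos(heading_rad)
    y += speed_mps * dt_s * math.sin(heading_rad)
    return x, y

# A 1% odometry scale error over 1 km of straight travel at 1 m/s:
x_true = x_est = 0.0
for _ in range(1000):
    x_true, _ = dead_reckon(x_true, 0.0, 1.00, 0.0, 1.0)
    x_est, _ = dead_reckon(x_est, 0.0, 1.01, 0.0, 1.0)
print(f"accumulated drift: {abs(x_est - x_true):.1f} m")  # ~10 m
```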
- a known way of localizing a robot involves "trilateration", which is a technique that determines position based on distance information from uniquely-identifiable ranging radios.
- the robot may also have a form of odometry estimation on board, in which case the ranges and odometry can be fused in a Kalman filter.
- the vehicle odometer cannot be trusted on its own.
- what is really desired from the ranging radios is not a localization estimate, but a correction that can be applied to the odometer frame of reference to bring it in line with the actual robot's position. This transform should vary slowly over time; as such, it can be treated as approximately constant.
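- One plausible realization of such a slowly varying frame correction is sketched below; this is an assumption-laden illustration, not the claimed method - a simple low-gain update stands in for the Kalman filter, and all names are hypothetical:

```python
import math

class OdomFrameCorrector:
    """Maintain a slowly varying 2D translation that maps odometry-frame
    positions into the world frame, driven by sparse range measurements
    to ranging radios (beacons) at known world positions."""

    def __init__(self, gain=0.1):
        self.dx, self.dy = 0.0, 0.0
        self.gain = gain  # low gain: the transform is assumed near-constant

    def update(self, odom_xy, beacon_xy, measured_range):
        # Predicted range from the corrected odometry position.
        cx, cy = odom_xy[0] + self.dx, odom_xy[1] + self.dy
        pred = math.hypot(cx - beacon_xy[0], cy - beacon_xy[1])
        if pred == 0.0:
            return
        residual = measured_range - pred
        # Nudge the frame offset along the beacon-to-robot direction.
        self.dx += self.gain * residual * (cx - beacon_xy[0]) / pred
        self.dy += self.gain * residual * (cy - beacon_xy[1]) / pred

    def correct(self, odom_xy):
        return odom_xy[0] + self.dx, odom_xy[1] + self.dy
```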
- GNSS Global Navigation Satellite System
- GLONASS Global Orbiting Navigation Satellite System
- GPS Global Positioning System
- a European satellite system is on track to join the GNSS in the near future.
- these global systems comprise constellations of satellites orbiting the earth.
- Each satellite transmits signals encoded with information that allows receivers on earth to measure the time of arrival of the received signals relative to an arbitrary point in time. This relative time-of-arrival measurement may then be converted to a "pseudo-range".
- the position of a satellite receiver may be accurately estimated (to within 10 to 100 meters for most GNSS receivers) based on a sufficient number of pseudo-range measurements.
- GPS/GNSS includes Navstar GPS and its successors, i.e., differential GPS (DGPS),
- Navstar is a space-based satellite radio navigation system developed by the U.S. Department of Defense.
- Navstar GPS consists of three major segments: space, control, and end-user segments.
- the space segment consists of a constellation of satellites placed in six orbital planes above the Earth's surface. Normally, constellation coverage provides a GPS user with a minimum of five satellites in view from any point on earth at any one time.
- the satellite broadcasts a RF signal, which is modulated by a precise ranging signal and a coarse acquisition code ranging signal to provide navigation data.
- This navigation data, which is computed and controlled by the GPS control segment for all GPS satellites, includes the satellite's time, clock correction and ephemeris parameters, almanac and health status.
- the user segment is a collection of GPS receivers and their support equipment, such as antennas and processors, which allow users to receive the code and process information necessary to obtain position, velocity and timing measurements.
- GPS signals become weak, susceptible to multi-path interference, corrupted, or non-existent as a result of terrain or other obstructions. Such situations include urban canyons, indoor positions, underground positions, or areas where GPS signals are being jammed or subject to RF interference. Examples of operations in which a GPS signal is not accessible or substantially degraded include both civil and military applications, including, but not limited to: security, intelligence, emergency first-responder activities, and even the position of one's cellular phone.
- radio frequency (RF) beacons
- the position of a mobile node can be calculated using the known positions of multiple RF reference beacons (anchors) and measurements of the distances between the mobile node and the anchors.
- the anchor nodes can pinpoint the mobile node by geometrically forming four or more spheres centered on the anchor nodes, which intersect at a single point: the position of the mobile node.
- this technique has strict infrastructure requirements, requiring at least three anchor nodes for a 2D position and four anchor nodes for a 3D position.
- the technique is further complicated by being heavily dependent on relative node geometry and suffers from the same types of accuracy errors as GPS, due to RF propagation complexities.
- RSS received signal strength
- AoA signal angle of arrival
- ToA signal time of arrival
- TDoA signal time difference of arrival
- Ambiguities using trilateration can be eliminated by deploying a sufficient number of anchor nodes in a mobile sensor network, but this method incurs the increased infrastructure costs of having to deploy multiple anchor nodes.
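- For reference, a common least-squares formulation of trilateration from anchor positions and measured ranges is sketched below (a generic textbook approach, not the specific method claimed here):

```python
import numpy as np

def trilaterate_2d(anchors, ranges):
    """Least-squares 2D position from >= 3 anchor positions and ranges.

    Linearizes the circle equations (x - xi)^2 + (y - yi)^2 = ri^2 by
    subtracting the first anchor's equation, then solves the resulting
    overdetermined linear system.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = anchors[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - (x0 ** 2 + y0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
truth = np.array([3.0, 4.0])
ranges = [np.hypot(*(truth - a)) for a in anchors]
print(trilaterate_2d(anchors, ranges))  # ~ [3. 4.]
```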
- Inertial navigation units (INUs), consisting of accelerometers, gyroscopes and magnetometers, may be employed to track an individual node's position and orientation over time. While essentially an extremely precise application of dead reckoning, highly accurate INUs are typically expensive, bulky, heavy, power-intensive, and may place limitations on node mobility. INUs with lower size, weight, power and cost are typically also much less accurate. Such systems using only inertial navigation unit (INU) measurements have a divergence problem due to the accumulation of "drift" error - that is, cumulative dead-reckoning error, as discussed above - while systems based on inter-node ranging for sensor positioning suffer from flip and rotation ambiguities.
- the positioning-determining means may include GPS, dead reckoning systems, range-based determinations and map databases, but each is application-specific. Typically, one among these systems will serve as the primary navigation system while the remaining position-determining means are utilized to recalibrate cumulative errors in the primary system and fuse correction data to arrive at a more accurate position estimation.
- Each determining means has its own strengths and limitations, yet none identifies which of all available systems is optimized, at any particular instance, to determine the object's position.
- the prior art also lacks the ability to identify which of the available positioning systems is unreliable or has failed and which among these systems is producing, at a given instant in time, the most accurate position of an object. Moreover, the prior art does not approach position estimation from a multimodal approach, but rather attempts to "fuse" collected data to arrive at a better - but nonetheless, unimodal - estimation. What is needed is an Adaptive Positioning System that can analyze data from each of a plurality of positioning systems and determine - on an iterative basis - which systems are providing the most accurate and reliable positional data, to provide a precise, multimodal estimation of position across a wide span of environmental conditions.
- features of the adaptive localization system of the present invention include mobile localization (i.e., position determination) of an object using sparse time-of-flight data and dead reckoning.
- Mobile localization of an object positional frame of reference using sparse time-of-flight data and dead reckoning is accomplished, according to one embodiment of the present invention by creating a dead reckoning local frame of reference wherein the dead reckoning local frame of reference includes an estimation of localization (position) of the object with respect to known locations of one or more Ultra Wide Band (UWB) transceivers.
- a conversation is initiated with one or more of the Ultra Wide Band transceivers within the predetermined range.
- the object collects range data between the object and the one or more UWB transceivers.
- the localization system uses multiple conversations to establish accurate range and bearing information to update the object positional frame of reference based on the collected range data.
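- Such a ranging "conversation" is commonly implemented as two-way time-of-flight ranging; the sketch below shows the basic timestamp arithmetic under that assumption (the patent does not prescribe this exact exchange, and the figures are illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def two_way_range(t_tx, t_rx, reply_delay):
    """Estimate range from one two-way UWB exchange.

    t_tx:        local clock time the request pulse was sent
    t_rx:        local clock time the reply pulse was received
    reply_delay: the responder's known turnaround time

    Only the initiator's clock is used, so the two transceivers need
    no clock synchronization; averaging several conversations reduces
    measurement noise.
    """
    tof = ((t_rx - t_tx) - reply_delay) / 2.0
    return tof * C

# e.g., a 100 ns round trip minus a ~33.3 ns turnaround -> ~10 m
print(two_way_range(0.0, 100e-9, 33.3e-9))
```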
- a system for localization of an object having an object positional frame of reference using sparse time-of-flight data and dead reckoning includes one or more transmitters and a dead reckoning module, wherein the dead reckoning module creates a dead reckoning local frame of reference.
- a time-of-flight module, responsive to the object being within range of one or more transmitters based on the dead reckoning local frame of reference, establishes a conversation with a transmitter to collect range data between the object and the transmitter.
- a localization module - which is communicatively coupled to the dead reckoning module and the time-of-flight module - updates the object positional frame of reference based on the collected data.
- embodiments of the present invention discussed below provide a way to mitigate individual sensor limitations through use of an adaptive positioning methodology that evaluates the current effectiveness of each sensor's contribution to a positional estimation by iteratively comparing it with the other sensors currently used by the system.
- the APS creates a modular framework in which the sensor data is fully utilized when it is healthy, but that data is ignored or decremented in importance when it is not found to be accurate or reliable. Additionally, when a sensor's accuracy is found to be questionable, other sensors can be used to determine the relative health of that sensor by reconciling errors at each of a plurality of unimodal positional estimations.
- APS provides a means to intelligently fuse and filter disparate data to create a highly reliable, highly accurate positioning solution.
- the APS is unique because it is designed around the expectation of sensor failure.
- the APS is designed to use commonly-used sensors, such as dead reckoning, combined with sub-optimally-utilized sensors, such as GPS, and with unique sensors, such as Ultra-Wideband (UWB), providing critical redundancy in areas where other sensors fail.
- Each sensor system provides data to arrive at individual estimations of the position of a plurality of, in one embodiment, "particles".
- a "particle" is defined as a unimodal positional estimate from a single sensor or set of sensors. The system thereafter uses a multimodal estimator to identify a positional estimation based on the scatter-plot density of the particles.
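- One way to realize this scatter-plot-density selection is sketched below (the radius and scoring rule are assumptions for illustration, not the claimed estimator):

```python
import math

def densest_estimate(particles, radius=1.0):
    """Pick the estimate in the densest cluster of particles.

    particles: list of (x, y) unimodal position estimates.
    Returns the particle with the most neighbors within `radius`.
    """
    def neighbors(p):
        return sum(1 for q in particles
                   if p is not q and math.dist(p, q) <= radius)
    return max(particles, key=neighbors)

# Three sensors agree near (5, 5); a fourth estimate is far off.
estimates = [(5.0, 5.1), (4.9, 5.0), (5.1, 4.9), (40.0, -3.0)]
print(densest_estimate(estimates, radius=0.5))  # a particle near (5, 5)
```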
- the invention assesses the multimodal approach using techniques well known in the art, but it further applies historical data information to analyze which sensors may have failed in their positional estimations as well as when sensor data may be suspect, based on historical failures or degraded operations. Moreover, the present invention uses its understanding of historical sensor failure to modify an object's behavior to minimize degraded-sensor operations and to maximize the accuracy of positional estimation.
- UWB peer-to-peer in conjunction with the use of UWB depth-imaging radar.
- UWB peer-to-peer (P2P) ranging had been used for positioning, but had always suffered from the need for environmental infrastructure.
- APS employs a systematic combination of UWB P2P ranging and UWB radar ranging to provide a flexible positioning solution that does not depend on coverage of P2P modules within the environment. This is a key advantage for the long-term use of technologies like self-driving cars and related vehicles, especially during the pivotal time of early adoption where the cost and effort associated with introduction of UWB modules into the environment will mean that solutions not requiring complete coverage have a big advantage.
- An important aspect of the Adaptive Positioning System is the ability to use radar depth imagery to provide a temporal and spatial context for reasoning about position within the local frame of reference. This complements various other means of P2P ranging, such as the use of UWB modules to establish landmarks.
- P2P ranging methods include acoustical ranging, as well as a variety of other time-of-flight and/or time-of-arrival techniques. These P2P ranging methods are very useful for removing error in the positioning system, but they are only useful when they are available.
- the depth image from the UWB Radar Depth Imagery System (URDIS) can make use of almost any feature already existing in the environment. The function of this URDIS technology will be further discussed in the invention description section that follows.
- URDIS allows the APS system to reference organic, ubiquitous features in the world either as a priori contextual backdrops or as recent local-environment representations that can be used to reduce odometric drift and error until the next active landmark (i.e., a UWB ranging module) is encountered.
- Figure 1 shows a high-level block diagram illustrating components of an Adaptive Positioning System according to one embodiment of the present invention
- Figure 2 illustrates one embodiment of a multimodal positional estimation from an Adaptive Positioning System;
- Figure 3 shows a high-level block diagram of a multimodal estimator and a unimodal estimator as applied within an Adaptive Positioning System
- Figures 4A and 4B are depictions of multimodal positional estimation including the recognition - and removal from consideration - of a failed sensor, according to one embodiment of the APS;
- Figure 5 presents a high-level flowchart for a methodology according to one embodiment of the present invention to combine unimodal and multimodal estimation to determine an object's position;
- Figure 6 is a flowchart of a methodology, according to one embodiment of the present invention, for predicting the state of an object using a unimodal estimator
- Figure 7 provides a basic graphical representation of a multimodal approach to adaptive positioning according to one embodiment of the present invention.
- Figure 8 is a flowchart for one multimodal embodiment for positional estimation
- Figure 9 is a top-view illustration of an overlay of a mission objective path with historical sensor failure data used to revise and optimize movement of an object to minimize sensor failure;
- Figure 10 is a flowchart of another method embodiment for multimodal adaptive positioning;
- Figure 11 is a flowchart of one method embodiment for modifying an object's behavior based on historical sensor failure data;
- Figure 12 is a graphical depiction of combined dead reckoning and high-precision time-of-flight ranging radio to identify an optimized path;
- Figure 13 is a high level block diagram for a system for mobile localization of an object having an object positional frame of reference using sparse time-of-flight data and dead reckoning according to one embodiment of the present invention
- Figure 14 is a flowchart method embodiment according to the present invention for mobile localization of an object having an object positional frame of reference using sparse time-of-flight data and dead reckoning;
- Figure 15 is a representation of a computing environment suitable for implementation of the Adaptive Positioning System of the present invention.
- An Adaptive Positioning System synthesizes a plurality of positioning systems by employing a variety of different, complementary methods and sensor types to estimate the position of the object while at the same time assessing the health/performance of each of the various sensors providing positioning data. All positioning sensors have particular failure modes or certain inherent limitations which render their determination of a particular position incorrect. However, these failure modes and limitations can be neither completely mitigated nor predicted.
- the various embodiments of the present invention provide a way to mitigate sensor failure through use of an adaptive positioning method that iteratively evaluates the current effectiveness of each sensor/technique by comparing its contribution to those of other sensors/techniques currently available to the system.
- the APS of the present invention creates a modular framework in which the sensor data from each sensor system can, in real time, be fully utilized when it is healthy, but also ignored or decremented when it is found to be inaccurate or unreliable. Sensors other than the sensor being examined can be used to determine the relative health of the sensor in question and to reconcile errors. For example, obscurants in the air such as dust, snow, sand, fog and the like make the positioning determination of an optical-based sensor suspect; in such a case that sensor's data should be discarded or used with caution.
- Ultra Wide Band (UWB) ranging and radar are unaffected by obscurants, though each may experience interference from strong electromagnetic signals in the environment.
- the Adaptive Positioning System of the present invention provides a means to intelligently fuse and filter this disparate data to create a highly reliable, highly accurate positioning solution.
- the APS is designed around the expectation of sensor failure.
- the APS is designed to varyingly combine commonly-used positional sensor estimations, such as those derived from dead reckoning, GPS and other unique sensors such as Ultra-Wideband (UWB) sensors - all of which provide critical redundancy in areas where unimodal-only systems fail - to arrive at a plurality of unimodal positional estimations for an object.
- Each of these estimations feeds into a multimodal estimator that analyzes the relative density of unimodal estimations to arrive at a likely position of the object.
- the process is iterative, and in each instant of time not only may each unimodal estimate vary, but the multimodal estimation may vary, as well.
- any reference to "one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment.
- the appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
- a normal distribution is unimodal.
- a unimodal positional estimate is a single or unified estimation of the position of an object. Sensor data from a plurality of sensors may be fused together to arrive at a single, unimodal estimation.
- positional estimations using a multimodal approach receive inputs from a plurality of modalities, which increases usability.
- the weakness or failure of one modality is offset by the strengths of another.
- the present invention relates in general to positional estimation and more particularly to estimation of the position of an object.
- the object is a device, robot, or mobile device.
- a typical task is to identify specific objects in an image and to determine each object's position and orientation relative to some coordinate system. This information can then be used, for example, to allow a robot to manipulate an object or to avoid moving into the object.
- the combination of position and orientation is referred to as the "pose" of an object, even though this concept is sometimes used only to describe the orientation. Exterior orientation and translation are also used as synonyms for pose.
- the specific task of determining the pose of an object in an image (or stereo images, or an image sequence) is referred to as pose estimation.
- the pose estimation problem can be solved in different ways depending on the image sensor configuration, and choice of methodology.
- an "algorithm" is a self-consistent sequence of operations or similar processing leading to a desired result.
- algorithms and operations involve the manipulation of information elements. Typically, but not necessarily, such elements may take the form of electrical, magnetic, or optical signals capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by a machine.
- processing may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information.
- a component of the present invention is implemented as software
- the component can be implemented as a script, as a standalone program, as part of a larger program, as a plurality of separate scripts and/or programs, as a statically or dynamically linked library, as a kernel loadable module, as a device driver, and/or in every and any other way known now or in the future to those of skill in the art of computer programming.
- the present invention is in no way limited to implementation in any specific programming language, or for any specific operating system or environment. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention.
- One aspect of the present invention is to enhance and optimize the ability to estimate an object's position by identifying weaknesses or failures of individual sensors and sensor systems while leveraging the position-determining capabilities of other sensor systems.
- the limitations of GPS-derived positioning in urban areas or outdoor areas with similar line-of-sight limitations can be offset by range information from other sensors (e.g., video, radar, sonar, laser data, etc.).
- laser sensing - via LIDAR - can be used to fix positions using prominent and persistent topographic features, enabling APS to validate other systems' inputs and to enhance each system's accuracy. Even so, LIDAR is limited by the requirement for topography or other environmental features prominent enough and identifiable enough from which to fix an accurate LIDAR position. Thus other sensor systems are likewise incorporated into APS.
- APS can enhance sensor data elsewhere within the system and can also use the enhanced position estimate to identify real-time changes in the environment. APS can then adjust according to these real-time changes, to improve perception and to provide reactive behaviors which are sensitive to these dynamic environments.
- dead reckoning uses a combination of components to track an object's position. The position will eventually degrade, however, over long distances, as a result of cumulative errors inherent in using dead-reckoning methodology.
- errors in dead reckoning can be mitigated, somewhat, by using, among other things, inertial sensors, in addition to traditional compass data.
- Dead reckoning thus complements GPS and other positioning capabilities, enhancing the overall accuracy of APS.
- enhanced dead reckoning can improve detection and mapping performance and increase the overall reliability of the system.
- ultra wide band (UWB) radio frequency identification (RFID) tag systems comprise a reader with an antenna, a transmitter, and software such as a driver and middleware.
- One function of the UWB RFID system is to retrieve state and positional information (ID) generated by each tag (also known as a transponder).
- tags are usually affixed to objects so that it becomes possible to locate where the goods are without a direct line-of-sight given the low frequency nature of their transmission.
- a tag can include additional information other than the ID.
- an RFID and/or UWB tag can not only be associated with a piece of stationary infrastructure with a known, precise position, but can also provide active relative positioning between movable objects. For example, even if two or more tags are unaware of their precise positions, they can provide accurate relative position.
- the tag can be connected to a centralized tracking system to convey interaction data. As a mobile object interacts with the tag of a known position, the variances in the object's positional data can be refined.
- a tag can convey not only relative position between objects but relative motion between objects as well. Such tags possess low-detectability and are not limited to line of sight nor are they vulnerable to jamming.
- a tag and tracking system can permit user/tag interaction at ranges from 200 feet up to two miles.
- tags offer relative position accuracy of approximately +/-12 cm for each interactive object outfitted with a tag.
- the term "object" is not intended to be limiting in any way. While the present invention is described by way of examples in which objects may be represented by vehicles or cellular telephones, an object is to be interpreted as an arbitrary entity that can implement the inventive concepts presented herein. For example, an object can be a robot, vehicle, aircraft, ship, bicycle, or other device or entity that moves in relation to another.
- the collaboration and communication described herein can involve multiple modalities of communication across a plurality of mediums.
- the active position tags of the present invention can also provide range and bearing information. Using triangulation and trilateration between tags, a route can be established using a series of virtual waypoints. Tags can also be used to attract other objects or repulse objects creating a buffer zone. For example, a person wearing a tag can create a 4-foot buffer zone which will result in objects not entering the zone to protect the individual. Similarly, a series of tags can be used to line a ditch or similar hazard to ensure that the object will not enter a certain region.
- multiple ranges between the active position tags can be used to create a mesh network of peer-to-peer (P2P) positioning where each element can contribute to the framework.
- Each module or object can vote as to its own position and subsequently the relative position of its nearest neighbors.
- the invention provides a means of supplementing the active tags with ranges to other landmarks. Thus when other active modules or objects are not present, not visible or not available, other sensors/modalities of the APS come into play to complement the mesh network.
- One novel aspect of the invention is the use of the URDIS modality as part of the dynamic positioning process. Almost every environment has some features that are uniquely identifiable with the URDIS sensor and can therefore be used effectively as a reference while the mobile system moves through distance and time.
- an a priori characterization of the environment by the URDIS sensor provides a contextual backdrop for positioning
- an ongoing local environment representation uses the last N depth scans from the URDIS to create a brief temporal memory of the environment that can be used for tracking relative motion
- the URDIS data can be used for identifying what is changing in the environment which allows those changes to be either added to the local environment map for future positioning purposes or discarded if they continue to move (in which case they are not useful for positioning).
- the unique time-coded ultra-wideband radio frequency pulse system of the present invention allows a single module to use multiple antennas to send and receive pulses as they reflect off of the local environment. By using the time coding in each pulse it is possible to differentiate the multi-path reflections. Using several antennae allows the UWB radio to be used as a depth imaging sensor because the differences in the time of flight observed from one antenna to the next allows the system to calculate the shape of the object causing the reflection based on the known positions of the various antennae.
- the UWB depth radar provides a means to find invariant features for positioning
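- The geometry described above can be illustrated as follows: each transmit/receive antenna pair's measured time of flight constrains the reflecting point to an ellipse, and intersecting the ellipses localizes the reflector. The sketch below uses a coarse grid search for clarity (an illustration with assumed antenna positions, not the URDIS implementation):

```python
import math

def locate_reflector(antenna_pairs, path_lengths,
                     search=((-10.0, 10.0), (0.0, 20.0)), step=0.05):
    """Grid search for the reflecting point P that best explains the
    measured bistatic path lengths |Tx - P| + |P - Rx| for each
    transmit/receive antenna pair.  Restricting the search to y >= 0
    discards the mirror-image solution behind the antenna array."""
    def residual(px, py):
        return sum((math.dist(tx, (px, py)) + math.dist((px, py), rx) - L) ** 2
                   for (tx, rx), L in zip(antenna_pairs, path_lengths))

    (x0, x1), (y0, y1) = search
    best = None
    px = x0
    while px <= x1:
        py = y0
        while py <= y1:
            e = residual(px, py)
            if best is None or e < best[0]:
                best = (e, px, py)
            py += step
        px += step
    return best[1], best[2]

# Three antennas 0.5 m apart on one module; a reflector at (2, 3).
ants = [(-0.5, 0.0), (0.0, 0.0), (0.5, 0.0)]
target = (2.0, 3.0)
pairs = [(ants[0], ants[1]), (ants[1], ants[2]), (ants[0], ants[2])]
lengths = [math.dist(t, target) + math.dist(target, r) for t, r in pairs]
print(locate_reflector(pairs, lengths))  # ~ (2.0, 3.0)
```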
- the present invention iteratively adapts and determines an object's position by comparing and reconciling the positional estimations of a plurality of sensors.
- FIG. 1 presents a high level block diagram of an Adaptive Positioning System according to one embodiment of the present invention.
- the APS 100 includes a multimodal positional state estimator 120 which receives positional estimations from a plurality of positional sensors.
- different positional sensors include: 1) range-based estimation 130 such as GPS or UWB technology, as well as combinations of peer-to-peer (P2P) active ranging systems such as P2P ultra-wideband radios, P2P ultra-low power Bluetooth, P2P acoustic ranging, etc.; 2) dead reckoning systems 160 using some combination of wheel encoders, inertial sensors, compasses and tilt sensors; 3) direct relative frame measurement 170 and optical feature-based positioning using some combination of lasers, cameras, stereo vision systems, multi-camera systems and IR/thermal imaging systems; 4) bearing-based positional estimations 180 such as trilateration including optical and camera systems; 5) range and bearing estimations 140 such as LiDAR and SLAM; and 6) inertial sensing systems 150 using an inertial measuring unit.
- Each of these, and other, positional sensor systems creates, and provides to the multimodal state estimator 120, a unimodal estimation 190 of an object's position.
- the APS concurrently maintains each of a plurality of unimodal position estimations to enable the APS to determine which, if any, of the sensor system estimations have failed.
- Figure 2 is a graphic illustration of the positional estimation processes of the APS
- Figure 2 represents a positional estimation at an instant of time.
- the present invention's estimation of an object's position is iterative and is continually updated based on determinations by each of a plurality of positional sensors.
- Figure 2 represents an APS having four positional sensors.
- Other versions may have more or fewer means to individually determine an object's position.
- Each of the sensors shown in Figure 2 is represented by a unique geometric shape.
- a GPS sensor system may be represented by a circle while a dead reckoning system a rectangle.
- the APS of the present invention recognizes that an object's actual position 210 is almost invariably different than the position estimated by one or more positional sensors.
- the object's actual position 210 is represented by a small dot surrounded by a triangle.
- the remaining geometric figures represent each sensor's unimodal estimation of the object's position.
- a first sensor 220, represented by a circle, estimates the object's position 225 slightly above and to the right of the actual position 210.
- a second sensor 230 shown as an octagon estimates the position slightly to the right of the actual position 210.
- Sensor number three 250, shown as a hexagon, estimates the position of the object 255 to the left of its actual position 210, while the estimate 245 of the last sensor 240 is displaced to the right.
- the present invention maintains each unimodal estimation and thereafter analyzes the individual estimations in a multimodal field to ascertain whether one or more of the sensors has failed. Upon detection of a failure, that particular estimation is disregarded or degraded.
- One aspect of the present invention is that the multimodal state estimation expects that the unimodal estimation derived from one or more sensors will fail. For clarity, sensor failure occurs when the difference between a sensor's position estimate and the other sensors' estimates is greater than a predefined deviation limit or covariance. With each estimation by the plurality of sensors, a degree of certainty is determined.
- If a particular sensor's estimation is, for example, two standard deviations from the expected position estimate, then that sensor may be considered to have failed and its contribution to the positional estimate removed.
- the deviation level used to establish sensor failure may vary and indeed may dynamically vary based on conditions.
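- A simple version of this deviation test is sketched below (the consensus statistic and the k = 2 threshold are assumptions; as noted above, the deviation level may vary dynamically):

```python
import math
import statistics

def failed_sensors(estimates, k=2.0):
    """Flag sensors whose estimate sits more than k standard deviations
    from the consensus (median) of the remaining sensors' estimates.

    estimates: dict of sensor name -> (x, y) unimodal position estimate.
    """
    flagged = []
    for name, (x, y) in estimates.items():
        others = [p for n, p in estimates.items() if n != name]
        cx = statistics.median(p[0] for p in others)
        cy = statistics.median(p[1] for p in others)
        spread = statistics.pstdev(
            math.hypot(p[0] - cx, p[1] - cy) for p in others)
        if spread > 0 and math.hypot(x - cx, y - cy) > k * spread:
            flagged.append(name)
    return flagged

readings = {"gps": (5.0, 5.2), "dead_reckoning": (5.1, 5.0),
            "uwb": (4.9, 5.1), "lidar": (12.0, 1.0)}
print(failed_sensors(readings))  # -> ['lidar']
```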
- the present invention identifies, from the various positional sensors, different spatial distributions of positional estimates.
- the present invention uses a non- Gaussian state representation.
- in a Gaussian state, or a single state with sensor fusion, the uncertainty with respect to the object's position is uniformly distributed around where the system thinks the object is located.
- a Gaussian state (also referred to as a normal or unimodal state distribution) merges the individual positional determinations to arrive at a combined or best guess of where the object is located.
- the present invention by contrast merges unimodal or Gaussian state estimation with a non-Gaussian state based on continuous multimodal, discrete binary (unimodal) positions that are compared against each other yet nonetheless remain separate.
- the present invention outputs a certainty value based on its ability to reconcile multiple modalities.
- This certainty value can also be used as an input to modify the behavior of an object, such as a robot, tasked to accomplish a specific goal.
- the object's/robot's behavior is modified in light of the new information as it works to accomplish the goal.
- when a position estimation has a high degree of uncertainty, the system directing the behavior of the object can recognize this and take specific behavioral action.
- Such modifications to the behavior of an object can result in the object slowing down, turning around, backing up, traveling in concentric circles, leaving an area where uncertainty is high or even remaining still and asking for help.
- each of these unimodal estimations, each of which is arrived at using a fusion of data collected by a plurality of sensors, is treated as a particle in a multimodal positional estimation state.
- a particle filter is then applied to determine a multimodal estimation of the position of the object.
- Particle filters work by generating a plurality of hypotheses.
- each unimodal positional estimate is a hypothesis as to the position of the object. Hypotheses can be randomly generated and have a random distribution, but we have from our unimodal estimations (with some uncertainty) the position of the object on a map.
- Each particle or estimation can be given a weight or fitness score as to how likely it is indeed the position of our object. Of the plurality of particles (unimodal estimations), some are more likely than others to be accurate estimations of the position of the object. The unlikely particles or estimations are not of much use. New particles or estimations are generated, but this time they are not random; they are based on the existing particles or estimations. Thus the particles are resampled in order to evolve the most fit particles while still maintaining uncertainty by letting a few less fit particles pass through every iteration of the filter.
- the new sample or generation of particles is based on a model of where we think the object is located or has moved. Again the weights or fitness of each particle (estimation) is updated and resampling occurs. The particles are again propagated in time using the model and the process repeats.
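- The predict / weight / resample cycle just described can be sketched as a generic bootstrap particle filter (the noise figure and observation model below are illustrative assumptions, not the claimed estimator):

```python
import math
import random

def particle_filter_step(particles, move, measure, noise=0.1):
    """One predict / weight / resample cycle of a basic particle filter.

    particles: list of (x, y) position hypotheses
    move:      (dx, dy) motion predicted by the model since the last step
    measure:   function mapping (x, y) -> likelihood of the new reading
    """
    # Predict: propagate every hypothesis through the motion model,
    # with noise so the cloud keeps representing uncertainty.
    particles = [(x + move[0] + random.gauss(0, noise),
                  y + move[1] + random.gauss(0, noise))
                 for x, y in particles]
    # Weight: score each hypothesis against the new observation.
    weights = [measure(p) for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: fit particles tend to be duplicated and unfit ones die
    # off, yet a few less-fit particles pass through each iteration.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, weights

# Illustrative observation model: a 5 m range reading from a beacon at
# the origin (the beacon and its noise figure are hypothetical).
measure = lambda p: math.exp(-(math.hypot(*p) - 5.0) ** 2 / (2 * 0.5 ** 2))
cloud = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(500)]
cloud, _ = particle_filter_step(cloud, (0.0, 0.0), measure)
```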
- One embodiment of the present invention uses each positional state with a fitness score as a particle and thereafter applies particle filters. An algorithm places a score as to the fitness of each sensor's ability to estimate the position of the object by way of a particle.
- a state of a particle represents the pose (x, y, z, roll, pitch, yaw) of the object.
- Particle filters will spawn numerous (hundreds or thousands of) particles, each of which individually estimates its new state when a new sensor reading is observed. With the information about new states, each particle is assigned a weight by a specific cost criterion for that sensor, and only the fittest particles survive an iteration.
- This approach allows multimodal state estimation where (as an example) 80% of amassed particles will contribute to the most certain position of the object while others can be at a different position. Hence, the density of these particles governs the certainty of the state the robot is in using a particle filter approach.
- Particle filter methodology is often used to solve nonlinear filtering problems arising in signal processing and Bayesian statistical inference.
- the filtering problem consists of estimating the internal states in dynamical systems when partial observations are made and random perturbations are present in the sensors as well as in the dynamical system.
- the objective is to compute the conditional probability (a.k.a. posterior distributions) of the states of some Markov process, given some noisy and partial observations.
- Particle filtering methodology uses a genetic type mutation-selection sampling approach, with a set of particles (also called individuals, or samples) to represent the posterior distribution of some stochastic process given some noisy and/or partial observations.
- the state-space model can be nonlinear and the initial state and noise distributions can take any form required.
- Particle filter techniques provide a well-established methodology for generating samples from the required distribution without requiring assumptions about the state-space model or the state distributions.
- A Rao-Blackwellized particle filter (RBPF) is a specific type of particle filter algorithm that allows integration of unimodal and multimodal type systems.
- a unique state set is defined for the particles of the filter (sensor failure modes are a state of the particle, so that the particle can predict failure modes and also drive down the weight for the particle to survive).
- RBPF is used for estimating object state(s) using one type of sensor input, and can be used, as in the case of the present invention, with multiple types of sensors feeding into the same estimation system for tighter coupling and more robust failure detection of any particular sensor.
- FIG. 3 shows a high level architecture of one embodiment of the present invention.
- the invention utilizes, in one embodiment, a distributed positioning setup in which a multimodal module 330 receives inputs and updates from an onboard unimodal estimator 320.
- the unimodal estimator 320 receives separately positional estimations from each of a plurality of sensors 310.
- the multimodal estimator 330 can provide corrections to processing ongoing in the unimodal estimator 320.
- the multimodal estimator 330 can convey to the unimodal estimator such information as that RF reception generally appears degraded. Accordingly, the unimodal estimator may devalue or degrade the positional estimation of UWB or other sensors that are similar in operation to the GPS sensor. This data is then used to update a sensor's probability of failure or degraded operation (defined as a sensor "heatmap") from prior information for future position evaluations. Thus, each particle can use noisy sensor data to estimate its location using history from the sensor heatmap.
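- A minimal sketch of such a sensor heatmap follows (the grid cell size, counting scheme and Laplace smoothing are all assumptions for illustration):

```python
class SensorHeatmap:
    """Grid of per-sensor failure probabilities over the workspace,
    updated from observed failure/agreement events."""

    def __init__(self, cell_m=10.0):
        self.cell_m = cell_m
        self.counts = {}  # (sensor, cell) -> (failures, observations)

    def _cell(self, x, y):
        return (int(x // self.cell_m), int(y // self.cell_m))

    def record(self, sensor, x, y, failed):
        key = (sensor, self._cell(x, y))
        f, n = self.counts.get(key, (0, 0))
        self.counts[key] = (f + int(failed), n + 1)

    def failure_prob(self, sensor, x, y):
        f, n = self.counts.get((sensor, self._cell(x, y)), (0, 0))
        return (f + 1) / (n + 2)  # Laplace-smoothed failure probability

heat = SensorHeatmap()
heat.record("gps", 120.0, 40.0, failed=True)   # e.g., an urban canyon
heat.record("gps", 120.0, 40.0, failed=False)
print(heat.failure_prob("gps", 125.0, 45.0))   # same 10 m cell -> 0.5
```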
- the present invention also uses range measurements to both moving and stationary positioning landmarks, as long as the position of the landmark is known.
- One element of the invention is that even when no fixed landmark is within view (or perhaps not even present at all), the presence of moving landmarks (e.g. other cars and trucks, other robots, other mobile handheld devices) can serve to provide positioning references. Each of these can contribute to a coherent position estimate for the group.
- each module/entity is essentially given a vote on its own position, and each module can also contribute to a collaborative assessment of the validity of other modules' position estimates.
- the APS dynamically balances dependence on active range modules (i.e., UWB active ranging tags) with the use of passive landmarks (i.e., RFID tags) and organic features (i.e., an actual natural landmark or obstacle) that can be perceived through use of LiDAR, cameras, radar, etc.
- APS can use all of these or any combination and provides a systematic means for combining the ranges to these various landmarks.
- Each category of landmark has a filtering mechanism specific to that category. After the filtering is finished, the value of each range estimate can be determined by comparing multiple estimates. There are multiple steps to ascertaining the value of each estimate: a) comparison to the previous N recent range readings from the particular sensor (once adjusted as per recent motion); b) comparison to the previous M recent range readings from the particular sensor category (once adjusted as per recent motion); c) comparison between disparate landmark categories.
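- Step (a) above might look like the following sketch (the tolerance and motion-adjustment convention are assumptions; steps (b) and (c) repeat the same comparison per sensor category and across categories):

```python
def range_is_plausible(new_range, recent, motion_deltas, tol=0.5):
    """Validate a new landmark range against the last N readings from
    the same sensor, each adjusted by the range change implied by the
    platform's dead-reckoned motion since that reading was taken.

    recent:        last N range readings, oldest first
    motion_deltas: signed range change expected from recent motion,
                   one entry per reading in `recent`
    """
    predicted = [r + d for r, d in zip(recent, motion_deltas)]
    if not predicted:
        return True  # nothing to compare against yet
    return min(abs(new_range - p) for p in predicted) <= tol

# The platform closed 1.2 m toward the landmark since the last reading:
print(range_is_plausible(8.8, recent=[10.0], motion_deltas=[-1.2]))   # True
print(range_is_plausible(15.0, recent=[10.0], motion_deltas=[-1.2]))  # False
```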
- the current invention provides a standardized means for incorporating all this disparate information without needing to modify the algorithm.
- the operational benefit is that a single system can utilize a spectrum of different landmarks depending on the environment, situation and the type of vehicle. Another advantage is that environmental obscurants or obstacles which interfere with one type of landmark (i.e. visual) will not interfere with others (UWB tag).
- Another scheme uses range-based measurements to fixed landmarks of known position using 2D scans, but preferably will involve depth imagery, as it is much more useful for calculating position and especially orientation.
- Input for this category can come from 2D or 3D RADAR depth imagery, LiDAR 2D or 3D scans, 2D or 3D stereo vision data and any other means that provides a 2D or 3D depth image that can be used for positioning.
- UWB depth imagery represents an important component to the APS and is innovative as a component to positioning in general. All of the depth imagery is filtered against previous depth imagery just as was the case for the range- based positioning (discussed in the previous section).
- each sensor modality has an appropriate filtering mechanism tailored to that modality (i.e. LiDAR, UWB Radar, stereo vision, etc.).
- a map matching algorithm is used to match the current scan into a semi-permanent 3D model of the local environment.
- the output is also fed into one or more separate map-matching modules that can then use the enhanced position estimate to detect change, based on contrasting the new scans with enhanced positions against the existing map.
- This is essentially a form of rolling spatial memory used to track motion and orientation on the fly, identify objects moving in the environment and calculate the validity of each new depth image.
- the validity of each new depth image can be determined in a number of ways: a) comparison to the previous N recent depth scans from the particular sensor, once adjusted as per recent motion; b) comparison to the previous M recent depth scans from the particular sensor category (e.g. LiDAR, stereo vision, UWB radar depth imagery sensor), once adjusted as per recent motion; c) comparison of the depth image to other modalities.
- the current invention provides a standardized means for incorporating all this disparate depth imagery without needing to modify the APS algorithm.
- the operational benefit is that a single system can utilize a spectrum of different landmarks depending on the environment, situation and the type of vehicle. Another advantage is that environmental obscurants or obstacles which interfere with the use of one depth scan (i.e. vegetation that obstructs a LiDAR) will not interfere with others (i.e. UWB radar depth imagery system can see through vegetation).
- APS uses map matching over multiple time steps to calculate changes in 2D or 3D motion and orientation in reference to the persistent 3D model. Also, in the face of positioning uncertainty, APS can evaluate positioning hypotheses within the temporal and spatial context of the ongoing 3D depth image. Thus, APS provides a way to combine depth imagery from different sensor modalities into a single approach. Just as multiple sensor modalities can produce peer to peer ranges within the APS (see previous section), it is also possible for multimodal 3D depth images to be incorporated by the APS system.
- the multimodal estimator can use range and bearing measurements of a LiDAR system simultaneously with landmark positional estimates from SLAM and from the UWB radar depth imagery.
- fiducials are defined as artificial targets selected and used based on their ability to be easily detected within the depth imagery
- fiducials located in the environment can be used to further feature identification.
- active tags provide a guaranteed means to do peer to peer ranging (see the previous section)
- the fiducials provide a means to facilitate motion estimation and positioning based on the depth imagery.
- Dead reckoning is a module within the APS scheme of algorithms. This module is distinct from the map-matching and range-based modules, but within the APS framework it is able to use the position and motion estimates output by the other modules in order to identify errors and improve accuracy.
- the dead reckoning module fuses and filters wheel encoders, inertial data and compass information to produce an estimate of motion and position.
- the dead-reckoning module's estimate of motion is usually excellent, and it updates more quickly and with greater computational efficiency than any other module.
- the position estimate of the dead reckoning module, if used independently of the other modules, will drift indefinitely.
- APS uses the motion output of the dead-reckoning module to fill in the temporal and spatial "gaps" which may occur between identified features and landmarks. It also may fill gaps between successful depth image scan matches. This need may occur when features are not available, such as in a wide open field.
- the output of the dead reckoning module can be accessed by the algorithms for trilateration and map-matching described above. This allows those other modules to recognize certain kinds of sensor failure or erroneous position calculations that can sometimes occur if a landmark is replaced or misplaced or if multiple areas within the environment have similar depth characteristics that introduce uncertainty. Additional schemes can be developed based on sensor capability and characteristics to identify sensor failure and to recalibrate or optimize existing sensor estimations.
- the multimodal estimation of the present invention assumes each sensor will, in varying conditions, fail. No sensor provides perfect information. Each sensor's output is an estimation of an object's actual position, and the accuracy of that estimation varies. This is distinct from modeling the noise or inaccuracy of the sensor; rather, it concerns cases in which the position estimate is simply incorrect. Thus there are conditions in which the sensor has failed and is providing an incorrect position even though there is no such indication. A question thus becomes: when has an estimation, which appears to be sound, failed?
- the multimodal approach of the present invention is a sustained belief that there are several acceptable positions at any one instant of time. The present invention maintains all positional beliefs until evidence eliminates one or more beliefs. Moreover, an iterative process continually reassesses each estimation and the correlation of each estimation to narrow down the options of position using, in one embodiment, historical data related to sensor failure or degradation.
- Figures 4A and 4B provide a simple rendition of the multimodal estimator's ability to continually assess and resolve sensor failure.
- Figure 4 shows the historical path of an object 410.
- the object's position is identified at four discrete positions by a plurality of sensors. Initially, the position of the object 410 is estimated by the unimodal sensors to be within the depicted circle 415. Along the path exist landmarks or other features that enable one or more sensors to determine the object's position. In this case, assume that four transmitters 420, 440, 445, 460 are positioned with known positions along the proposed path. At each of the four positions indicated in Figure 4, the object receives range and bearing information from one or more of these transmitters.
- the towers are objects which can be detected using an optical or laser sensor.
- the position estimation determined by the two towers is correlated with dead reckoning data.
- a similar occurrence exists as the object 410 moves from the second position 425 to the third position 435.
- While the positional estimations are compared here at discrete positions, in operation the comparison of the various sensor estimations is iterative and continual.
- the system of the present invention using historical data, can more favorably consider the dead reckoning data and that from the upper tower 445 rather than the new information from the lower tower 460.
- the unimodal observation based on data from the lower tower 460 is considered to have failed and is disregarded.
- the present invention can assess historical data that may indicate that when the object is in its current position data from the lower tower 460 is unreliable.
- the APS system resolves that the upper final position 470 is a more accurate representation of the object's actual position.
- the historical heatmap is updated based on this failed sensor.
- Historical analysis of positional estimations can assist in the determination of whether a sensor or sensors have failed or are likely to fail if the object is moving toward a particular position. Turning back to the last example, if no historical data had been available, each alternative final position 470, 465 would be equally likely.
- the present invention assesses the probability that a particular sensor will fail, as well as the probability of sensor failure given that another sensor has failed in the past.
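- Both probabilities can be estimated from a simple failure log, as in the sketch below (the log format is an assumption for illustration):

```python
def failure_probabilities(history):
    """Estimate P(sensor fails) and P(sensor B fails | sensor A failed)
    from a log of per-iteration failure records.

    history: list of sets, each holding the sensors flagged as failed
             during one positioning iteration.
    """
    if not history:
        return {}, {}
    n = len(history)
    sensors = set().union(*history)
    marginal = {s: sum(s in h for h in history) / n for s in sensors}
    conditional = {}
    for a in sensors:
        a_count = sum(a in h for h in history)
        for b in sensors:
            if a != b and a_count:
                both = sum(a in h and b in h for h in history)
                conditional[(b, a)] = both / a_count  # P(b fails | a failed)
    return marginal, conditional

log = [{"gps"}, {"gps", "uwb"}, set(), {"gps"}, set()]
m, c = failure_probabilities(log)
print(m["gps"])           # 0.6
print(c[("uwb", "gps")])  # ~0.33
```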
- Figure 5 presents a high level flow chart for a methodology according to one embodiment of the present invention to combine unimodal and multimodal estimation to determine an object's position.
- the Adaptive Positioning System of the present invention begins by receiving 510 sensor data from a plurality of sensors. Using that information, a unimodal estimation 520 of the object's state is created, along with a degree of uncertainty based on sensor measurements and historical data. For example, each type of sensor may have differing levels of accuracy or differing ability to represent an object's position. In addition, the certainty that a sensor will provide optimal data may be further reduced based on the object's current position, such as a GPS system in an urban environment.
- the present invention thereafter evaluates 530 the fitness of each state estimation using multimodal techniques.
- One unique aspect of the present invention is to combine unimodal and multimodal positional estimation to provide an accurate and reliable positional estimation.
- This determination 540 of the most likely positional state of the object is gained by considering the fitness or health of each individual state estimation.
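A minimal sketch of this flow may help fix ideas. Everything concrete here is an assumption: the sensor names, the per-sensor accuracies, and the Gaussian model used to score how much support each unimodal estimate receives from the others.

```python
# Hypothetical sketch of the Figure 5 flow: unimodal estimates with
# per-sensor uncertainty are scored multimodally and the fittest kept.
import math

def unimodal_estimates(readings, accuracy):
    """readings: {sensor: (x, y)}; accuracy: {sensor: sigma in meters}."""
    return [(pos, accuracy[s]) for s, pos in readings.items()]

def multimodal_fitness(est, all_est):
    """Fitness of one estimate = summed Gaussian support from all estimates."""
    (x, y), _ = est
    score = 0.0
    for (ox, oy), sigma in all_est:
        d2 = (x - ox) ** 2 + (y - oy) ** 2
        score += math.exp(-d2 / (2.0 * sigma ** 2))
    return score

readings = {"gps": (10.2, 5.1), "dead_reckoning": (10.0, 5.0), "uwb": (14.9, 8.3)}
accuracy = {"gps": 3.0, "dead_reckoning": 0.5, "uwb": 0.2}
ests = unimodal_estimates(readings, accuracy)
best = max(ests, key=lambda e: multimodal_fitness(e, ests))
print("most likely position:", best[0])   # the densely supported (10.0, 5.0)
```

The estimate with the densest support wins; here the tightly grouped dead reckoning and GPS estimates outvote the outlying UWB reading.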
- each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations can, in one embodiment, be implemented by computer program instructions.
- These computer program instructions may be loaded onto a computer or other programmable apparatus to produce a machine such that the instructions that execute on the computer or other programmable apparatus create means for implementing the functions specified in the flowchart block or blocks.
- These computer program instructions may also be stored in a non-transitory computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the function specified in the flowchart block or blocks.
- the computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed in the computer or on the other programmable apparatus to produce a computer implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
- blocks of the flowchart illustrations support combinations of means for performing the specified functions and combinations of steps for performing the specified functions. It will also be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special purpose hardware and computer instructions. As suggested above, each sensor provides a prediction as to the position of the object.
- Figure 6 is a flowchart of a methodology, according to one embodiment of the present invention, for predicting the state of an object using a unimodal estimator. As before, the process begins with the object receiving 610 sensor data. A plurality of positional sensors associated with the object seek and gain information to ascertain the object's position.
- the APS predicts 620 where the object is likely to be located, or its state, using measurement models for each sensor and historical data. For example, a dead reckoning sensor system models the object's movement based on speed and time. If the object is, at its last observation, moving at 1 m/s and the new observation is one second later, the APS would predict that the object has moved 1 meter.
- the system estimates 630 the position and state of the object as well as any uncertainty that may exist.
- the APS expects the object to have moved 1 meter, but the new observation estimates that the vehicle has moved 1.1 meters. Uncertainty exists as to whether the new estimation is correct or whether the prediction is correct.
- historical data is updated 640 and used in future estimation.
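The 1 m/s example lends itself to a short sketch. This is a minimal, assumed rendering of the predict/estimate/update loop of Figure 6, with a precision-weighted blend standing in for the estimator and invented variances:

```python
# Minimal 1-D sketch of the Figure 6 loop for a dead reckoning sensor:
# predict from the motion model, compare with the new observation, and
# blend by uncertainty. The variances chosen here are illustrative only.
def predict(position, speed, dt):
    return position + speed * dt          # motion model: x' = x + v*dt

def update(predicted, pred_var, observed, obs_var):
    # Standard precision-weighted blend of prediction and observation.
    k = pred_var / (pred_var + obs_var)   # gain: trust in the observation
    fused = predicted + k * (observed - predicted)
    fused_var = (1.0 - k) * pred_var
    return fused, fused_var

# The example from the text: moving at 1 m/s, one second elapses.
pred = predict(position=0.0, speed=1.0, dt=1.0)    # expect 1.0 m
obs = 1.1                                          # sensor reports 1.1 m
fused, var = update(pred, pred_var=0.04, obs_var=0.04)
print(fused, var)   # 1.05: halfway, since both are equally uncertain
```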
- FIG. 7 provides a basic graphical representation of a multimodal approach to adaptive positioning according to one embodiment of the present invention.
- the figure presents a plurality of unimodal positional estimations 710 as would be determined using the processes described herein.
- Each of the unimodal estimations may represent a positional estimate based on a variety of sensor systems.
- the upper line 720 is one representation of the multimodal combination of the estimations. In this simple case, the grouping of unimodal estimations where the multimodal curve peaks 730 identifies the most likely positional estimate.
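A hedged sketch of this picture follows; the numbers, bandwidth and kernel are invented, and only the idea of a density peak over unimodal estimates comes from the text:

```python
# Hypothetical rendering of Figure 7: unimodal estimates 710 on a line,
# a smoothed multimodal curve 720, and its peak 730 as the best estimate.
import numpy as np

estimates = np.array([4.8, 5.0, 5.1, 5.2, 9.7])   # one outlier at 9.7
xs = np.linspace(0.0, 12.0, 1201)
bandwidth = 0.5

# Gaussian kernel density over the unimodal estimates (the curve 720).
density = np.exp(-0.5 * ((xs[:, None] - estimates) / bandwidth) ** 2).sum(axis=1)

peak = xs[np.argmax(density)]   # the peak 730
print(f"most likely position: {peak:.2f}")   # near 5.0, not the outlier
```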
- Figure 8 is a flowchart for one multimodal embodiment for positional estimation according to the present invention.
- a particle filter is applied to determine a multimodal estimation of an object's position.
- the process begins with the creation 810 of a particle-based multimodal state.
- For each particle within the multimodal state, the particle's fitness is evaluated 820 using the unimodal process.
- Sensor failure is identified 850 by comparing the fitness of particles remaining to those particles that have been removed.
- a particle is evaluated based on a defined cost function that assesses the fitness of the particle. This cost accrues from the deviation of the particle state from the "most fit" state of a particle in the current pool.
- These states are the pose of the vehicle (x, y, z, roll, pitch, yaw) and can also include sensor failure modes. For example, GPS will have a binary failure mode, fit or multi-path; thus, if an unfit particle has predicted a state with GPS in multi-path while particles from the most densely populated region (fit particles) do not match, then that particle will have a lower probability of existence after this iteration.
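A sketch of such a cost function, under stated assumptions: the squared pose-deviation term, the binary GPS mode flag and the penalty weight below are illustrative choices, not the source's actual formulation.

```python
# Hypothetical cost function in the spirit of the text: a particle's cost
# grows with its deviation from the "most fit" particle's pose, plus a
# penalty when its predicted GPS failure mode disagrees with the fit pool.
from dataclasses import dataclass

@dataclass
class Particle:
    x: float; y: float; z: float
    roll: float; pitch: float; yaw: float
    gps_multipath: bool          # binary failure mode: fit or multi-path

def particle_cost(p, best, pool_multipath_consensus, mode_penalty=5.0):
    pose_dev = ((p.x - best.x) ** 2 + (p.y - best.y) ** 2 + (p.z - best.z) ** 2
                + (p.roll - best.roll) ** 2 + (p.pitch - best.pitch) ** 2
                + (p.yaw - best.yaw) ** 2)
    # Disagreeing with the densest (fit) region on the GPS mode costs extra.
    mode_dev = mode_penalty if p.gps_multipath != pool_multipath_consensus else 0.0
    return pose_dev + mode_dev

best = Particle(10, 5, 0, 0, 0, 0.1, gps_multipath=False)
stray = Particle(12, 7, 0, 0, 0, 0.4, gps_multipath=True)
print(particle_cost(stray, best, pool_multipath_consensus=False))   # 8.09 + 5
```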
- the present invention also uses the performance of the particle filter in predicting the correct state of the system to learn from its own failures. This information can then also be used to update the sensor heatmap for better future predictions.
- Another aspect of the present invention is behavioral integration of sensor failure.
- a GPS estimation requires a clear line of sight between the receiver located on the object and four satellites orbiting overhead. The same is true with respect to trilateration from ranging transmitters. It is well known that GPS signals are degraded in canyons and urban environments. The primary reasons are either the lack of a clear, line-of-sight signal path due to obstructions, or else a condition known as "multipath". Multipath errors result from the reception of two or more instances of the same signal, with each instance having a different time of flight. The signal "bounces" off (i.e., is reflected from) terrain or structures.
- the receiver does not know which signal is truly a direct line-of-sight reception and which is a reception of a signal that has been reflected and thus possesses a longer, incorrect time of flight.
- the behavior orchestration system can cue behaviors that address position uncertainty and also adjust behavior for safety and performance. When position certainty is low, the system can initiate behaviors such as NeedToStop, AvoidAreaWithNoTags, AvoidAreaWithPoorPositionHistory, and the like.
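A minimal sketch of such cueing, assuming a scalar position-certainty input and invented thresholds (the behavior names come from the text; the API around them is hypothetical):

```python
# Hypothetical orchestration sketch: map position certainty to the cued
# behaviors named in the text. Thresholds and return format are assumptions.
def cue_behaviors(position_certainty, stop_threshold=0.2, caution_threshold=0.6):
    behaviors = []
    if position_certainty < stop_threshold:
        behaviors.append("NeedToStop")
    elif position_certainty < caution_threshold:
        behaviors += ["AvoidAreaWithNoTags", "AvoidAreaWithPoorPositionHistory"]
    return behaviors

print(cue_behaviors(0.1))    # ['NeedToStop']
print(cue_behaviors(0.45))   # ['AvoidAreaWithNoTags', 'AvoidAreaWithPoorPositionHistory']
print(cue_behaviors(0.9))    # []
```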
- FIG. 9 is a top-view rendition of a proposed path of an object using the Adaptive Positioning System of the present invention, which uses sensor heatmaps generated using observations from unimodal and multimodal state estimators. The object is assigned the task of moving from point A 910 to point B 990. Historically, several areas between the two points have been identified as experiencing sensor failure. In this case, objects traversing the lower area 920 have experienced failure of one type of positional sensor.
- the area on the upper portion of the page represents a similar region of failure 940 but one that is associated with a different type of positional sensor.
- the third area 930 immediately above the starting point represents degradation or failure of yet a third type of sensor.
- Each area of sensor failure represents a gradient of risk. In this case the center of the area has a higher likelihood of sensor failure than the outer borders.
- each of the areas of risk may possess different gradients and levels of severity.
- the most direct route from point A 910 to point B 990 is a straight line 925. Such a path, however, would take the object directly through an area of known sensor failure.
- One aspect of the present invention is to integrate into mission parameters the impact of unimodal positional sensor failures, based on historical or otherwise obtained risk data.
- a route 925 fashioned between the areas of risk minimizes positional sensor failure.
- the present invention can assess the risk to the multimodal estimation of position as low. In this case, since the risk of sensor failure occurs near the beginning of the path, dead reckoning and other sensors are still extremely accurate and thus mitigate the loss of, for example, UWB positional determination.
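The risk-gradient idea of Figure 9 can be sketched as follows; the area centers, radii, peak risks and candidate waypoints are all invented for illustration:

```python
# Hypothetical sketch of scoring candidate routes against a failure
# heatmap: each area's risk peaks at its center and falls to zero at its
# border, giving the gradient described in the text.
import math

RISK_AREAS = [                 # (center_x, center_y, radius, peak_risk)
    (30.0, 40.0, 10.0, 0.9),   # failure region straddling the direct route
    (30.0, 70.0, 12.0, 0.8),   # second region, a different sensor type
    (10.0, 52.0, 6.0, 0.6),    # third region near the starting point
]

def risk_at(x, y):
    total = 0.0
    for cx, cy, r, peak in RISK_AREAS:
        d = math.hypot(x - cx, y - cy)
        if d < r:
            total += peak * (1.0 - d / r)   # linear gradient to the border
    return total

def path_risk(waypoints, samples_per_leg=50):
    cost = 0.0
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        for i in range(samples_per_leg + 1):
            t = i / samples_per_leg
            cost += risk_at(x0 + t * (x1 - x0), y0 + t * (y1 - y0))
    return cost

direct = [(0.0, 40.0), (60.0, 40.0)]                  # straight line A -> B
threaded = [(0.0, 40.0), (30.0, 55.0), (60.0, 40.0)]  # fashioned between areas
print(path_risk(direct), path_risk(threaded))         # direct costs far more
```

A planner could minimize such a cost directly, or trade it off against path length and mission constraints.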
- Figures 10 and 11 present flowcharts depicting examples of the methodology which may be used to adaptively estimate the position of an object using multimodal estimation.
- the unimodal estimator determines 1015 an estimated position of the object for each sensor. Each of these positional estimations is maintained 1020 for each instantiation of a period of time. Environmental factors are identified 1030 and considered as the system correlates 1035 each determined position. At this point the methodology inquires 1040 whether any of the positional determinations is outside of a predetermined correlation factor. If none are outside of the predetermined correlation factor, the process returns to the beginning, receiving new data 1010 and determining new positional estimates 1015. When a sensor is found to be outside of a correlation factor, a determination is made whether the sensor's estimate is degraded or the sensor has failed. In doing so, a probability of failure is determined for each positional sensor to correctly identify 1045 one or more features used to make the positional determination.
- One of reasonable skill in the art will recognize that for each sensor the features that it uses to make a positional determination vary.
- the method thereafter determines a probability of failure by the sensor to correctly identify those features.
- the Adaptive Positioning System of the present invention identifies for each sensor whether sensor failure 1055 has occurred.
- If failure has occurred, the Adaptive Positioning System of the present invention filters out that positional determination and eliminates its contribution to the overall assessment as to the object's estimated position. With the failed sensor removed from consideration, the process begins anew and again makes a determination as to whether this or other positional sensors have failed.
- Figure 11 is a flowchart of another method embodiment for integration of the Adaptive Positioning System of the present invention with an object's behavior.
- the process begins 1105 with the determination 1110 of a plurality of estimated positions based on a plurality of positional sensors.
- historical sensor failure data is retrieved 1120 for each of the positional sensors.
- the historical failure data is correlated with the current position of the object 1140 and with the mission objectives 1150. Based on the historical positional sensor failure data, the process concludes 1195 with the mission objective behavior being modified 1160 to minimize positional sensor failure while maintaining mission objectives.
- Another embodiment of the present invention combines dead reckoning (e.g. wheel odometers, inertial data, etc., or a combination thereof) and high precision time-of-flight ranging transmitters to localize precisely in environments with sparse update rates and/or sparse radio/transmitter density.
- an object determines a local frame of reference using dead reckoning along the path traveled through space and thereafter performs a best-fit optimization of the path to available range data based on a signal transmission / reception, predefined transmitter locations and a time stamp associated with the ranging transmission.
- the location of the transmitters is known / defined a priori and each transmitter is uniquely identified by a time-of-flight conversation.
- the object initiates a time-of-flight conversation with that transmitter.
- the conversation is time stamped and paired with the dead reckoning local frame position.
- once a threshold number of conversations have taken place, a non-linear least-squares optimization is run to update the transformation of the robot's local frame of reference to a global frame of reference.
- Each time-of-flight conversation defines a ranging sphere (shell) where the object at the corresponding time stamp could have been.
- for each time-stamped range-position pair, a distance from the local frame position estimate to the ranging sphere is defined (the distance from the position to the beacon minus the range estimate).
- a non-linear, "least- squares" optimization is then run to find the dead reckoning path through the global frame with the least distance-based cost.
- Figure 12 presents three paths. Path 1 1210 and path 3 1220 represent the possible limits of drift with respect to dead reckoning technology. These two outside paths are the boundaries between which the actual path exists. Within those boundary paths 1210, 1220 exists a plurality of radio transmitters 1240, 1250, 1260, 1270, 1280 with known locations. Using ranging information from a known transmitter the actual path 1230 can be refined. As the object enters a region in which reception from a transmission is likely, a time-of-flight range reading can occur to refine the actual location of the object. In this case a path refinement occurs 6 times so as to produce an accurate depiction of the true path 1230.
- Another embodiment of the present invention allows for direct calculation of the drift of the odometry frame without first estimating the position of the robot with respect to the map.
- the design that follows describes the 2D case (X, Y, Phi) for this Kalman filter, but extension to the 3D case (X, Y, Z, Theta, Psi, Phi) should be obvious from the description.
- robot localization involves trilateration, as described herein, from ranging radios which are uniquely identifiable.
- robots generally have a form of odometry estimation on board wherein ranges and odometry can be fused using a Kalman filter.
- over long periods of time, vehicle odometry becomes unreliable. A need therefore exists to bring odometry (aka dead reckoning) in line with the robot's actual position.
- the transform begins with the classic Kalman filter form:
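The equation itself does not survive in this text. The conventional discrete-time form presumably intended is, in standard notation (not verbatim from the source):

$$x_k = F\,x_{k-1} + w_k, \qquad w_k \sim \mathcal{N}(0, Q)$$
$$z_k = h(x_k) + v_k, \qquad v_k \sim \mathcal{N}(0, R)$$

where the state $x = (X, Y, \Phi)$ is the slowly varying drift of the odometry frame (so $F \approx I$ with small process noise $Q$), and $h(\cdot)$ is the observation function described next.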
- the observation function is simply the distance from the infrastructure to the likely location of the object.
- Multipath can be filtered out by discarding large variations, once the algorithm has settled.
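A hedged sketch of such a filter follows, reduced to translation-only drift in 2D for brevity: the noise values, gate width and settling test are invented, but the structure (constant-drift prediction, range observation, innovation gating for multipath) follows the description above.

```python
# Kalman-style estimate of the odometry-frame drift, updated from range
# observations, with an innovation gate that discards likely multipath
# readings once the estimate has settled. All constants are illustrative.
import numpy as np

class DriftFilter:
    def __init__(self):
        self.x = np.zeros(2)            # estimated drift (tx, ty)
        self.P = np.eye(2) * 10.0       # drift covariance (starts uncertain)
        self.Q = np.eye(2) * 1e-4       # drift varies slowly: tiny process noise
        self.r_var = 0.05 ** 2          # range measurement variance
        self.settled = False

    def update(self, odom_pos, beacon, measured_range):
        # Predict: drift assumed near-constant, so only inflate covariance.
        self.P += self.Q
        # Observation: distance from the beacon to the drift-corrected pose.
        diff = odom_pos + self.x - beacon
        predicted_range = np.linalg.norm(diff)
        innovation = measured_range - predicted_range
        H = (diff / predicted_range).reshape(1, 2)      # Jacobian of h
        S = (H @ self.P @ H.T).item() + self.r_var
        # Multipath gate: once settled, discard large innovations (3 sigma).
        if self.settled and innovation ** 2 > 9.0 * S:
            return
        K = (self.P @ H.T / S).ravel()
        self.x += K * innovation
        self.P = (np.eye(2) - np.outer(K, H.ravel())) @ self.P
        self.settled = self.settled or np.trace(self.P) < 0.1

f = DriftFilter()
true_drift = np.array([1.5, -0.8])
beacons = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 15.0]])
rng = np.random.default_rng(1)
for k in range(200):
    odom = rng.uniform(0, 20, 2)        # odometry-frame position
    b = beacons[k % 3]
    true_range = np.linalg.norm(odom + true_drift - b)
    f.update(odom, b, true_range + rng.normal(0, 0.05))
print(f.x)   # approaches [1.5, -0.8]
```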
- FIG. 13 is a high level block diagram of a system for mobile localization of an object having an object positional frame of reference using sparse time-of-flight data and dead reckoning.
- the mobile localization system 1300 includes a dead reckoning module 1310, a time-of-flight module 1320 and a localization module 1330. Each module is communicatively coupled to each other to establish a positional location of an object.
- the time-of-flight module 1320 receives a ranging signal from a UWB transceiver 1350 of a known location.
- the time-of-flight module 1320 includes a list of known locations of a plurality of UWB transceivers 1350.
- the time-of-flight module 1320 establishes one or more conversations with the transceiver to ascertain range information. After several conversations have been received, range and bearing information can be determined, which is thereafter passed to the localization module 1330, which refines the object's positional location.
- Figure 14 is one method embodiment for mobile localization of an object using sparse time-of-flight data and dead reckoning.
- the method begins 1405 by creating 1410 a dead reckoning local frame of reference wherein the dead reckoning local frame of reference includes an estimation of localization of the object with respect to known locations of one or more Ultra Wide Band transceivers.
- the method, using the dead reckoning local frame of reference, determines 1420 when the estimation of the location of the object is within a predetermined range of one or more of the Ultra Wide Band transceivers.
- a "conversation" is initiated 1440 with at least one of the one or more of the Ultra Wide Band transceivers within the predetermined range.
- Upon receiving 1450 a signal from one or more UWB transceivers within the predetermined range, the object collects 1460 range data between the object and the one or more UWB transceivers. Using multiple conversations to establish accurate range and bearing information, the system updates 1470 the object positional frame of reference based on the collected range data, ending 1495 the process.
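A compact sketch of this loop under stated assumptions (the transceiver IDs and positions, the usable ranging distance, and the radio callback are all invented):

```python
# Hypothetical sketch of the Figure 14 loop: dead reckon continuously, and
# when the estimate comes within range of a known UWB transceiver, hold a
# ranging "conversation" and log the time-stamped range-position pair.
import math, time

UWB_TRANSCEIVERS = {"uwb-01": (12.0, 3.0), "uwb-02": (40.0, 18.0)}
CONVERSATION_RANGE = 25.0     # assumed usable ranging distance, meters

def within_range(est_pos, xcvr_pos, limit=CONVERSATION_RANGE):
    return math.dist(est_pos, xcvr_pos) <= limit

def localization_step(est_pos, range_to):
    """range_to: callable(xcvr_id) -> measured range (the 'conversation')."""
    pairs = []
    for xcvr_id, xcvr_pos in UWB_TRANSCEIVERS.items():
        if within_range(est_pos, xcvr_pos):
            measured = range_to(xcvr_id)              # initiate conversation
            pairs.append((time.time(), est_pos, xcvr_id, measured))
    return pairs   # later fed to the best-fit optimization shown earlier

# Stand-in radio: report true distance from a hidden actual position.
actual = (10.0, 5.0)
fake_radio = lambda xcvr_id: math.dist(actual, UWB_TRANSCEIVERS[xcvr_id])
print(localization_step(est_pos=(9.0, 4.0), range_to=fake_radio))
```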
- Software programming code which embodies the present invention is typically accessed by a microprocessor from long-term, persistent storage media of some type, such as a flash drive or hard drive.
- the software programming code may be embodied on any of a variety of known media for use with a data processing system, such as a diskette, hard drive, CD-ROM, or the like.
- the code may be distributed on such media, or may be distributed from the memory or storage of one computer system over a network of some type to other computer systems for use by such other systems.
- the programming code may be embodied in the memory of the device and accessed by a microprocessor using an internal bus.
- program modules include routines, programs, objects, components, data structures and the like.
- An exemplary system for implementing the invention, shown in Figure 15, includes a general purpose computing device 1500, such as in the form of a conventional personal computer, a personal communication device or the like, including a processing unit 1510, a system memory 1515, and a system bus that communicatively joins various system components, including the system memory 1515, to the processing unit.
- the system bus may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
- the system memory generally includes read-only memory (ROM) 1520, random access memory (RAM) 1540 and a non-transitory storage medium 1530.
- a basic input/output system (BIOS) 1550 containing the basic routines that help to transfer information between elements within the personal computer, such as during start-up, is stored in ROM.
- the personal computer may further include a hard disk drive for reading from and writing to a hard disk, a magnetic disk drive for reading from or writing to a removable magnetic disk.
- the hard disk drive and magnetic disk drive are connected to the system bus by a hard disk drive interface and a magnetic disk drive interface, respectively.
- the drives and their associated computer-readable media provide non-volatile storage of computer readable instructions, data structures, program modules and other data for the personal computer.
- a system for mobile localization of an object having an object positional frame of reference using sparse time-of-flight data and dead reckoning can include one or more transmitters; a dead reckoning module, wherein the dead reckoning module creates a dead reckoning local frame of reference;
- a time-of-flight module wherein, responsive to the object being within range of one of the one or more transmitters based on the dead reckoning local frame of reference, the time-of-flight module receives a signal from the transmitter and collects range data between the object and the transmitter; and
- a localization module communicatively coupled to the dead reckoning module and the time-of-flight module, wherein the localization module updates the object positional frame of reference based on the collected range data.
- each transmitter includes a unique identifier
- based on the predefined transmitter location, initiating an exchange of signals with the transmitter
- the localization module performs a "best-fit" optimization path based on range data, the predefined transmitter location and the time stamp;
- the best fit optimization is a "least-squares" optimization; and • responsive to conducting a plurality of signal exchanges between the transmitter and the object, performing a non-linear, "least-squares" optimization to update the object positional frame of reference.
- a method for mobile localization of an object having an object positional frame of reference using sparse time-of-flight data and dead reckoning can include the steps of:
- the dead reckoning local frame of reference includes an estimation of localization of the object with respect to known locations of one or more Ultra Wide Band (UWB) transceivers;
- Additional features of this method can include,
- each UWB transceiver includes a unique identifier
- the signal includes a time stamp associated with the conversation and paired with the dead reckoning local frame of reference; • wherein updating includes performing a "best-fit" optimization path based on collected range data, known locations of the one or more Ultra Wide Band (UWB) transceivers and the time stamp;
- updating includes performing a non-linear, "least-squares" optimization to update the object positional frame of reference.
- Networks can also include mainframe computers or servers, such as a gateway computer or application server (which may access a data repository).
- a gateway computer serves as a point of entry into each network.
- the gateway may be coupled to another network by means of a communications link.
- the gateway may also be directly coupled to one or more devices using a communications link. Further, the gateway may be indirectly coupled to one or more devices.
- the gateway computer may also be coupled to a storage device such as data repository.
- modules, managers, functions, systems, engines, layers, features, attributes, methodologies, and other aspects are not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, divisions, and/or formats.
- the modules, managers, functions, systems, engines, layers, features, attributes, methodologies, and other aspects of the invention can be implemented as software, hardware, firmware, or any combination of the three.
- where a component of the present invention is implemented as software, the component can be implemented as a script, as a standalone program, as part of a larger program, as a plurality of separate scripts and/or programs, as a statically or dynamically linked library, as a kernel loadable module, as a device driver, and/or in every and any other way known now or in the future to those of skill in the art of computer programming.
Abstract
Mobile localization of an object having an object positional frame of reference using sparse time-of-flight data and dead reckoning can be accomplished by creating a dead reckoning local frame of reference, including an estimation of object position with respect to known locations of one or more Ultra Wide Band transceivers. As the object moves along its path, a determination is made, using the dead reckoning local frame of reference, of when the object is within a predetermined range of one or more of the Ultra Wide Band transceivers. A conversation is then initiated, and range data between the object and the UWB transceiver(s) is collected. Using multiple conversations to establish accurate range and bearing information, the system updates the object's position based on the collected data.
Description
MOBILE LOCALIZATION USING SPARSE TIME-OF-FLIGHT RANGES
AND DEAD RECKONING
RELATED APPLICATION
[0001] The present application is a Continuation-In-Part of, and claims priority to, U.S. Non-Provisional Application 15/170665 filed 1 June 2016 and U.S. Non-Provisional Application 15/149064 filed 6 May 2016, and relates to and claims the benefit of priority to United States Provisional Patent Application no. 62/169689 filed 2 June 2015, all of which are hereby incorporated by reference in their entirety for all purposes as if fully set forth herein.
BACKGROUND OF THE INVENTION
Field of the Invention.
[0002] Embodiments of the present invention relate, in general, to estimation of an object's position and more particularly to mobile localization using sparse time-of-flight ranges and dead reckoning.
Relevant Background.
[0003] Service providers of all types have begun to recognize that positioning services (i.e., services that identify the position of an object, wireless terminal or the like) may be used in various applications to provide value-added features. A service provider may also use positioning services to provide position-sensitive information such as driving directions, local information on traffic, gas stations, restaurants, hotels, and so on. Other applications that may be provided using positioning services include asset tracking services, asset monitoring and recovery services, fleet and resource management, personal-positioning services, autonomous vehicle guidance, conflict avoidance, and so on. These various applications typically require the position of each affected device be monitored by a system or that the device be able to continually update its position and modify its behavior based on its understanding of its position.
[0004] Various systems may be used to determine the position of a device. One system uses a map network stored in a database to calculate current vehicle positions. These systems send distance and heading information, derived from either GPS or dead reckoning, to perform map matching. In other versions Light Detection and Ranging (LiDAR) data and Simultaneous Localization and Mapping (SLAM) are used to identify features surrounding an object using lasers or optics. Map matching calculates the current position based, in one instance, on the network of characteristics stored in a database. Other maps can also be used such as topographical maps that provide terrain
characteristics or maps that provide a schematic and the interior layout of a building. These systems also use map matching to calibrate other sensors. Map matching, however, has inherent inaccuracies because map matching must look back in time and match historical data to observed characteristics of a position. As such, map matching can only calibrate the sensors or serve as a position determining means when a position is identified on the map. If a unique set of characteristics cannot be found that match the sensor's position in an existing database, the position derived from this method is ambiguous. Accordingly, on a long straight stretch of highway or in a region with minimal distinguishing geologic or structural features, sensor calibration or position determination using map matching may not occur for a significant period of time, if at all.
[0005] Dead reckoning is another means by which to determine the position of a device.
Fundamentally, dead reckoning is based on knowing an object's starting position and its direction and distance of travel thereafter. Current land-based dead reckoning systems use an object's speed sensors, rate gyros, reverse gear hookups, and wheel sensors to "dead reckon" the object position from a previously known position. Dead reckoning is susceptible to sensor error and to cumulative errors from aggregation of inaccuracies inherent in time-distance-direction measurements. Furthermore, systems that use odometers and reverse gear hookups lack portability due to the required connections. Moreover, the systems are hard to install in different objects due to differing odometers' configurations and odometer data varies with temperature, load, weight, tire pressure and speed. Nonetheless, dead reckoning is substantially independent of environmental conditions and variations.
[0006] A known way of localizing a robot involves "trilateration", which is a technique that determines position based on distance information from uniquely-identifiable ranging radios. Commonly, the robot also has a form of odometry estimation on board as well, in which case the ranges and odometry can be fused in a Kalman filter. However, over large periods of time the vehicle odometry cannot be trusted on its own. As such, what is really desired from the ranging radios is not a localization estimate, but a correction that can be applied to the odometry frame of reference to bring it in line with that of the actual robot's position. This transform should vary slowly over time and, as such, can be assumed to be constant.
[0007] Current ranging-radio localization techniques will have a difficult time scaling to
multiple vehicles or persons in the same area because of bandwidth limits. Current technology requires upwards of 40Hz range updates from at least four ranging radios to give precise estimates of global frame motion. This alone is nearly at the update limit of the technology. Fewer required ranging radios (no trilateration) and significantly lower update rates will allow the technology to expand to environments with many more objects localized.
[0008] The most well known positioning system is the Global Navigation Satellite System
(GNSS), comprised of the United States' Global Positioning System (GPS) and the Russian Federation's Global Orbiting Navigation Satellite System (GLONASS). A European Satellite System is on track to join the GNSS in the near future. In each case these global systems are comprised of constellations of satellites orbiting the earth. Each satellite transmits signals encoded with information that allows receivers on earth to measure the time of arrival of the received signals relative to an arbitrary point in time. This relative time-of-arrival measurement may then be converted to a "pseudo-range". The position of a satellite receiver may be accurately estimated (to within 10 to 100 meters for most GNSS receivers) based on a sufficient number of pseudo-range measurements.
[0009] GPS/GNSS includes Navstar GPS and its successors, i.e., differential GPS (DGPS),
Wide- Area Augmentation System (WAAS), or any other similar system. Navstar is a
GPS system which uses space-based satellite radio navigation, and was developed by the U.S. Department of Defense. Navstar GPS consists of three major segments: space, control, and end-user segments. The space segment consists of a constellation of satellites placed in six orbital planes above the Earth's surface. Normally, constellation coverage provides a GPS user with a minimum of five satellites in view from any point on earth at any one time. The satellite broadcasts an RF signal, which is modulated by a precise ranging signal and a coarse acquisition code ranging signal to provide navigation data. This navigation data, which is computed and controlled by the GPS control segment for all GPS satellites, includes the satellite's time, clock correction and ephemeris parameters, almanac and health status. The user segment is a collection of GPS receivers and their support equipment, such as antennas and processors, which allow users to receive the code and process information necessary to obtain position, velocity and timing measurements.
[00010] Unfortunately, GPS may be unavailable in several situations where the GPS signals
become weak, susceptible to multi-path interference, corrupted, or non-existent as a result of terrain or other obstructions. Such situations include urban canyons, indoor positions, underground positions, or areas where GPS signals are being jammed or subject to RF interference. Examples of operations in which a GPS signal is not accessible or substantially degraded include both civil and military applications, including, but not limited to: security, intelligence, emergency first-responder activities, and even the position of one's cellular phone.
[00011] In addition to GPS-type trilateration schemes another trilateration technique involves the use of radio frequency (RF) beacons. The position of a mobile node can be calculated using the known positions of multiple RF reference beacons (anchors) and measurements of the distances between the mobile node and the anchors. The anchor nodes can pinpoint the mobile node by geometrically forming four or more spheres surrounding the anchor nodes which intersect at a single point that is the position of the mobile node. Unfortunately, this technique has strict infrastructure requirements, requiring at least three anchor nodes for a 2D position and four anchor nodes for a 3D position. The technique is further complicated by being heavily dependent on relative node geometry
and suffers from the same types of accuracy errors as GPS, due to RF propagation complexities.
[00012] Many sensor networks of this type are based on position measurements using such
techniques which measure signal differences - differences in received signal strength (RSS), signal angle of arrival (AoA), signal time of arrival (ToA) or signal time difference of arrival (TDoA) - between nodes, including stationary anchor nodes.
Ambiguities using trilateration can be eliminated by deploying a sufficient number of anchor nodes in a mobile sensor network, but this method incurs the increased infrastructure costs of having to deploy multiple anchor nodes.
[00013] Inertial navigation units (INUs), consisting of accelerometers, gyroscopes and
magnetometers, may be employed to track an individual node's position and orientation over time. While essentially an extremely precise application of dead reckoning, highly accurate INUs are typically expensive, bulky, heavy, power-intensive, and may place limitations on node mobility. INUs with lower size, weight, power and cost are typically also much less accurate. Such systems using only inertial navigation unit (INU) measurements have a divergence problem due to the accumulation of "drift" error - that is, cumulative dead-reckoning error, as discussed above - while systems based on inter-node ranging for sensor positioning suffer from flip and rotation ambiguities.
[00014] Many navigation systems are hybrids which utilize a prescribed set of the aforementioned position-determining means to locate an object's position. The positioning-determining means may include GPS, dead reckoning systems, range-based determinations and map databases, but each is application-specific. Typically, one among these systems will serve as the primary navigation system while the remaining position-determining means are utilized to recalibrate cumulative errors in the primary system and fuse correction data to arrive at a more accurate position estimation. Each determining means has its own strengths and limitations, yet none identifies which of all available systems is optimized, at any particular instance, to determine the object's position.
[00015] The prior art also lacks the ability to identify which of the available positioning systems is unreliable or has failed and which among these systems is producing, at a given instant in time, the most accurate position of an object. Moreover, the prior art does not approach position estimation from a multimodal approach, but rather attempts to "fuse" collected data to arrive at a better - but nonetheless, unimodal - estimation. What is needed is an Adaptive Positioning System that can analyze data from each of a plurality of positioning systems and determine - on an iterative basis - which systems are providing the most accurate and reliable positional data, to provide a precise, multimodal estimation of position across a wide span of environmental conditions. Moreover, a need further exists to modify an object's behavior to accommodate the path- and speed-dependent accuracy requirements of certain positioning systems' position data. Additional advantages and novel features of this invention shall be set forth in part in the description that follows, and in part will become apparent to those skilled in the art upon examination of the following specification or may be learned by the practice of the invention. The advantages of the invention may be realized and attained by means of the
instrumentalities, combinations, compositions, and methods particularly pointed out in the appended claims.
SUMMARY OF THE INVENTION
[00016] Features of the adaptive localization system of the present invention include mobile localization (i.e., position determination) of an object using sparse time-of-flight data and dead reckoning. Mobile localization of an object positional frame of reference using sparse time-of-flight data and dead reckoning is accomplished, according to one embodiment of the present invention, by creating a dead reckoning local frame of reference wherein the dead reckoning local frame of reference includes an estimation of localization (position) of the object with respect to known locations of one or more Ultra Wide Band (UWB) transceivers. As the object moves along its desired path the method, using the dead reckoning local frame of reference, determines when the estimation of location of the object is within a predetermined range of one or more of the Ultra Wide Band transceivers. Thereafter a conversation is initiated with the one or more of the Ultra
Wide Band transceivers within the predetermined range. Upon receiving a signal from the one or more UWB transceivers within the predetermined range, the object collects range data between the object and the one or more UWB transceivers. Using multiple conversations to establish accurate range and bearing information, the localization system updates the object positional frame of reference based on the collected range data.
[00017] According to another embodiment of the present invention a system for mobile
localization of an object having an object positional frame of reference using sparse time-of-flight data and dead reckoning includes one or more transmitters and a dead reckoning module wherein the dead reckoning module creates a dead reckoning local frame of reference. With the dead reckoning frame of reference established, a time-of-flight module, responsive to the object being within range of one or more transmitters based on the dead reckoning local frame of reference, establishes a conversation with a transmitter to collect range data between the object and the transmitter. Using this range data a localization module - which is communicatively coupled to the dead reckoning module and the time-of-flight module - updates the object positional frame of reference based on the collected data.
[00018] Other features of an Adaptive Positioning System (APS) of the present invention
synthesize one or more unimodal positioning systems by utilizing a variety of different, complementary methods and sensor types to estimate the multimodal position of the object as well as the health/performance of the various sensors providing that position. Examples of different sensor types include: 1) GPS/GNSS; 2) dead-reckoning systems using distance-time-direction calculations from some combination of wheel encoders, inertial sensors, compasses, tilt sensors and similar dead-reckoning components; 3) optical, feature-based positioning using some combination of lasers, cameras, stereo- vision systems, multi-camera systems and multispectral/hyperspectral or IR/thermal imaging systems; 4) range-based positioning using some combination of peer-to-peer (P2P), active-ranging systems, such as P2P ultra-wideband radios, P2P ultra-low-power Bluetooth, P2P acoustic ranging, and various other P2P ranging schemes and sensors. 5) Radar-based positioning based on some combination of sensors such as Ultra-Wideband
(UWB) radar or other forms of radar, and various other sensor types commonly used in position determination.
[00019] Most individual sensors have limitations that cannot be completely overcome. The
embodiments of the present invention discussed below provide a way to mitigate individual sensor limitations through use of an adaptive positioning methodology that evaluates the current effectiveness of each sensor's contribution to a positional estimation by iteratively comparing it with the other sensors currently used by the system. The APS creates a modular framework in which the sensor data is fully utilized when it is healthy, but that data is ignored or decremented in importance when it is not found to be accurate or reliable. Additionally, when a sensor's accuracy is found to be questionable, other sensors can be used to determine the relative health of that sensor by reconciling errors at each of a plurality of unimodal positional estimations.
[00020] APS provides a means to intelligently fuse and filter disparate data to create a highly reliable, highly accurate positioning solution. At its core the APS is unique because it is designed around the expectation of sensor failure. The APS is designed to use commonly-used sensors, such as dead reckoning, combined with sub-optimally-utilized sensors such as GPS, with unique sensors, such as Ultra-Wideband (UWB), providing critical redundancy in areas where other sensors fail. Each sensor system provides data to arrive at individual estimations of the position of a plurality of, in one embodiment, "particles". A "particle" is defined as a unimodal positional estimate from a single sensor or set of sensors. The system thereafter uses a multimodal estimator to identify a positional estimation based on the scatter-plot density of the particles.
[00021] The invention assesses the multimodal approach using techniques well known in the art, but it further applies historical data information to analyze which sensors may have failed in their positional estimations as well as when sensor data may be suspect, based on historical failures or degraded operations. Moreover, the present invention uses its understanding of historical sensor failure to modify an object's behavior to minimize degraded-sensor operations and to maximize the accuracy of positional estimation.
Current approaches do not offer a systematic means to adapt behavior in response to
positioning requirements and/or contingencies. Unlike traditional approaches, APS suggests behavioral modifications or autonomously decides to take action to change behavior to improve positioning or to avoid likely areas of positional degradation.
[00022] One key aspect of the present invention is the novel introduction of UWB peer-to-peer ranging in conjunction with the use of UWB depth-imaging radar. Previously, UWB peer-to-peer (P2P) ranging had been used for positioning, but had always suffered from the need for environmental infrastructure. APS employs a systematic combination of UWB P2P ranging and UWB radar ranging to provide a flexible positioning solution that does not depend on coverage of P2P modules within the environment. This is a key advantage for the long-term use of technologies like self-driving cars and related vehicles, especially during the pivotal time of early adoption where the cost and effort associated with introduction of UWB modules into the environment will mean that solutions not requiring complete coverage have a big advantage.
[00023] An important aspect of the Adaptive Positioning System is the ability to use radar depth imagery to provide a temporal and spatial context for reasoning about position within the local frame of reference. This complements various other means of P2P ranging, such as the use of UWB modules to establish landmarks. Other examples of P2P ranging include acoustical ranging, as well as a variety of other time-of-flight and/or time-of-arrival techniques. These P2P ranging methods are very useful for removing error in the positioning system, but they are only useful when they are available. In contrast, the depth image from the UWB Radar Depth Imagery System (URDIS) can make use of almost any feature already existing in the environment. The function of this URDIS technology will be further discussed in the invention description section that follows.
[00024] URDIS allows the APS system to reference organic, ubiquitous features in the world either as a priori contextual backdrops or as recent local-environment representations that can be used to reduce odometric drift and error until the next active landmark (i.e.
another UWB module) can be located. Even if no other active landmark is identified, the use of URDIS and other UWB-radar systems offer the potential for dramatic improvements in positioning.
[00025] This method combines the benefits of a stereo-vision system with the benefits of LiDAR, but with the added advantage that because it uses UWB signals, it does not have line-of-sight limitations. This differentiates the current invention from strategies employing cameras or LiDAR because the latter are limited to line-of-sight measurements. With cameras and LiDAR, obstacles such as crowds of people, snow drifts, stacks of pallets, etc., obscure the view of would-be environmental landmarks and features. Worse, naturally-occurring obscurants such as rain, snow, fog and even vegetation further limit cameras and LiDAR.
[00026] Traditional approaches that try to match the current view of the environment to a pre- made map can be easily rendered useless by these obstacles/obscurants, but the proposed method of using a UWB Radar depth image allows APS to reference a much broader range of features and to locate these features even within cluttered and dynamic areas. By using UWB P2P ranging together with UWB depth-radar ranging, APS is able to leverage the benefits of active landmarks at known positions while at the same time producing accurate positioning results over long stretches in between those known landmarks by using UWB radar to positively fix organic, readily available features, all without the need for infrastructure changes to the environment.
[00027] The features and advantages described in this disclosure and in the following detailed description are not all-inclusive. Many additional features and advantages will be apparent to one of ordinary skill in the relevant art in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter; reference to the claims is necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[00028] The aforementioned and other features and objects of the present invention and the
manner of attaining them will become more apparent, and the invention itself will be best understood, by reference to the following description of one or more embodiments taken in conjunction with the accompanying drawings, wherein:
[00029] Figure 1 shows a high-level block diagram illustrating components of an Adaptive Positioning System according to one embodiment of the present invention;
[00030] Figure 2 illustrates one embodiment of a multimodal positional estimation from an
Adaptive Positioning System;
[00031] Figure 3 shows a high-level block diagram of a multimodal estimator and a unimodal estimator as applied within an Adaptive Positioning System;
[00032] Figures 4A and 4B are depictions of multimodal positional estimation including the recognition - and removal from consideration - of a failed sensor, according to one embodiment of the APS;
[00033] Figure 5 presents a high-level flowchart for a methodology according to one embodiment of the present invention to combine unimodal and multimodal estimation to determine an object's position;
[00034] Figure 6 is a flowchart of a methodology, according to one embodiment of the present invention, for predicting the state of an object using a unimodal estimator;
[00035] Figure 7 provides a basic graphical representation of a multimodal approach to adaptive positioning according to one embodiment of the present invention;
[00036] Figure 8 is a flowchart for one multimodal embodiment for positional estimation
according to the present invention;
[00037] Figure 9 is a top-view illustration of an overlay of a mission objective path with historical sensor failure data used to revise and optimize movement of an object to minimize sensor failure;
[00038] Figure 10 is a flowchart of another method embodiment for multimodal adaptive
positioning of an object according to the present invention;
[00039] Figure 11 is a flowchart of one method embodiment for modifying an object's behavior based on historical sensor failure data;
[00040] Figure 12 is a graphical depiction of combined dead reckoning and high precision time-of-flight ranging radio to identify an optimized path;
[00041] Figure 13 is a high level block diagram for a system for mobile localization of an object having an object positional frame of reference using sparse time-of-flight data and dead reckoning according to one embodiment of the present invention;
[00042] Figure 14 is a flowchart method embodiment according to the present invention for mobile localization of an object having an object positional frame of reference using sparse time-of-flight data and dead reckoning; and
[00043] Figure 15 is a representation of a computing environment suitable for implementation of the Adaptive Positioning System of the present invention.
[00044] The Figures depict embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
DESCRIPTION OF THE INVENTION
[00045] An Adaptive Positioning System (APS) synthesizes a plurality of positioning systems by employing a variety of different, complementary methods and sensor types to estimate the position of the object while at the same time assessing the health/performance of each of the various sensors providing positioning data. All positioning sensors have particular failure modes or certain inherent limitations which render their determination of a particular position incorrect. However, these failure modes and limitations can be neither completely mitigated nor predicted. The various embodiments of the present invention provide a way to mitigate sensor failure through use of an adaptive positioning method that iteratively evaluates the current effectiveness of each sensor/technique by comparing its contribution to those of other sensors/techniques currently available to the system.
[00046] The APS of the present invention creates a modular framework in which the sensor data from each sensor system can, in real time, be fully utilized when it is healthy, but also
ignored or decremented when it is found to be inaccurate or unreliable. Sensors other than the sensor being examined can be used to determine the relative health of the sensor in question and to reconcile errors. For example, obscurants in the air such as dust, snow, sand, fog and the like make the positioning determination of an optical-based sensor suspect; in such a case that sensor's data should be discarded or used with caution.
[00047] In contrast, Ultra Wide Band (UWB) ranging and radar are unaffected by obscurants, though each may experience interference from strong electromagnetic signals in the environment. The Adaptive Positioning System of the present invention provides a means to intelligently fuse and filter this disparate data to create a highly reliable, highly accurate positioning solution. At its core, the APS is designed around the expectation of sensor failure. The APS is designed to varyingly combine commonly-used positional sensor estimations, such as those derived from dead reckoning, GPS and other unique sensors such as Ultra-Wideband (UWB) sensors - all of which provide critical redundancy in areas where unimodal-only systems fail - to arrive at a plurality of unimodal positional estimations for an object. Each of these estimations feeds into a multimodal estimator that analyzes the relative density of unimodal estimations to arrive at a likely position of the object. The process is iterative, and in each instant of time not only may each unimodal estimate vary, but the multimodal estimation may vary, as well.
[00048] Embodiments of the present invention are hereafter described in detail with reference to the accompanying Figures. Although the invention has been described and illustrated with a certain degree of particularity, it is understood that the present disclosure has been made only by way of example and that numerous changes in the combination and arrangement of parts can be resorted to by those skilled in the art without departing from the spirit and scope of the invention.
[00049] The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the present invention as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and
modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
[00050] The terms and words used in the following description and claims are not limited to the bibliographical meanings, but, are merely used by the inventor to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of exemplary embodiments of the present invention are provided for illustration purpose only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
[00051] By the term "substantially" it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.
[00052] Like numbers refer to like elements throughout. In the figures, the sizes of certain lines, layers, components, elements or features may be exaggerated for clarity.
[00053] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Thus, for example, reference to "a component surface" includes reference to one or more of such surfaces.
[00054] As used herein any reference to "one embodiment" or "an embodiment" means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
[00055] As used herein, the terms "comprises," "comprising," "includes," "including," "has,"
"having" or any other variation thereof, are intended to cover a non-exclusive inclusion.
For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, "or" refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
[00056] Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the specification and relevant art and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein. Well-known functions or constructions may not be described in detail for brevity and/or clarity.
[00057] In the interest of clarity, and for purposes of the present invention, "unimodal" describes a probability distribution that has a single mode. A normal distribution is unimodal. As applied to the present invention, a unimodal positional estimate is a single, unified estimation of the position of an object. Sensor data from a plurality of sensors may be fused together to arrive at a single, unimodal estimation.
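By way of a non-limiting illustration, the following sketch fuses several independent Gaussian position estimates into a single unimodal estimate using inverse-variance weighting; the sensor readings and variances are hypothetical:

```python
import numpy as np

def fuse_unimodal(estimates, variances):
    """Fuse independent Gaussian (x, y) estimates into one unimodal
    estimate via inverse-variance weighting -- a minimal illustration."""
    estimates = np.asarray(estimates, dtype=float)      # shape (n, 2)
    weights = 1.0 / np.asarray(variances, dtype=float)  # shape (n,)
    fused = (weights[:, None] * estimates).sum(axis=0) / weights.sum()
    fused_variance = 1.0 / weights.sum()                # tighter than any input
    return fused, fused_variance

# Hypothetical readings from three sensors, each with its own variance.
position, variance = fuse_unimodal(
    [(10.2, 4.9), (10.0, 5.1), (10.4, 5.0)], [0.25, 0.10, 0.40])
```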
[00058] In contrast, "multimodal" is characterized by several different modes of activity or occurrence. In this case, positional estimations using a multimodal approach receive inputs from a plurality of modalities, which increases robustness. In essence, the weaknesses or failures of one modality are offset by the strengths of another.
[00059] The present invention relates in general to positional estimation and more particularly to estimation of the position of an object. Frequently the object is a device, robot, or mobile device. In robotics, a typical task is to identify specific objects in an image and to determine each object's position and orientation relative to some coordinate system. This information can then be used, for example, to allow a robot to manipulate an object or to avoid moving into the object. The combination of position and orientation is referred to as the "pose" of an object, even though this concept is sometimes used to describe only the orientation. Exterior orientation and translation are also used as synonyms for pose.
[00060] The specific task of determining the pose of an object in an image (or stereo images, image sequence) is referred to as pose estimation. The pose estimation problem can be solved in different ways depending on the image sensor configuration, and choice of methodology.
[00061] It will be also understood that when an element is referred to as being "on," "attached" to,
"connected" to, "coupled" with, "contacting", "mounted" etc., another element, it can be directly on, attached to, connected to, coupled with or contacting the other element or intervening elements may also be present. In contrast, when an element is referred to as being, for example, "directly on," "directly attached" to, "directly connected" to, "directly coupled" with or "directly contacting" another element, there are no intervening elements present. It will also be appreciated by those of skill in the art that references to a structure or feature that is disposed "adjacent" another feature may have portions that overlap or underlie the adjacent feature.
[00062] Some portions of this specification are presented in terms of algorithms or symbolic
representations of operations on data stored as bits or binary digital signals within a machine memory (e.g., a computer memory). These algorithms or symbolic
representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. As used herein, an "algorithm" is a self-consistent sequence of operations or similar processing leading to a desired result. In this context, algorithms and operations involve the manipulation of information elements. Typically, but not necessarily, such elements may take the form of electrical, magnetic, or optical signals capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by a machine. It is convenient at times, principally for reasons of common usage, to refer to such signals using words such as "data," "content," "bits," "values," "elements," "symbols," "characters," "terms," "numbers," "numerals," "words," or the like. These specific
words, however, are merely convenient labels and are to be associated with appropriate information elements.
[00063] Unless specifically stated otherwise, discussions herein using words such as
"processing," "computing," "calculating," "determining," "presenting," "displaying," or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information.
[00064] Likewise, the particular naming and division of the modules, managers, functions,
systems, engines, layers, features, attributes, methodologies, and other aspects are not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, divisions, and/or formats. Furthermore, as will be apparent to one of ordinary skill in the relevant art, the modules, managers, functions, systems, engines, layers, features, attributes, methodologies, and other aspects of the invention can be implemented as software, hardware, firmware, or any combination of the three. Of course, wherever a component of the present invention is implemented as software, the component can be implemented as a script, as a standalone program, as part of a larger program, as a plurality of separate scripts and/or programs, as a statically or dynamically linked library, as a kernel loadable module, as a device driver, and/or in every and any other way known now or in the future to those of skill in the art of computer programming. Additionally, the present invention is in no way limited to implementation in any specific programming language, or for any specific operating system or environment. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention.
[00065] One aspect of the present invention is to enhance and optimize the ability to estimate an object's position by identifying weaknesses or failures of individual sensors and sensor systems while leveraging the position-determining capabilities of other sensor systems. For example, the limitations of GPS-derived positioning in urban areas or outdoor areas
with similar line-of-sight limitations (e.g., mountainous areas, canyons, etc.) can be offset by range information from other sensors (e.g., video, radar, sonar, laser data, etc.).
According to one embodiment of the present invention, laser sensing - via LIDAR - can be used to fix positions using prominent and persistent topographic features, enabling APS to validate other systems' inputs and to enhance each system's accuracy. Even so, LIDAR is limited by the requirement for topography or other environmental features prominent and identifiable enough from which to fix an accurate LIDAR position. Thus other sensor systems are likewise incorporated into APS. By feeding the enhanced position estimate into a real-time map-matching algorithm, APS can enhance sensor data elsewhere within the system and can also use the enhanced position estimate to identify real-time changes in the environment. APS can then adjust according to these real-time changes, to improve perception and to provide reactive behaviors which are sensitive to these dynamic environments.
[00066] As previously discussed, dead reckoning uses a combination of components to track an object's position. The position will eventually degrade, however, over long distances, as a result of cumulative errors inherent in using dead-reckoning methodology. Using the concepts of APS, errors in dead reckoning can be mitigated, somewhat, by using, among other things, inertial sensors, in addition to traditional compass data. Dead reckoning thus complements GPS and other positioning capabilities, enhancing the overall accuracy of APS. As with GPS and laser positioning, enhanced dead reckoning can improve detection and mapping performance and increase the overall reliability of the system.
[00067] Another aspect of the present invention is the use of one or more active position ultra wide band (UWB) transceivers or tags. Active tag tracking is not limited to line of sight and is not vulnerable to jamming. These UWB radio frequency (RF) identification (ID) tag systems (collectively RFID) comprise a reader with an antenna, a transmitter, and software such as a driver and middleware. One function of the UWB RFID system is to retrieve state and positional information (ID) generated by each tag (also known as a transponder). Tags are usually affixed to objects so that it becomes possible to locate the goods without a direct line of sight, given the low frequency nature of their transmission. A tag can include additional information other
than the ID. For example, using triangulation of the tag's position and the identity of a tag, heading and distance to the tag's position can be ascertained. A single tag can also be used as a beacon for returning to a specific position or carried by an individual or vehicle to affect a follow behavior from other like equipped objects. As will be appreciated by one of reasonable skill in the relevant art, other active ranging technology is equally applicable to the present invention and is contemplated in its use. The use of the term "UWB", "tags" or "RFID tags," or the like, is merely exemplary and should not be viewed as limiting the scope of the present invention.
[00068] In one implementation of the present invention, an RFID and/or UWB tag can not only be associated with a piece of stationary infrastructure with a known, precise position, but can also provide active relative positioning between movable objects. For example, even if two or more tags are unaware of their precise positions, they can provide accurate relative positioning. Moreover, the tag can be connected to a centralized tracking system to convey interaction data. As a mobile object interacts with a tag of known position, the variances in the object's positional data can be refined. Likewise, a tag can convey not only relative position between objects but relative motion between objects as well. Such tags possess low detectability, are not limited to line of sight, and are not vulnerable to jamming. And, depending on how they are mounted and the terrain in which they are implemented, a tag and tracking system can permit user/tag interaction anywhere from 200 feet to a range of up to two miles. Currently, tags offer relative position accuracy of approximately +/-12 cm for each interactive object outfitted with a tag. As will be appreciated by one of reasonable skill in the relevant art, the use of the term object is not intended to be limiting in any way. While the present invention is described by way of examples in which objects may be represented by vehicles or cellular telephones, an object is to be interpreted as an arbitrary entity that can implement the inventive concepts presented herein. For example, an object can be a robot, vehicle, aircraft, ship, bicycle, or other device or entity that moves in relation to another. The collaboration and communication described herein can involve multiple modalities of communication across a plurality of mediums.
[00069] The active position tags of the present invention can also provide range and bearing information. Using triangulation and trilateration between tags, a route can be established using a series of virtual waypoints. Tags can also be used to attract other objects or to repel objects by creating a buffer zone. For example, a person wearing a tag can create a 4-foot buffer zone which objects will not enter, protecting the individual. Similarly, a series of tags can be used to line a ditch or similar hazard to ensure that the object will not enter a certain region. According to one or more embodiments of the current invention, multiple ranges between the active position tags can be used to create a mesh network of peer-to-peer (P2P) positioning where each element can contribute to the framework. Each module or object can vote as to its own position and subsequently the relative position of its nearest neighbors. Importantly, the invention provides a means of supplementing the active tags with ranges to other landmarks. Thus when other active modules or objects are not present, not visible, or not available, other sensors/modalities of the APS come into play to complement the mesh network.
[00070] One novel aspect of the invention is the use of the URDIS modality as part of the dynamic positioning process. Almost every environment has some features that are uniquely identifiable with the URDIS sensor and can therefore be used effectively as a reference while the mobile system moves through distance and time. This reference functions in the following ways: a) an a priori characterization of the environment by the URDIS sensor provides a contextual backdrop for positioning; b) an ongoing local environment representation uses the last N depth scans from the URDIS to create a brief temporal memory of the environment that can be used for tracking relative motion; and c) the URDIS data can be used for identifying what is changing in the environment, which allows those changes to be either added to the local environment map for future positioning purposes or discarded if they continue to move (in which case they are not useful for positioning).
[00071] The unique time-coded ultra-wideband radio frequency pulse system of the present invention allows a single module to use multiple antennas to send and receive pulses as they reflect off of the local environment. By using the time coding in each pulse it is
possible to differentiate the multi-path reflections. Using several antennae allows the UWB radio to be used as a depth imaging sensor because the differences in the time of flight observed from one antenna to the next allow the system to calculate the shape of the object causing the reflection based on the known positions of the various antennae.
[00072] The UWB depth radar provides a means to find invariant features for positioning
purposes even when much of the rest of the environment may be moving. This is both because the radio pulses are not limited to line of sight and because the timing accuracy inherent to the UWB time-based approach allows for the careful discrimination of what is moving and changing in the environment. By accurately assessing in real time what is moving and changing, it becomes much easier to identify the invariant landmarks that should be used for positioning. This ability to use the coherent timing inherent to the URDIS data is a key element of the invention, as without it the moving vehicle has great difficulty deciding which data identified by range sensors such as cameras and LiDAR will serve as effective landmarks.
[00073] The present invention iteratively adapts and determines an object's position by individually assessing each sensor's positional determination. Figure 1 presents a high level block diagram of an Adaptive Positioning System according to one embodiment of the present invention. The APS 100 includes a multimodal positional state estimator 120 which receives positional estimations from a plurality of positional sensors. In one embodiment of the present invention the different positional sensors include: 1) range based estimation 130 such as GPS or UWB technology, as well as combinations of peer to peer (P2P) active ranging systems such as P2P ultra-wideband radios, P2P ultra-low power Bluetooth, P2P acoustic ranging, etc.; 2) dead reckoning systems 160 using some combination of wheel encoders, inertial sensors, compasses and tilt sensors; 3) direct relative frame measurement 170 and optical feature-based positioning using some combination of lasers, cameras, stereo vision systems, multi-camera systems and IR/thermal imaging systems; 4) bearing based positional estimations 180 such as trilateration including optical and camera systems; 5) range and bearing estimations 140 such as LiDAR and SLAM; and 6) inertial sensing systems 150 using an inertial measuring unit.
[00074] Each of these, and other positional sensor systems, creates, and provides to the multimodal state estimator 120, a unimodal estimation 190 of an object's position. According to one aspect of the present invention the APS concurrently maintains each of a plurality of unimodal position estimations to enable the APS to determine which, if any, of the sensor system estimations have failed.
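One non-limiting way to picture this concurrent bookkeeping is a container that holds each sensor system's unimodal estimate side by side without merging them; the field and sensor names below are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class UnimodalEstimate:
    sensor: str            # e.g. "gps", "dead_reckoning", "lidar_slam"
    position: tuple        # (x, y, z) estimate in a common frame
    covariance: float      # scalar uncertainty, for simplicity
    healthy: bool = True   # cleared when the estimation is judged failed

@dataclass
class MultimodalState:
    """Keeps every modality's estimate separate; nothing is fused here."""
    estimates: dict = field(default_factory=dict)

    def update(self, estimate: UnimodalEstimate) -> None:
        self.estimates[estimate.sensor] = estimate

    def surviving(self) -> list:
        return [e for e in self.estimates.values() if e.healthy]
```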
[00075] Figure 2 is a graphic illustration of the positional estimation processes of the APS according to one embodiment of the present invention. Figure 2 represents a positional estimation at an instant of time. As will be appreciated by one skilled in the art, the present invention's estimation of an object's position is iterative and is continually updated based on determinations by each of a plurality of positional sensors. Figure 2 represents an APS having four positional sensors. Other versions may have more or fewer means to individually determine an object's position. Each of the sensors shown in Figure 2 is represented by a unique geometric shape. For example, a GPS sensor system may be represented by a circle while a dead reckoning system is represented by a rectangle.
[00076] The APS of the present invention recognizes that an object's actual position 210 is almost invariably different than the position estimated by one or more positional sensors. In this case the object's actual position 210 is represented by a small dot surrounded by a triangle. The remaining geometric figures represent each sensor's unimodal estimation of the object's position. A first sensor 220, represented by a circle, estimates the object's position 225 slightly above and to the right of the actual position 210. A second sensor 230, shown as an octagon, estimates the position slightly to the right of the actual position 210. Sensor number three 250, shown as a hexagon, estimates the position of the object 255 left of its actual position 210, while the last sensor 240 estimates a position 245 displaced to the right.
[00077] Unlike sensor fusion, which would merge all of these estimates into a unified best fit or combined estimate, the present invention maintains each unimodal estimation and thereafter analyzes the individual estimations within a multimodal field to ascertain whether one or more of the sensors has failed. Upon detection of a failure, that particular estimation is disregarded or degraded.
[00078] One aspect of the present invention is that the multimodal state estimator expects that the unimodal estimation derived from one or more sensors will fail. In the interest of clarity, sensor failure occurs when a sensor's position estimate deviates from the other sensors' estimates by more than a predefined deviation limit or covariance. With each estimation by the plurality of sensors a degree of certainty is determined. If a particular sensor's estimation is, for example, two standard deviations apart from the expected position estimate, then that sensor may be considered to have failed and its contribution to the positional estimate removed. One skilled in the relevant art will recognize that the deviation level used to establish sensor failure may vary and indeed may vary dynamically based on conditions.
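A minimal sketch of such a failure test, assuming a fixed two-standard-deviation limit (which, as noted above, could instead vary dynamically with conditions):

```python
import numpy as np

def detect_failed_sensors(estimates, n_sigma=2.0):
    """Flag any unimodal (x, y) estimate lying more than n_sigma standard
    deviations from the consensus of the remaining estimates."""
    pts = np.asarray(estimates, dtype=float)        # shape (n, 2)
    failed = []
    for i in range(len(pts)):
        others = np.delete(pts, i, axis=0)
        mean = others.mean(axis=0)
        std = others.std(axis=0) + 1e-9             # avoid divide-by-zero
        if np.any(np.abs(pts[i] - mean) > n_sigma * std):
            failed.append(i)                        # sensor i is suspect
    return failed

# The fourth estimate is a gross outlier and is flagged.
print(detect_failed_sensors([(10.0, 5.0), (10.1, 5.1), (9.9, 4.9), (30.0, 5.0)]))
```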
[00079] The present invention identifies from various positional sensors different spatial conditions or positional states or particles. By doing so the present invention uses a non-Gaussian state representation. In a Gaussian state, or a single state with sensor fusion, the uncertainty with respect to the object's position is uniformly distributed around where the system thinks the object is located. A Gaussian state (also referred to as a normal or unimodal state distribution) merges the individual positional determinations to arrive at a combined or best guess of where the object is located. The present invention, by contrast, merges unimodal or Gaussian state estimation with a non-Gaussian state based on continuous multimodal, discrete binary (unimodal) positions that are compared against each other yet nonetheless remain separate. The present invention outputs a certainty value based on its ability to reconcile multiple modalities. This certainty value can also be used as an input to modify the behavior of an object, such as a robot, tasked to accomplish a specific goal. The object's/robot's behavior is modified in light of the new information as it works to accomplish the goal. When a position estimation has a high degree of uncertainty the system directing the behavior of the object can recognize this and take specific behavioral action. Such modifications to the behavior of an object can result in the object slowing down, turning around, backing up, traveling in concentric circles, leaving an area where uncertainty is high, or even remaining still and asking for help.
[00080] In one embodiment of the present invention, each of these unimodal estimations, each of which is arrived at using a fusion of data collected by a plurality of sensors, is treated as a particle in a multimodal positional estimation state. A particle filter is then applied to determine a multimodal estimation of the position of the object.
[00081] Using a particle filter approach, there is something we desire to know. In this case we desire to know the position of an object. Continuing, we can measure something related to what we want to know. Here we can collect sensor data from a plurality of sensors to arrive at a plurality of positional estimations, and we can measure, for each positional estimation, the degree to which each sensor agrees with the estimation. Thus we can measure the health or fitness of the sensor in its determination of a particular positional estimation. We also understand the relationship between what we want to know, the position of the object, and the measurement, or the fitness of a sensor.
[00082] Particle filters work by generating a plurality of hypotheses. In our case each unimodal positional estimate is a hypothesis as to the position of the object. They can be randomly generated and have a random distribution. But we have from our unimodal estimations (with some uncertainty) the position of the object on a map.
[00083] For each particle, or unimodal estimation, we can evaluate how likely it is to be the correct position. Each particle or estimation can be given a weight or fitness score as to how likely it is indeed the position of our object. Of the plurality of particles, or unimodal estimations, some are more likely than others to be accurate estimations of the position of the object. The unlikely particles or estimations are not of much use. New particles or estimations are generated, but this time they are not random; they are based on the existing particles or estimations. Thus the particles are resampled in order to evolve the most fit particles while still maintaining uncertainty by letting a few less fit particles pass through every iteration of the filter.
[00084] The new sample or generation of particles is based on a model of where we think the object is located or has moved. Again the weights or fitness of each particle (estimation) is updated and resampling occurs. The particles are again propagated in time using the model and the process repeats.
[00085] One embodiment of the present invention uses each positional state with a fitness score as a particle and thereafter applies particle filters. An algorithm scores the fitness of each sensor's ability to estimate the position of the object by way of a particle. The state of a particle represents the position (x, y, z, roll, pitch, yaw) of the object. Particle filters spawn numerous (hundreds/thousands) of these particles, which individually estimate and calculate their respective new states when a new sensor reading is observed. With the information about new states, each particle is assigned a weight by a specific cost criterion for that sensor, and only the fittest particles survive an iteration. This approach allows multimodal state estimation where (as an example) 80% of the amassed particles will contribute to the most certain position of the object while others can be at a different position. Hence, the density of these particles governs the certainty of the state the robot is in using a particle filter approach.
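A compressed, assumption-laden sketch of that loop follows: particles carry a full pose, are weighted against a per-sensor cost criterion, and only the fittest tend to survive each resampling. The cost function shown is a stand-in, not the disclosed criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, motion, observation, sensor_cost):
    """particles: (N, 6) array of poses (x, y, z, roll, pitch, yaw)."""
    # 1) Predict: every particle advances by the motion model plus noise.
    particles = particles + motion + rng.normal(0.0, 0.05, particles.shape)
    # 2) Weight: a sensor-specific cost criterion scores each particle
    #    against the new sensor reading; low cost means high fitness.
    weights = np.exp(-sensor_cost(particles, observation))
    weights /= weights.sum()
    # 3) Resample: fit particles multiply while unfit ones tend to die out,
    #    yet the sampling still lets a few less fit particles pass through.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Stand-in cost: squared distance from each particle's (x, y) to a fix.
cost = lambda p, z: ((p[:, :2] - z) ** 2).sum(axis=1)
particles = rng.normal(0.0, 1.0, size=(1000, 6))
particles = particle_filter_step(particles, np.zeros(6), np.array([1.0, 0.0]), cost)
```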
[00086] Particle filter methodology is often used to solve nonlinear filtering problems arising in signal processing and Bayesian statistical inference. The filtering problem consists of estimating the internal states of dynamical systems when partial observations are made and random perturbations are present in the sensors as well as in the dynamical system. The objective is to compute the conditional probability (a.k.a. the posterior distribution) of the states of some Markov process, given some noisy and partial observations.
[00087] Particle filtering methodology uses a genetic type mutation-selection sampling approach, with a set of particles (also called individuals, or samples) to represent the posterior distribution of some stochastic process given some noisy and/or partial observations. The state-space model can be nonlinear and the initial state and noise distributions can take any form required. Particle filter techniques provide a well-established methodology for generating samples from the required distribution without requiring assumptions about the state-space model or the state distributions.
[00088] One particular technique used by the present invention is a Rao-Blackwellized Particle Filter (RBPF). RBPF is a specific type of particle filter algorithm that allows integration of unimodal and multimodal type systems. According to one embodiment of the present invention, a unique state set is defined for the particles of the filter (sensor failure modes are a state of the particle so that the particle can predict failure modes, which also drives down the weight for the particle to survive). In most cases, RBPF is used for estimating object state(s) using one type of sensor input, but it can be used, as in the case of the present invention, with multiple types of sensors feeding into the same estimation system for tighter coupling and more robust failure detection of any particular sensor.
[00089] In addition to assessing the state of each positional sensor, the present invention utilizes various schemes to enhance the multimodal positional estimation of an object. Figure 3 shows a high level architecture of one embodiment of the present invention. The invention utilizes, in one embodiment, a distributed positioning setup in which a multimodal module 330 receives inputs and updates from an onboard unimodal estimator 320. The unimodal estimator 320 separately receives positional estimations from each of a plurality of sensors 310. Using data received from the unimodal estimator 320, the multimodal estimator 330 can provide corrections to processing ongoing in the unimodal estimator 320. For example, if it is determined by the multimodal estimator 330 that a GPS sensor is experiencing degraded accuracy due to multipath or interference, the multimodal estimator 330 can convey to the unimodal estimator that RF reception generally appears degraded. Accordingly, the unimodal estimator may devalue or degrade the positional estimation of UWB or other sensors that are similar in operation to the GPS sensor. This data is then used to update a sensor's probability of failure or degraded operation (defining a sensor "heatmap") from prior information for future position evaluations. Thus, each particle can use noisy sensor data to estimate its location using history from the sensor heatmap.
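A hedged sketch of how such a sensor "heatmap" might be maintained; the exponential smoothing constant and the (sensor, cell) keying are assumptions made only for illustration:

```python
from collections import defaultdict

class SensorHeatmap:
    """Tracks each sensor's historical probability of failure or degraded
    operation, keyed by sensor name and a coarse position cell."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha                    # smoothing factor (assumed)
        self.p_fail = defaultdict(float)      # (sensor, cell) -> probability

    def record(self, sensor, cell, failed):
        key = (sensor, cell)
        observation = 1.0 if failed else 0.0
        # Exponentially weighted update toward the newest observation.
        self.p_fail[key] += self.alpha * (observation - self.p_fail[key])

    def failure_probability(self, sensor, cell):
        return self.p_fail[(sensor, cell)]

heatmap = SensorHeatmap()
heatmap.record("gps", cell=(12, 7), failed=True)   # e.g. a multipath episode
```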
[00090] The present invention also uses range measurements to both moving and stationary positioning landmarks as long as the position of the landmark is known. One element of the invention is that even when no fixed landmark is within view (or perhaps none is present at all), the presence of moving landmarks (e.g. other cars and trucks, other robots, other mobile handheld devices) can serve to provide positioning references. Each of these can contribute to a coherent position estimate for the group. In this scheme each module/entity is essentially given a vote on its own position, and each module can also contribute to a collaborative assessment of the validity of other modules' position estimates. The APS dynamically balances dependence on active range modules (i.e. UWB active ranging tags) with the use of passive landmarks (i.e. RFID tags) and organic features (i.e. an actual natural landmark or obstacle) that can be perceived through use of LiDAR, cameras, radar, etc. APS can use all of these or any combination and provides a systematic means for combining the ranges to these various landmarks.
[00091] Each category of landmark has a filtering mechanism specific to that category. After the filtering is finished, the value of each range estimate can be determined by comparing multiple estimates. There are multiple steps to ascertaining the value of each estimate: a) comparison to the previous N recent range readings from the particular sensor (once adjusted as per recent motion); b) comparison to the previous M recent range readings from the particular sensor category (once adjusted as per recent motion); and c) comparison between disparate landmark categories. The current invention provides a standardized means for incorporating all this disparate information without needing to modify the algorithm. The operational benefit is that a single system can utilize a spectrum of different landmarks depending on the environment, situation and type of vehicle. Another advantage is that environmental obscurants or obstacles which interfere with one type of landmark (i.e. visual) will not interfere with others (i.e. a UWB tag).
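Steps a) and b) above might be sketched roughly as follows, with illustrative window sizes N and M and an assumed residual tolerance:

```python
from collections import deque

class RangeValidator:
    """Checks a motion-adjusted range reading against recent per-sensor
    and per-category history (steps a and b); tolerances are assumed."""

    def __init__(self, n_recent=5, m_category=20, tolerance_m=0.5):
        self.sensor_history = deque(maxlen=n_recent)      # this sensor only
        self.category_history = deque(maxlen=m_category)  # whole category
        self.tolerance_m = tolerance_m

    def validate(self, range_reading, motion_adjustment):
        adjusted = range_reading - motion_adjustment
        consistent = True
        for history in (self.sensor_history, self.category_history):
            if history:
                mean = sum(history) / len(history)
                if abs(adjusted - mean) > self.tolerance_m:
                    consistent = False
        self.sensor_history.append(adjusted)
        self.category_history.append(adjusted)
        return consistent
```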
[00092] Another scheme uses range-based measurements to fixed landmarks of known position using 2D scans, but preferably involves depth imagery, as it is much more useful for calculating position and especially orientation. Input for this category can come from 2D or 3D RADAR depth imagery, LiDAR 2D or 3D scans, 2D or 3D stereo vision data and any other means that provides a 2D or 3D depth image that can be used for positioning. Of these, the use of UWB depth imagery represents an important component of the APS and is innovative as a component of positioning in general. All of the depth imagery is filtered against previous depth imagery just as was the case for the range-based positioning (discussed in the previous section).
[00093] Within the depth imagery category, each sensor modality has an appropriate filtering mechanism tailored to that modality (i.e. LiDAR, UWB radar, stereo vision, etc.). After the filtering is finished, a map matching algorithm is used to match the current scan into a semi-permanent 3D model of the local environment. The output is also fed into one or more separate map-matching modules that can then use the enhanced position estimate to detect change, based on contrasting the new scans with enhanced positions against the existing map. This is essentially a form of rolling spatial memory used to track motion and orientation on the fly, identify objects moving in the environment and calculate the validity of each new depth image. The validity of each new depth image can be determined in a number of ways: a) comparison to the previous N recent depth scans from the particular sensor, once adjusted as per recent motion; b) comparison to the previous M recent depth scans from the particular sensor category (e.g. LiDAR, stereo vision, UWB radar depth imagery sensor), once adjusted as per recent motion; and c) comparison of the depth image to other modalities. Thus, the current invention provides a standardized means for incorporating all this disparate depth imagery without needing to modify the APS algorithm. The operational benefit is that a single system can utilize a spectrum of different landmarks depending on the environment, situation and type of vehicle. Another advantage is that environmental obscurants or obstacles which interfere with the use of one depth scan (i.e. vegetation that obstructs a LiDAR) will not interfere with others (i.e. a UWB radar depth imagery system can see through vegetation).
[00094] The use of map matching over multiple time steps allows APS to calculate changes in 2D or 3D motion and orientation in reference to the persistent 3D model. Also, in the face of positioning uncertainty, APS can evaluate positioning hypotheses within the temporal and spatial context of the ongoing 3D depth image. Thus, APS provides a way to combine depth imagery from different sensor modalities into a single approach. Just as multiple sensor modalities can produce peer to peer ranges within the APS (see previous section), it is also possible for multimodal 3D depth images to be incorporated by the APS system. The multimodal estimator can use range and bearing measurements of a LiDAR system simultaneously with landmark positional estimates from SLAM and from the UWB radar depth imagery. 2D or 3D "fiducials" (artificial targets selected and used based on their ability to be easily detected within the depth imagery) can be located in the environment and used to further feature identification. Just as active tags provide a guaranteed means to do peer to peer ranging (see the previous section), the fiducials provide a means to facilitate motion estimation and positioning based on the depth imagery.
[00095] Dead reckoning is a module within the APS scheme algorithms. This module is distinct from the map-matching and range-based modules, but within the APS framework it is able to use the position and motion estimates output by the other modules in order to identify errors and improve accuracy. The dead reckoning module fuses and filters wheel encoders, inertial data and compass information to produce an estimate of motion and position. The dead-reckoning module's estimate of motion is usually excellent and updates more quickly and more efficiently, computationally, than any other module. However, the position estimate of the dead reckoning module, if used independently of the other modules, will drift indefinitely. Consequently, APS uses the motion output of the dead-reckoning module to fill in the temporal and spatial "gaps" which may occur between identified features and landmarks. It also may fill gaps between successful depth image scan matches. This need may occur when features are not available, such as in a wide open field. Also, the output of the dead reckoning module can be accessed by the algorithms for trilateration and map-matching described above. This allows those other modules to recognize certain kinds of sensor failure or erroneous position calculations that can sometimes occur if a landmark is replaced or misplaced or if multiple areas within the environment have similar depth characteristics that introduce uncertainty. Additional schemes can be developed based on sensor capability and characteristics to identify sensor failure and to recalibrate or optimize existing sensor estimations.
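A minimal planar dead-reckoning step, assuming fused wheel-encoder distance and compass heading (a real module would also blend inertial data), illustrates why small per-step errors integrate into unbounded drift:

```python
import math

def dead_reckon_step(x, y, encoder_distance_m, compass_heading_rad):
    """Advance the local-frame position by one dead-reckoning step.
    Each step's small encoder and heading errors are integrated into the
    position, which is why the estimate drifts without external fixes."""
    x += encoder_distance_m * math.cos(compass_heading_rad)
    y += encoder_distance_m * math.sin(compass_heading_rad)
    return x, y

x, y = 0.0, 0.0
for distance, heading in [(1.0, 0.0), (1.0, 0.02), (1.0, 0.05)]:
    x, y = dead_reckon_step(x, y, distance, heading)
```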
[00096] As illustrated above, the multimodal estimation of the present invention assumes each sensor will, in varying conditions, fail. No sensor provides perfect information. Each sensor's output is an estimation of an object's actual position, and the accuracy of that estimation varies. This is distinct from modeling the noise or inaccuracy of the sensor; rather, it addresses cases where the estimation of position is simply incorrect. Thus there are conditions in which the sensor has failed and is providing an incorrect position even though there is no such indication. A question thus becomes: when has an estimation, which appears to be sound, failed?
[00097] The multimodal approach of the present invention maintains a sustained belief that there are several acceptable positions at any one instant of time. The present invention maintains all positional beliefs until evidence eliminates one or more beliefs. Moreover, an iterative process continually reassesses each estimation and the correlation of each estimation to narrow down the options of position using, in one embodiment, historical data related to sensor failure or degradation.
[00098] Figures 4A and 4B provide a simple rendition of the multimodal estimator's ability to continually assess and resolve sensor failure. Figure 4 shows the historical path of an object 410. The object's position is identified at four discrete positions by a plurality of sensors. Initially the position of the object 410 is estimated by the unimodal sensors to be within the depicted circle 415. Along the path exist landmarks or other features that enable one or more sensors to determine the object's position. In this case assume that four transmitters 420, 440, 445, 460 are positioned at known positions along the proposed path. At each of the four positions indicated in Figure 4, the object receives range and bearing information from one or more of these transmitters. One skilled in the relevant art will recognize that in other sensor configurations range only may be provided, or that the towers are objects which can be detected using an optical or laser sensor. As the object 410 moves from the first position 415 to the second position 425, the position estimation determined by the two towers is correlated with dead reckoning data. A similar occurrence exists as the object 410 moves from the second position 425 to the third position 435. And while in this depiction the positional estimations are compared at discrete positions, in operation the comparison of various sensor estimations is iterative and continual.
[00099] However, as the object moves from the third position 435 to the last position, tension within the multimodal estimator is identified. The range and bearing information received from the uppermost tower 445, the lower tower 460 and the dead reckoning information do not agree. Two possible solutions 465, 470 are identified which lie outside acceptable tolerances. According to one embodiment of the present invention, historical data can assist in maintaining the health of the system and in identifying sensors that have failed and are thereafter devalued. For example, odometer information and the range/bearing information for the object 410 in the third position 435 agreed sufficiently that no tension occurred. For the last position, a third source or third positional estimation from the lower tower 460 conflicts with information received from other sensors. Said differently, the unimodal estimations differ. The system of the present invention, using historical data, can more favorably consider the dead reckoning data and that from the upper tower 445 rather than the new information from the lower tower 460. Thus the unimodal observation based on data from the lower tower 460 is considered to have failed and is disregarded. Moreover, the present invention can assess historical data that may indicate that when the object is in its current position, data from the lower tower 460 is unreliable. The APS system resolves that the upper final position 470 is a more accurate representation of the object's actual position. And the historical heatmap is updated based on this failed sensor.
[000100] Historical analysis of positional estimations can assist in the determination of whether a sensor or sensors have failed or are likely to fail if the object is moving toward a particular position. Turning back to the last example, if no historical data had been available, each alternative final position 470, 465 would be equally likely. The present invention assesses both the probability that a particular sensor will fail and the probability of sensor failure given that another sensor has failed in the past.
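Both probabilities can be estimated empirically from a log of past outcomes, as in the following sketch; the log format is invented for illustration:

```python
def failure_probabilities(log, sensor_a, sensor_b):
    """log: list of dicts mapping sensor name -> bool (True = failed).
    Returns P(A fails) and P(A fails | B failed), estimated empirically."""
    if not log:
        return 0.0, 0.0
    a_failures = sum(1 for entry in log if entry.get(sensor_a, False))
    b_failed = [entry for entry in log if entry.get(sensor_b, False)]
    ab_failures = sum(1 for entry in b_failed if entry.get(sensor_a, False))
    p_a = a_failures / len(log)
    p_a_given_b = ab_failures / len(b_failed) if b_failed else 0.0
    return p_a, p_a_given_b

history = [{"gps": True, "uwb": True}, {"gps": False, "uwb": True},
           {"gps": False, "uwb": False}]
print(failure_probabilities(history, "gps", "uwb"))  # (0.33..., 0.5)
```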
[000101] Figure 5 presents a high level flow chart for a methodology, according to one embodiment of the present invention, to combine unimodal and multimodal estimation to determine an object's position. The Adaptive Positioning System of the present invention begins by receiving 510 sensor data from a plurality of sensors. Using that information, a unimodal estimation 520 of the object's state is created, along with a degree of uncertainty based on sensor measurements and historical data. For example, each type of sensor may have differing levels of accuracy or differing ability to estimate an object's position. In addition, the certainty of the sensor to provide optimal data may be further reduced based on the object's current position, such as a GPS system in an urban environment. There is an inherent uncertainty to the GPS system's ability to provide the object's position, and that uncertainty is larger based on a historical understanding of its ability to perform in an urban setting.
[000102] The present invention thereafter evaluates 530 the fitness of each state estimation using multimodal techniques. One unique aspect of the present invention is to combine unimodal and multimodal positional estimation to provide an accurate and reliable positional estimation. This determination 540 of the most likely positional state of the object is gained by considering the fitness or health of each individual state estimation.
[000103] It will be understood by one of reasonable skill in the relevant art that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can, in one embodiment, be implemented by computer program instructions. These computer program instructions may be loaded onto a computer or other programmable apparatus to produce a machine such that the instructions that execute on the computer or other programmable apparatus create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a non-transitory computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed in the computer or on the other programmable apparatus to produce a computer implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
[000104] Accordingly, blocks of the flowchart illustrations support combinations of means for performing the specified functions and combinations of steps for performing the specified functions. It will also be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.
[000105] As suggested above, each sensor provides a prediction as to the position of the object. Figure 6 is a flowchart of a methodology, according to one embodiment of the present invention, for predicting the state of an object using a unimodal estimator. As before, the process begins with the object receiving 610 sensor data. A plurality of positional sensors associated with the object can seek and gain information to ascertain the object's position.
[000106] Before a positional estimation is determined, the APS predicts 620 where the object is likely to be located, or its state, using measurement models for each sensor and historical data. For example, a dead reckoning sensor system models the object's movement based on speed and time. If the object is, at its last observation, moving at 1 m/s and the new observation is one second later, the APS would predict that the object has moved 1 meter.
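In code, the constant-velocity prediction from this example might be sketched as:

```python
def predict_position(x, y, vx, vy, dt):
    """Constant-velocity motion model: an object last seen moving at
    1 m/s, observed one second later, is predicted to have moved 1 m."""
    return x + vx * dt, y + vy * dt

predicted = predict_position(0.0, 0.0, 1.0, 0.0, 1.0)  # -> (1.0, 0.0)
```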
[000107] With a prediction in hand, the system estimates 630 the position and state of the object as well as any uncertainty that may exist. Turning back to the prior example, the APS expects the object to have moved 1 meter but the new observation estimates that the vehicle has moved 1.1 meters. Uncertainty exists as to whether the new estimation is correct or whether the prediction is correct. Thus for each positional estimation, historical data is updated 640 and used in future estimations.
[000108] Multimodal estimation ties in with the unimodal estimations of sensor data to provide a consensus of where the object is located. Figure 7 provides a basic graphical representation of a multimodal approach to adaptive positioning according to one embodiment of the present invention. The figure presents a plurality of unimodal positional estimations 710 as would be determined using the processes described herein. Each of the unimodal estimations may represent a positional estimate based on a variety of sensor systems. The upper line 720 is one representation of a multimodal combination of the estimations. In this simple case, the grouping of unimodal estimations where the multimodal curve peaks 730 marks the most likely positional estimate.
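A minimal one-dimensional sketch of that graphical idea sums Gaussian kernels centered on each unimodal estimate and takes the peak of the resulting multimodal curve; the kernel bandwidth is an assumption:

```python
import numpy as np

def multimodal_peak(unimodal_positions, bandwidth=0.5, grid_points=1000):
    """The most likely position is where the sum of per-sensor Gaussian
    kernels (the multimodal curve of Figure 7) reaches its peak."""
    positions = np.asarray(unimodal_positions, dtype=float)
    grid = np.linspace(positions.min() - 3.0, positions.max() + 3.0, grid_points)
    curve = np.exp(-0.5 * ((grid[:, None] - positions) / bandwidth) ** 2).sum(axis=1)
    return grid[np.argmax(curve)]

# Three of four unimodal estimates cluster near 10, so the peak lands there.
print(multimodal_peak([9.8, 10.1, 10.0, 14.2]))
```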
[000109] As one of reasonable skill in the relevant art will appreciate, this is a very simple illustration of a multimodal approach to adaptive positioning estimation. However, as shown in the flowchart that follows, the basic concept presented in Figure 7 is applicable. Figure 8 is a flowchart for one multimodal embodiment for positional estimation according to the present invention. In this rendition a particle filter is applied to determine a multimodal estimation of an object's position. The process begins with the creation 810 of a particle multimodal state. For each particle within the multimodal state, the particle's fitness is evaluated 820 using the unimodal process.
[000110] The process then inquires 830 as to whether each particle's fitness is above a certain threshold. From the particles that were established at the beginning, M fit particles remain for multimodal estimation. The unfit particles are removed 890. The system also inquires whether there are additional particles 880 to replace those that have been removed, and by doing so the system will eventually gain fit particles for the multimodal estimation.
[000111] Sensor failure is identified 850 by comparing the fitness of the particles remaining to those particles that have been removed. A particle is evaluated based on a defined cost function that evaluates the fitness of a particle. This cost derives from the deviation of the particle state from the "most fit" state of a particle in the current pool. These states are the pose of the vehicle (x, y, z, roll, pitch, yaw) and can also include sensor failure modes. For example, GPS will have a binary failure mode, fit or multi-path; thus, if an unfit particle has predicted a state with GPS in multi-path while particles from the most densely populated region (fit particles) do not match, then that particle will have a lower probability of existence after this iteration.
[000112] With sensor failures identified, a graph or map of sensor failure or sensor degradation is prepared. Historical information related to the failure or degradation of a sensor is updated 860 for future use with the unimodal or multimodal process. The particles of the multimodal estimation are thereafter reformed 870 to maintain fit particles for positional estimation.
[000113] This set of "fit" particles then predicts the next state of the robot using information about the kinematics of the vehicle, the previous state, the sensor heatmap and the historical trail from maps with landmarks as described above, and then evaluates this prediction against the newly calculated robot state when any new data is received from the plurality of sensors on board the system. Hence, in every iteration each particle is forced to predict a new state using known data and is then evaluated for fitness when new sensor data is received.
[000114] In addition, the present invention uses the performance of the particle filter in predicting the correct state of the system to learn from its own failures. This information can then also be used to update the sensor heatmap for better future predictions.
[000115] Another aspect of the present invention is behavioral integration of sensor failure probability. Certain positional sensors operate better than others in certain conditions. For example, a GPS estimation requires a clear line of sight between the receiver located on the object and four satellites orbiting overhead. The same is true with respect to trilateration from ranging transmitters. It is well known that GPS signals are degraded in canyons and urban environments. The primary reasons are either the lack of a clear, line-of-sight signal path due to obstructions, or else a condition known as "multipath". Multipath errors result from the reception of two or more instances of the same signal, with each instance having a different time of flight. The signal "bounces" off (i.e., is reflected from) terrain or structures. The receiver does not know which signal is truly a direct line-of-sight reception and which is a reception of a signal that has been reflected and thus possesses a longer, incorrect time of flight. Based on the position certainty value, which is an output of the present invention, the behavior orchestration system can cue behaviors that address position uncertainty and also adjust behavior for safety and performance. When position certainty is low, the system can initiate behaviors such as NeedToStop, AvoidAreaWithNoTags, AvoidAreaWithPoorPositionHistory, SlowDownTillPositionKnown, SpinTillHeadingFixed, RaiseTagToReestablishUWBConnection or FindNearestLandmark.
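Using the behavior names listed above, a hedged sketch of such behavior cueing might look as follows; the certainty thresholds are invented for illustration:

```python
def cue_behaviors(position_certainty, heading_known=True, tags_in_range=True):
    """Map the APS position certainty output to uncertainty-handling
    behaviors. The threshold values here are illustrative only."""
    behaviors = []
    if position_certainty < 0.2:
        behaviors.append("NeedToStop")
    elif position_certainty < 0.5:
        behaviors.append("SlowDownTillPositionKnown")
        if not heading_known:
            behaviors.append("SpinTillHeadingFixed")
        if not tags_in_range:
            behaviors.append("RaiseTagToReestablishUWBConnection")
        behaviors.append("FindNearestLandmark")
    return behaviors

print(cue_behaviors(0.35, heading_known=False))
```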
[000116] Similarly, a landscape devoid of discrete characteristics may render LiDAR or SLAM ineffective or at least degraded. One embodiment of the present invention uses historical degradation or sensor failure information to modify an object's path to optimize continued sensor success. Figure 9 is a top view rendition of a proposed path of an object using the Adaptive Positioning System of the present invention that uses sensor heatmaps generated using observations from the unimodal and multimodal state estimators. The object is assigned the task of moving from point A 910 to point B 990. Historically, several areas between the two points have been identified as experiencing sensor failure. In this case objects traversing the lower area 920 have experienced failure of one type of positional sensor. The area on the upper portion of the page represents a similar region of failure 940, but one that is associated with a different type of positional sensor. Lastly, the third area 930 immediately above the starting point represents degradation or failure of yet a third type of sensor. Each area of sensor failure represents a gradient of risk. In this case the center of the area has a higher likelihood of sensor failure than the outer borders. One of reasonable skill in the relevant art will appreciate that each of the areas of risk may possess different gradients and levels of severity.
[000117] The most direct route from point A 910 to point B 990 is a straight line 925. Such a path, however, would take it directly through an area of known sensor failure. One aspect of the present invention is to integrate the impact of unimodal positional sensor failures, based on historical or otherwise obtained risk data, into mission parameters. In the present example, a route 925 fashioned between the areas of risk minimizes positional sensor failure. And while the selected route traverses the lowermost area of risk 930, the present invention can assess the risk to the multimodal estimation of position as low. In this case, since the risk of sensor failure occurs near the beginning of the path, dead reckoning and other sensors are still extremely accurate and thus mitigate the loss of, for example, UWB positional determination.
[000118] Figures 10 and 11 present flowcharts depicting examples of the methodology which may be used to adaptively estimate the position of an object using multimodal estimation.
[000119] One methodology of the Adaptive Positioning System of the present invention begins 1005 with receiving 1010 sensor data from each of a plurality of positional sensors. From the data, the unimodal estimator determines 1015 an estimated position of an object for each sensor. Each of these positional estimations is maintained 1020 for each instant of time. Environmental factors are identified 1030 and considered as the system correlates 1035 each determined position.
[000120] At this point the methodology inquires 1040 whether any of the positional determinations is outside of a predetermined correlation factor. If none are outside of the predetermined correlation factor, the process returns to the beginning, receiving new data 1010 and determining new positional estimates 1015. When a sensor is found to be outside of a correlation factor, a determination is made whether the sensor's estimate is degraded or the sensor has failed. In doing so, a probability of failure is determined for each positional sensor to correctly identify 1045 one or more features used to make the positional determination. One of reasonable skill in the art will recognize that the features each sensor uses to make a positional determination vary.
[000121] The method thereafter determines a probability of failure by the sensor to correctly measure 1050 a feature used to make a positional determination. Using these probabilities, the Adaptive Positioning System of the present invention, in this embodiment, identifies for each sensor whether sensor failure 1055 has occurred.
[000122] Responsive to a determination of the failure of a particular sensor 1060, the Adaptive Positioning System of the present invention filters out that positional determination and eliminates its contribution to the overall assessment of the object's estimated position. With the failed sensor removed from consideration, the process begins anew and again makes a determination as to whether this or other positional sensors have failed.
[000123] Figure 11 is a flowchart of another method embodiment for integration of the Adaptive
Positioning System of the present invention with an object's behavior. The process begins 1105 with the determination 1110 of a plurality of estimated positions based on a plurality of positional sensors. With this multimodal rendition of positional information, historical sensor failure data is retrieved 1120 for each of the positional sensors.
[000124] The historical failure data is correlated with the current position of the object 1140 and with the mission objectives 1150. Based on the historical positional sensor failure data, the process concludes 1195 with the mission objective behavior being modified 1160 to minimize positional sensor failure while maintaining mission objectives.
[000125] Another embodiment of the present invention combines dead reckoning (e.g. wheel odometers, inertial data, etc., or a combination thereof) and high precision time-of-flight ranging transmitters to localize precisely in environments with sparse update rates and/or sparse radio/transmitter density. According to one embodiment of the present invention, an object determines a local frame of reference using dead reckoning along the path traveled through space and thereafter performs a best-fit optimization of the path against available range data based on signal transmission/reception, predefined transmitter locations and a time stamp associated with the ranging transmission.
[000126] According to one embodiment, the locations of the transmitters are known/defined a priori and each transmitter is uniquely identified by a time-of-flight conversation. When the object is within a predetermined distance of a transmitter's predefined location, based on an estimation of the object's position from dead reckoning, the object initiates a time-of-flight conversation with that transmitter. The conversation is time stamped and paired with the dead reckoning local frame position. When a threshold number of conversations have taken place, a non-linear least-squares optimization is run to update the transformation of the robot's local frame of reference to a global frame of reference. Each time-of-flight conversation defines a ranging sphere (shell) on which the object could have been at the corresponding time stamp. For each time stamped range-position pair, a distance is defined from the local frame position estimate to the ranging sphere (distance from position to beacon minus the range estimate). A non-linear least-squares optimization is then run to find the dead reckoning path through the global frame with the least distance-based cost.
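A condensed sketch of that optimization using SciPy's least-squares solver; for brevity the local-to-global transform is reduced to a pure translation, a simplification of the full transform described above:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_local_to_global(dr_positions, beacon_positions, measured_ranges):
    """Find the (tx, ty) shift of the dead-reckoning frame that best fits
    the time-stamped ranges. Each residual is the distance from the
    shifted local position to the beacon minus the measured range --
    i.e., the distance from the position estimate to the ranging sphere."""
    dr = np.asarray(dr_positions, dtype=float)          # (k, 2) local frame
    beacons = np.asarray(beacon_positions, dtype=float) # (k, 2) global frame
    ranges = np.asarray(measured_ranges, dtype=float)   # (k,)

    def residuals(shift):
        shifted = dr + shift                            # candidate transform
        return np.linalg.norm(shifted - beacons, axis=1) - ranges

    return least_squares(residuals, x0=np.zeros(2)).x

# Three sparse range-position pairs; the solver finds a shift consistent
# with all three ranging spheres, with no trilateration required.
shift = fit_local_to_global([(0, 0), (5, 0), (10, 0)],
                            [(1, 3), (6, 3), (11, 3)],
                            [3.0, 3.0, 3.0])
```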
[000127] Uniquely identifiable ranging transmitters with a priori known locations are used, as well as any number of dead reckoning tools such as compasses, wheel encoders, inertial sensors, gyroscopes, and fusion algorithms (e.g. an extended Kalman filter). The dead reckoning sensors provide a short-term estimate of the relative offset of the vehicle in a local space, while the ranging transmitters are used to tie this space to a global frame. Techniques today typically require both high update rates and at least four ranging radios within proximity such that trilateration can take place. The invention disclosed herein requires neither. Figure 12 is a graphical depiction of combined dead reckoning and high precision time-of-flight ranging radios used to identify an optimized path.
[000128] Figure 12 presents three paths. Path 1 1210 and path 3 1220 represent the possible limits of drift with respect to dead reckoning technology. These two outside paths are the boundaries between which the actual path exists. Within those boundary paths 1210, 1220 exists a plurality of radio transmitters 1240, 1250, 1260, 1270, 1280 with known locations. Using ranging information from a known transmitter, the actual path 1230 can be refined. As the object enters a region in which reception from a transmitter is likely, a time-of-flight range reading can occur to refine the actual location of the object. In this case path refinement occurs six times so as to produce an accurate depiction of the true path 1230.
[000129] Current ranging-radio localization techniques have a difficult time scaling to multiple vehicles or persons in the same area because of bandwidth limits. Current technology requires upwards of 40 Hz range updates from at least four ranging radios to give precise estimates of global frame motion. This alone is nearly at the update limit of the technology. This embodiment of the present invention reduces the number of required ranging radios (because trilateration techniques are not required) and significantly lowers the required update rates. Such an improvement allows the present invention to expand into environments where many more objects are localized.
[000130] Another embodiment of the present invention allows for direct calculation of the drift of the odometry frame without first estimating the position of the robot with respect to the map. The design that follows describes the 2D case (X, Y, Phi) for this Kalman filter, but extension to the 3D case (X, Y, Z, Theta, Psi, Phi) should be obvious from the description.
[000131] Typically, robot localization involves trilateration as described herein from ranging radios which are uniquely identifiable. As discussed above, robots generally have a form of odometry estimation on board wherein ranges and odometry can be fused using a Kalman filter. However, over large periods of time the vehicle odometry becomes unreliable. A need therefore exists to bring odometry (aka dead reckoning) in line with the robot's actual position.
[000132] According to one embodiment of the present invention the transform begins with the classic Kalman filter form, with the state vector capturing the drift of the odometry frame.
[000134] Recall that the initial covariance is set high so that the filter can settle.
[000136] The time-of-flight radio supplies the range measurement.
[000138] The observation function is simply the distance from the infrastructure to the likely location of the object.
[000140] Thereafter the Kalman filter updates according to the classic EKF equations. The R matrix is the single-valued variance of the range estimate. Once the algorithm has settled, multipath can be filtered out by discarding measurements with large deviations.
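The equation images referenced by paragraphs [000133] through [000139] are not reproduced in this text. A standard EKF range update consistent with the surrounding description — drift state $x = (X, Y, \Phi)$, known transmitter position $p_b$, and scalar range variance $R$ — would read as follows. This is a reconstruction under those assumptions, not the patent's verbatim equations:

$$
\begin{aligned}
\hat{x}_k^- &= f(\hat{x}_{k-1}), & P_k^- &= F_k P_{k-1} F_k^{\top} + Q,\\
z_k &= \lVert p_b - p(x_k) \rVert + v_k, & v_k &\sim \mathcal{N}(0, R),\\
h(x) &= \lVert p_b - p(x) \rVert, & H_k &= \left.\tfrac{\partial h}{\partial x}\right|_{\hat{x}_k^-},\\
K_k &= P_k^- H_k^{\top}\,\bigl(H_k P_k^- H_k^{\top} + R\bigr)^{-1}, & &\\
\hat{x}_k &= \hat{x}_k^- + K_k\,\bigl(z_k - h(\hat{x}_k^-)\bigr), & P_k &= (I - K_k H_k)\, P_k^-,
\end{aligned}
$$

where $p(x)$ is the object position implied by applying the drift state $x$ to the raw odometry estimate, and $P_0$ is initialized large so the filter can settle.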
[000141] Figure 13 is a high level block diagram of a system for mobile localization of an object having an object positional frame of reference using sparse time-of-flight data and dead reckoning. The mobile localization system 1300 includes a dead reckoning module 1310, a time-of-flight module 1320 and a localization module 1330. The modules are communicatively coupled to one another to establish the positional location of an object.
[000142] As previously discussed, the dead reckoning module 1310 maintains a fundamental localization estimation of the object based on odometry while the time-of-flight module 1320 receives a ranging signal from a UWB transceiver 1350 of a known location. In one embodiment of the present invention the time-of-flight module 1320 includes a list of known locations of a plurality of UWB transceivers 1350. Upon communication from the dead reckoning module 1310 that the object is within a predefined estimated range of a transceiver, the time-of-flight module 1320 establishes one or more conversations with the transceiver to ascertain range information. After several conversations, range and bearing information can be determined, which is thereafter passed to the localization module 1330 to refine the object's positional location.
[000143] Figure 14 is one method embodiment for mobile localization of an object using sparse time-of-flight data and dead reckoning. The method begins 1405 by creating 1410 a dead reckoning local frame of reference wherein the dead reckoning local frame of reference includes an estimation of localization of the object with respect to known locations of one or more Ultra Wide Band transceivers. As the object moves along its path the method, using the dead reckoning local frame of reference, determines 1420 when the estimated location of the object is within a predetermined range of one or more of the Ultra Wide Band transceivers. Thereafter a "conversation" is initiated 1440 with at least one of the Ultra Wide Band transceivers within the predetermined range. Upon receiving 1450 a signal from one or more UWB transceivers within the predetermined range, the object collects 1460 range data between the object and the one or more UWB transceivers. Using multiple conversations to establish accurate range and bearing information, the system updates 1470 the object positional frame of reference based on the collected range data, ending 1495 the process.
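A minimal sketch of this gating loop in Python follows. The beacon table, gating distance, and the radio.converse exchange are illustrative assumptions, not the disclosed interface:

```python
import numpy as np

# Known UWB transceiver locations keyed by unique identifier (illustrative values).
UWB_BEACONS = {"uwb-01": np.array([0.0, 0.0]),
               "uwb-02": np.array([25.0, 10.0])}

GATING_RANGE = 30.0  # assumed predetermined range, in meters

def collect_ranges(dr_estimate, radio, pairs):
    """One pass of the Figure 14 loop: when the dead-reckoning estimate is
    within the predetermined range of a known transceiver, initiate a
    time-of-flight conversation and bank the time-stamped range/position pair."""
    for beacon_id, beacon_xy in UWB_BEACONS.items():
        if np.linalg.norm(dr_estimate - beacon_xy) < GATING_RANGE:
            rng, stamp = radio.converse(beacon_id)  # hypothetical ToF exchange
            pairs.append((stamp, dr_estimate.copy(), beacon_id, rng))
    return pairs
```

The banked pairs are exactly the inputs consumed by the best-fit optimization sketched earlier.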
[000144] As suggested above and in a preferred embodiment, the present invention can be
implemented in software. Software programming code which embodies the present invention is typically accessed by a microprocessor from long-term, persistent storage media of some type, such as a flash drive or hard drive. The software programming code may be embodied on any of a variety of known media for use with a data processing system, such as a diskette, hard drive, CD-ROM, or the like. The code may be distributed on such media, or may be distributed from the memory or storage of one computer system over a network of some type to other computer systems for use by such other systems. Alternatively, the programming code may be embodied in the memory of the device and accessed by a microprocessor using an internal bus. The techniques and methods for embodying software programming code in memory, on physical media, and/or distributing software code via networks are well known and will not be further discussed herein.
[000145] Generally, program modules include routines, programs, objects, components, data
structures and the like that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the invention can be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. The invention may
also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. An exemplary system for implementing the invention, shown in Figure 15, includes a general purpose computing device 1500 in the form of a conventional personal computer, a personal communication device or the like, including a processing unit 1510, a system memory 1515, and a system bus that communicatively joins various system components, including the system memory 1515, to the processing unit. The system bus may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system memory generally includes read-only memory (ROM) 1520, random access memory (RAM) 1540 and a non-transitory storage medium 1530. A basic input/output system (BIOS) 1550, containing the basic routines that help to transfer information between elements within the personal computer, such as during start-up, is stored in ROM. The personal computer may further include a hard disk drive for reading from and writing to a hard disk, and a magnetic disk drive for reading from or writing to a removable magnetic disk. The hard disk drive and magnetic disk drive are connected to the system bus by a hard disk drive interface and a magnetic disk drive interface, respectively. The drives and their associated computer-readable media provide non-volatile storage of computer readable instructions, data structures, program modules and other data for the personal computer. Although the exemplary environment described herein employs a hard disk and a removable magnetic disk, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that is accessible by a computer may also be used in the exemplary operating environment. The computing system may further include a user interface 1560 to enable users to modify or interact with the system, as well as a sensor interface 1580 for direct collection of sensor data and a transceiver 1570 to output the data as needed.
[000147] According to one embodiment of the present invention, a system for mobile localization of an object having an object positional frame of reference using sparse time-of-flight data and dead reckoning can include,
• one or more transmitters;
• a dead reckoning module wherein the dead reckoning module creates a dead reckoning local frame of reference;
• a time-of-flight module wherein, responsive to the object being within range of one of the one or more transmitters based on the dead reckoning local frame of reference, the time-of-flight module receives a signal from the transmitter and collects range data between the object and the transmitter; and
• a localization module communicatively coupled to the dead reckoning module and the time-of-flight module, wherein the localization module updates the object positional frame of reference based on the collected range data.
[000148] Additional features of the above mentioned system can include,
• wherein each transmitter includes a unique identifier;
• responsive to the object being within a threshold distance of a predefined
transmitter location, initiating an exchange of signals with the transmitter;
• wherein the exchange of signals includes a time stamp associated with the
conversation and paired with the dead reckoning local frame of reference position;
• wherein the localization module performs a "best-fit" optimization path based on range data, the predefined transmitter location and the time stamp;
• wherein the best fit optimization is a "least-squares" optimization; and
• responsive to conducting a plurality of signal exchanges between the transmitter and the object, performing a non-linear, "least-squares" optimization to update the object positional frame of reference.
[000149] According to one embodiment of the present invention, a method for mobile localization of an object having an object positional frame of reference using sparse time-of-flight data and dead reckoning can include the steps,
• creating a dead reckoning local frame of reference wherein the dead reckoning local frame of reference includes an estimation of localization of the object with respect to known locations of one or more Ultra Wide Band (UWB) transceivers;
• determining when the estimation of localization of the object is within a predetermined range of one or more of the Ultra Wide Band transceivers;
• initiating a conversation with one of the one or more of the UWB transceivers within the predetermined range;
• receiving a signal from the one of the one or more UWB transceivers within the predetermined range;
• collecting range data between the object and the one of the one or more UWB transceivers within the predetermined range; and
• updating the object positional frame of reference based on the collected range data.
[000150] Additional features of this method can include,
• wherein each UWB transceiver includes a unique identifier;
• wherein the signal includes a time stamp associated with the conversation and paired with the dead reckoning local frame of reference;
• wherein updating includes performing a "best-fit" optimization path based on collected range data, known locations of the one or more Ultra Wide Band (UWB) transceivers and the time stamp;
• wherein the "best-fit" optimization is a "least-squares" optimization; and
• wherein updating includes performing a non-linear, "least-squares" optimization to update the object positional frame of reference.
[000151] Embodiments of the present invention as have been herein described may be
implemented with reference to various wireless networks and their associated communication devices. Networks can also include mainframe computers or servers, such as a gateway computer or application server (which may access a data repository). A gateway computer serves as a point of entry into each network. The gateway may be coupled to another network by means of a communications link. The gateway may also be directly coupled to one or more devices using a communications link. Further, the gateway may be indirectly coupled to one or more devices. The gateway computer may also be coupled to a storage device such as data repository.
[000152] These and other implementation methodologies for estimating an object's position can be successfully utilized by the Adaptive Positioning System of the present invention.
Although the invention has been described and illustrated with a certain degree of particularity, it is understood that the present disclosure has been made only by way of example and that numerous changes in the combination and arrangement of parts can be resorted to by those skilled in the art without departing from the spirit and scope of the invention.
[000153] As will be understood by those familiar with the art, the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Likewise, the particular naming and division of the modules, managers, functions, systems, engines, layers, features, attributes, methodologies, and other aspects are not
mandatory or significant, and the mechanisms that implement the invention or its features may have different names, divisions, and/or formats. Furthermore, as will be apparent to one of ordinary skill in the relevant art, the modules, managers, functions, systems, engines, layers, features, attributes, methodologies, and other aspects of the invention can be implemented as software, hardware, firmware, or any combination of the three. Of course, wherever a component of the present invention is implemented as software, the component can be implemented as a script, as a standalone program, as part of a larger program, as a plurality of separate scripts and/or programs, as a statically or dynamically linked library, as a kernel loadable module, as a device driver, and/or in every and any other way known now or in the future to those of skill in the art of computer
programming. Additionally, the present invention is in no way limited to implementation in any specific programming language, or for any specific operating system or environment. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
Claims
1. A system for mobile localization of an object having an object positional frame of reference using sparse time-of-flight data and dead reckoning, the system comprising:
one or more transmitters;
a dead reckoning module wherein the dead reckoning module creates a dead reckoning local frame of reference;
a time-of-flight module wherein, responsive to the object being within range of one of the one or more transmitters based on the dead reckoning local frame of reference, the time-of-flight module receives a signal from the transmitter and collects range data between the object and the transmitter; and a localization module communicatively coupled to the dead reckoning module and the time-of-flight module, wherein the localization module updates the object positional frame of reference based on the collected range data.
2. The system for mobile localization of an object using sparse time-of-flight data and dead reckoning according to claim 1, wherein each transmitter includes a unique identifier.
3. The system for mobile localization of an object using sparse time-of-flight data and dead reckoning according to claim 1, responsive to the object being within a threshold distance of a predefined transmitter location, initiating an exchange of signals with the transmitter.
4. The system for mobile localization of an object using sparse time-of-flight data and dead reckoning according to claim 3, wherein the exchange of signals includes a time stamp associated with the conversation and paired with the dead reckoning local frame of reference position.
5. The system for mobile localization of an object using sparse time-of-flight data and dead reckoning according to claim 1, wherein the localization module
performs a "best-fit" optimization path based on range data, the predefined transmitter location and the time stamp.
6. The system for mobile localization of an object using sparse time-of-flight data and dead reckoning according to claim 5, wherein the best fit optimization is a "least-squares" optimization.
7. The system for mobile localization of an object using sparse time-of-flight data and dead reckoning according to claim 1, wherein responsive to conducting a plurality of signal exchanges between the transmitter and the object, performing a non-linear, "least-squares" optimization to update the object positional frame of reference.
8. A method for mobile localization of an object having an object positional frame of reference using sparse time-of-flight data and dead reckoning, the method comprising the steps:
creating a dead reckoning local frame of reference wherein the dead reckoning local frame of reference includes an estimation of localization of the object with respect to known locations of one or more Ultra Wide Band (UWB) transceivers;
determining when the estimation of localization of the object is within a predetermined range of one or more of the Ultra Wide Band transceivers;
initiating a conversation with one of the one or more of the UWB transceivers within the predetermined range;
receiving a signal from the one of the one or more UWB transceivers within the predetermined range;
collecting range data between the object and the one of the one or more UWB transceivers within the predetermined range; and
updating the object positional frame of reference based on the collected range data.
9. The method for mobile localization of an object according to claim 8, wherein each UWB transceiver includes a unique identifier.
10. The method for mobile localization of an object according to claim 8, wherein the signal includes a time stamp associated with the conversation and paired with the dead reckoning local frame of reference.
11. The method for mobile localization of an object according to claim 10, wherein updating includes performing a "best-fit" optimization path based on collected range data, known locations of the one or more Ultra Wide Band (UWB) transceivers and the time stamp.
12. The method for mobile localization of an object according to claim 11, wherein the "best-fit" optimization is a "least-squares" optimization.
13. The method for mobile localization of an object according to claim 8, wherein updating includes performing a non-linear, "least-squares" optimization to update the object positional frame of reference.
Applications Claiming Priority (8)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201562169689P | 2015-06-02 | 2015-06-02 | |
| US62/169,689 | 2015-06-02 | | |
| US201662332234P | 2016-05-05 | 2016-05-05 | |
| US62/332,234 | 2016-05-05 | | |
| US15/149,064 US20170023659A1 (en) | 2015-05-08 | 2016-05-06 | Adaptive positioning system |
| US15/149,064 | 2016-05-06 | | |
| US15/170,665 | 2016-06-01 | | |
| US15/170,665 US10365363B2 (en) | 2015-05-08 | 2016-06-01 | Mobile localization using sparse time-of-flight ranges and dead reckoning |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2016196717A2 (en) | 2016-12-08 |
| WO2016196717A3 (en) | 2017-05-04 |
Family
ID=57441694
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2016/035396 WO2016196717A2 (en) | Mobile localization using sparse time-of-flight ranges and dead reckoning | 2015-06-02 | 2016-06-02 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2016196717A2 (en) |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7202776B2 | 1997-10-22 | 2007-04-10 | Intelligent Technologies International, Inc. | Method and system for detecting objects external to a vehicle |
| US9168419B2 | 2012-06-22 | 2015-10-27 | Fitbit, Inc. | Use of gyroscopes in personal fitness tracking devices |

2016-06-02: WO PCT/US2016/035396 filed as WO2016196717A2 (en) — active, Application Filing
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106842155A | 2017-01-17 | 2017-06-13 | 北京工业大学 | A kind of wireless fixed transmission source localization method based on space interpolation and cluster analysis |
| CN106842155B | 2017-01-17 | 2020-02-07 | 北京工业大学 | Wireless fixed emission source positioning method based on spatial interpolation and cluster analysis |
| CN108681487A | 2018-05-21 | 2018-10-19 | 千寻位置网络有限公司 | The distributed system and tuning method of sensing algorithm parameter optimization |
| CN108681487B | 2018-05-21 | 2021-08-24 | 千寻位置网络有限公司 | Distributed system and method for adjusting and optimizing sensor algorithm parameters |
| EP4033727A1 | 2021-01-26 | 2022-07-27 | Deutsche Telekom AG | Method for providing adapted positioning information towards at least one consuming application regarding a plurality of objects comprising at least one specific object, system or telecommunications network for providing adapted positioning information, positioning information consuming application, program and computer-readable medium |
| US11946745B2 | 2021-01-26 | 2024-04-02 | Deutsche Telekom AG | Providing adapted positioning information towards at least one consuming application regarding a plurality of objects |
| CN114526735A | 2022-04-24 | 2022-05-24 | 南京航空航天大学 | Method for determining only ranging initial relative pose of unmanned aerial vehicle cluster |
| CN114526735B | 2022-04-24 | 2022-08-05 | 南京航空航天大学 | Method for determining initial relative pose of unmanned aerial vehicle cluster only by ranging |
| CN117849818A | 2024-03-08 | 2024-04-09 | 山西万鼎空间数字有限公司 | Unmanned aerial vehicle positioning method and device based on laser radar and electronic equipment |
Also Published As
| Publication number | Publication Date |
|---|---|
| WO2016196717A3 (en) | 2017-05-04 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16804380; Country of ref document: EP; Kind code of ref document: A2 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 16804380; Country of ref document: EP; Kind code of ref document: A2 |