WO2013140133A1 - Generating navigation data - Google Patents
Generating navigation data
- Publication number
- WO2013140133A1 (PCT/GB2013/050613)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- node
- data set
- experience data
- nodes
- experience
- Prior art date
Links
- 230000000007 visual effect Effects 0.000 claims abstract description 14
- 238000000034 method Methods 0.000 claims description 37
- 238000013519 translation Methods 0.000 claims description 6
- 230000014616 translation Effects 0.000 claims description 6
- 238000004590 computer program Methods 0.000 claims description 3
- 238000013507 mapping Methods 0.000 description 16
- 230000004807 localization Effects 0.000 description 13
- 238000013459 approach Methods 0.000 description 10
- 230000008859 change Effects 0.000 description 4
- 238000012545 processing Methods 0.000 description 4
- 230000008569 process Effects 0.000 description 3
- 238000012360 testing method Methods 0.000 description 3
- 238000004891 communication Methods 0.000 description 2
- 238000013480 data collection Methods 0.000 description 2
- 230000007774 longterm Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000009466 transformation Effects 0.000 description 2
- 230000003190 augmentative effect Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000001932 seasonal effect Effects 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
- G01C11/06—Interpretation of pictures by comparison of two or more pictures of the same area
- G01C11/08—Interpretation of pictures by comparison of two or more pictures of the same area the pictures not being supported in the same relative position as when they were taken
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0268—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
- G05D1/0274—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/192—Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
- G06V30/194—References adjustable by an adaptive method, e.g. learning
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B29/00—Maps; Plans; Charts; Diagrams, e.g. route diagram
- G09B29/003—Maps
- G09B29/004—Map manufacture or repair; Tear or ink or water resistant maps; Long-life maps
Definitions
- the present invention relates to generating navigation data.
- a Visual Odometry (VO) system continuously consumes the live image stream obtained by cameras onboard a vehicle.
- a series of localisers attempt to localise each live frame in their own experience.
- In epochs A and C, both localisers successfully localise the frames in their experiences, so the VO output is forgotten.
- In epoch B, localisation is only successful in one saved experience (experience 2), which is deemed too few, so the VO output is saved in a new experience.
- The collection of all experiences is known as a plastic map.
- The plastic mapping system does not adopt a single frame of reference. When new experiences are stored, they live in their own frame of reference and are only locally accurate.
- An important aspect of the inventors' previous work, referred to generally as plastic mapping, is its ability to relocalise in previously saved experiences. If this were not possible, then even if the starting position in an experience were known, the experience would only be useful until localisation failure, even though it may later become relevant again. The failure could occur because of a short-term change in visual appearance, or because the vehicle takes a slight detour from the previous route.
- Figure 2 illustrates two live VO systems 202, 204 running and localising against a saved experience 206.
- A localisation failure 208 occurs halfway along.
- the vehicle takes a small detour 210.
- the second half of the saved experience 206 is still relevant to the live VO, but if re-localisation is not possible it will not be used.
- Embodiments of the present invention are intended to address at least some of the problems discussed above. Embodiments are intended to increase links between experiences in an offline step to generate navigation data that is useable by the vehicle when it next goes out so that it can better leverage its prior information and reduce unnecessary experience creation.
- a method of generating navigation data including or comprising: receiving a new experience data set relating to a new experience capture, a said experience data set including a set of nodes, each said node comprising a series of visual image frames taken over a series of time frames; receiving at least one stored experience data set relating to at least one previous experience capture; obtaining a candidate set of said nodes from the at least one stored experience data set that potentially matches a said node in the new experience data set, and checking if a said node in the candidate set matches the node in the new experience data set, wherein if a result of the checking is positive then data relating to the matched nodes is added to a place data set useable for navigation, the place data set indicating that said nodes in different said experience data sets relate to a same place.
- the new experience data set may comprise data received from a sensor operating online on a vehicle and steps of the method are performed subsequently offline, e.g. on a separate computing device.
- the step of obtaining the candidate set of nodes can include: finding any said node from the at least one stored experience data set that matches a said node in the new experience data set, the nodes both corresponding to a same time reference k, and adding a said node from the stored experience data set with a time reference k+1 to the candidate set.
- the step of obtaining the candidate set of nodes can include: finding any said node from the place data set that matches a said node in the new experience data set, and adding a neighbouring said node from the place data set to the candidate set.
- the step of obtaining the candidate set of nodes can include: finding any said node from the at least one stored experience data set that potentially matches a said node in the new experience data set according to an image matching algorithm, and adding the found node to the candidate set.
- the image matching algorithm may comprise FAB-MAP.
- the step of obtaining the candidate set of nodes can include: finding any said node from the at least one stored experience data set that potentially matches a said node in the new experience data set according to geographical location indicators, and adding the found node to the candidate set.
- a said geographical location indicator may be based on geographical data produced during capture of the experience data, e.g. using a Global Positioning System device.
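- By way of illustration only, the following minimal Python sketch assembles a candidate set from the four sources described above (temporal succession, place neighbours, appearance matching such as FAB-MAP, and GPS proximity); the helper names and data accessors are assumptions made for the sketch, not the patent's own implementation.
```python
def build_candidate_set(new_node, prev_matches, places, fabmap_suggest, gps_neighbours):
    """Assemble stored nodes that might show the same place as new_node.

    prev_matches: stored nodes already matched to the previous node (time k-1)
    of the new experience. All helpers and attributes are assumed interfaces.
    """
    candidates = set()

    # 1. Temporal propagation: the successor (time reference k+1) of each
    #    stored node that matched the previous new-experience node.
    for m in prev_matches:
        succ = m.experience.node_after(m)        # assumed accessor
        if succ is not None:
            candidates.add(succ)

    # 2. Place propagation: neighbours of any place containing a previous match.
    for m in prev_matches:
        for place in places.containing(m):       # assumed accessor
            candidates.update(place.neighbouring_nodes())

    # 3. Appearance suggestions, e.g. from a FAB-MAP style image matcher.
    candidates.update(fabmap_suggest(new_node))

    # 4. Geographical proximity using GPS fixes recorded during capture.
    candidates.update(gps_neighbours(new_node))

    return candidates
```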
- the step of checking if a said node in the candidate set matches the node in the new experience data set can include: creating a window of nodes ({m_c}_W) either side of a said node in the candidate set; computing transforms from the node (m*) in the new experience data set to each said node in the window; and, if all the transforms are valid, computing translations from the node (m*) to each said node in the window, wherein, if a local minimum of the translations is found, the node in the window with the smallest translation is taken to match the node in the new experience data set.
- a said experience data set may further include data relating to landmarks and the method may further include: computing a transform from a said image frame of a said node in the new experience data set to said nodes in the stored experience data set having landmarks from a region surrounding a previous position of the node in the new experience data set.
- the step of computing the transform can be performed in parallel for a plurality of said image frames in the new experience data set.
- the method may include: identifying said landmarks in a said experience data set using a FAST corner extractor technique, and using a Binary Robust Independent Elementary Features (BRIEF) technique to associate corresponding said landmarks in different said experience data sets.
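- A minimal sketch of such a landmark pipeline using OpenCV's FAST detector and BRIEF extractor (the BRIEF extractor requires the opencv-contrib package) is shown below; it is an illustrative stand-in rather than the embodiment's actual code.
```python
import cv2

def extract_and_match(img_a, img_b):
    """Detect FAST corners, describe them with BRIEF, match with Hamming distance."""
    # FAST expects grayscale input; convert if a colour image was passed.
    if img_a.ndim == 3:
        img_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    if img_b.ndim == 3:
        img_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)

    fast = cv2.FastFeatureDetector_create(threshold=20)
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()  # opencv-contrib

    kp_a = fast.detect(img_a, None)
    kp_b = fast.detect(img_b, None)
    kp_a, des_a = brief.compute(img_a, kp_a)
    kp_b, des_b = brief.compute(img_b, kp_b)

    # Binary descriptors are matched with Hamming distance; cross-check for robustness.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    return kp_a, kp_b, sorted(matches, key=lambda m: m.distance)
```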
- the place data set may be used by a navigation device of a vehicle.
- the vehicle may be at least partially autonomous.
- apparatus configured to generate navigation data, the apparatus including: a device configured to receive a new experience data set relating to a new experience capture, a said experience data set including a set of nodes, each said node comprising a series of visual image frames taken over a series of time frames; a device configured to receive at least one stored experience data set relating to at least one previous experience capture; a device configured to obtain a candidate set of said nodes from the at least one stored experience data set that potentially matches a said node in the new experience data set, and a device configured to check if a said node in the candidate set matches the node in the new experience data set, wherein if a result of the checking is positive then data relating to the matched nodes is added to a place data set useable for navigation, the place data set indicating that said nodes in different said experience data sets relate to a same place.
- computer program elements comprising: computer code means to make the computer execute methods substantially as described herein.
- the element may comprise a computer program product.
- Figure 2 illustrates cases of unnecessary experience creation in the plastic mapping approach
- Figure 3 is a schematic illustration of a vehicle and a remote processing device configured to generate navigation data according to an embodiment of the present invention
- Figure 4 illustrates how topological places can aid experience-driven re-localisation
- Figure 5 is a flowchart showing steps performed by a processing device configured to generate navigation data
- Figures 6 to 9 are graphs illustrating example system performance.
- A plastic map, PM, is made up of j experiences. Each experience is the saved output from the VO system.
- Each experience can be represented graphically, where the stereo frames are represented as nodes and metric transformation information can describe how they are connected.
- Separate experiences can be formed into a single larger graph G, which contains all experiences. Inter-experience edges can then be added between experiences to indicate that the nodes refer to the same physical place in the world.
- Each associated image is of the same place in the environment, so edges can be created between the corresponding nodes.
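- One way such a graph could be realised is sketched below using networkx; the node keys, edge attributes and accessor names are assumptions made for the sketch.
```python
import networkx as nx

def build_plastic_map_graph(experiences, places):
    """Combine all experiences into one graph G.

    Intra-experience edges carry the 6-DoF transform between consecutive frames;
    inter-experience edges mark nodes that depict the same physical place.
    Attributes such as node.frame, node.transform_from_prev and
    place.member_nodes are assumed for illustration.
    """
    G = nx.Graph()
    for exp in experiences:
        for k, node in enumerate(exp.nodes):
            G.add_node((exp.id, k), frame=node.frame)
            if k > 0:
                # Metric edge: transform from node k-1 to node k within this experience.
                G.add_edge((exp.id, k - 1), (exp.id, k), transform=node.transform_from_prev)
    for place in places:
        # Place edges: link every pair of nodes known to be the same physical place.
        members = list(place.member_nodes)     # e.g. [(experience_id, k), ...]
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                G.add_edge(members[i], members[j], place=place.id)
    return G
```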
- Figure 3 shows schematically a VO system comprising at least one sensor 302 connected to a computing device 304 on board a vehicle 306.
- the sensor(s) in the illustrated embodiment may be a stereoscopic camera, although it will be appreciated that alternative (active or passive) sensors, such as a laser scanner, could be used.
- the vehicle travels along a surface and the sensor captures visual image frames of the territory around the vehicle, including the surface and any surrounding objects or landmarks.
- the image frames are transferred to the on board computing device 304 and may be at least temporarily stored by it and are typically indexed by time frames.
- Although the example vehicle is a land-based vehicle, it will be appreciated that in alternative embodiments the vehicle could be water- or air-borne.
- the computing device 304 further includes a communications interface 307 that allows it to exchange data with a remote computing device 310, which includes a processor 312, memory 314 and its own communications interface 316.
- the image data may be transferred directly from the vehicle/sensor(s) 302 to the remote computing device.
- the image frames can be used to create data representing an experience ⁇ , as discussed above.
- the memory includes data 320 representing previously captured experiences, as well as the new experience data 321 and an application 322 configured to process experience data in order to produce enhanced navigation data that can be used in future, e.g. by a computing device on board the survey vehicle 306 (or the navigation system of any other at least partially autonomous vehicle that may traverse the same location in future).
- each node k represents the associated stereo frame F_k and is linked to the previous node by a 6 degree of freedom transform t_k.
- the VO system runs continuously on the live frame stream.
- a new experience is created and the output from the VO system is stored in this experience; an experience can therefore comprise a chain of camera nodes, the inter-node transforms and the associated 3D features.
- each localiser runs over a saved experience; given a live frame F_k, its task is to calculate the transformation from that frame to a node in the experience. It operates in a similar way to the live VO system, except that the proposed landmark set comes from the saved experience, not the previous frame F_{k-1}.
- the landmarks are taken from the local region surrounding the previous position in the experience.
- a landmark may be considered to be a surrounding one if it appears within a fixed window of frames either side of the previous position in the experience (it will be understood that such frames can contain features/landmarks that are an arbitrary distance away).
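- The fixed frame-number window can be expressed very simply; the sketch below assumes each node stores its landmark list and that nodes are held in capture order, and the window half-width used here is illustrative.
```python
def landmarks_in_window(experience_nodes, prev_index, half_width=25):
    """Gather proposed landmarks from nodes within +/-half_width frames of the
    previous localised position in the saved experience."""
    lo = max(0, prev_index - half_width)
    hi = min(len(experience_nodes), prev_index + half_width + 1)
    landmarks = []
    for node in experience_nodes[lo:hi]:
        landmarks.extend(node.landmarks)   # assumed per-node landmark list
    return landmarks
```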
- the localiser does not attempt to add or update landmarks in either the current VO output or the saved experience. It is completely passive in terms of its impact on both.
- the plastic map stores many experiences that cover an unbounded spatial area, in addition to capturing different appearances of the same area, so they will not all be relevant all the time. As experiences are not stored in a single global frame of reference, it is not possible to integrate the local transforms to estimate the position in one experience from another. In practice these problems can be considered to be the same question: given a new frame F_k and the set of localisers that were successful for frame F_{k-1}, which localisers should be used for F_k? The previous set of successful localisers is a good start, but new localisers that become appropriate should also be included.
- Figure 4 shows how places can aid experience driven re-localisation.
- Two experiences 402, 404 are successfully localised in when processing a VO feed 405.
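- A sketch of how place links could be used to choose and re-seed localisers for the next frame is given below; the interfaces (places.containing, localiser.seed_at) are assumed for illustration only.
```python
def select_localisers(successful_exps, current_nodes, places, all_localisers):
    """Choose which localisers to run for the next live frame.

    successful_exps: experiences whose localisers succeeded on the previous frame.
    current_nodes[e]: node index of experience e we are currently localised against.
    places.containing(...) and localiser.seed_at(...) are assumed interfaces.
    """
    to_run = {e: all_localisers[e] for e in successful_exps}
    for e in successful_exps:
        # Any place linking our current node to nodes in other experiences lets us
        # restart the corresponding localisers at those linked nodes.
        for place in places.containing((e, current_nodes[e])):
            for other_e, other_node in place.member_nodes:
                if other_e not in to_run:
                    all_localisers[other_e].seed_at(other_node)
                    to_run[other_e] = all_localisers[other_e]
    return to_run
```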
- The inventors therefore propose a new system for increasing the number of places in an offline step, which can lead to better-connected experiences that can be more easily utilised on the next visit. Given a new experience, it is desirable to maximally connect it to all previous experiences that are relevant. In one embodiment this is done with a two-stage algorithm. For each node, a candidate set of nodes from at least one previous experience that might be the same place is first proposed. A robust localisation is then used to ensure they really are the same place. This is shown using the above notation in the algorithm below. If a suitable match has been found then the appropriate place P_z is updated.
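- The algorithm itself is not reproduced in this extract; the Python-style sketch below captures the two stages as described, with hypothetical helper functions standing in for candidate proposal and robust matching.
```python
def discover_places(new_experience, places, propose_candidates, robust_match):
    """Two-stage offline place discovery.

    Stage 1: propose_candidates(node) returns stored nodes that might be the
             same place (temporal, place, appearance or GPS cues).
    Stage 2: robust_match(node, candidates) returns the matching stored node,
             or None if no candidate passes the window-based check.
    Both helpers, and the places accessor methods, are assumed interfaces.
    """
    for node in new_experience.nodes:
        candidates = propose_candidates(node)
        matched = robust_match(node, candidates)
        if matched is None:
            continue
        # Update the place that already contains the matched stored node,
        # or create a new place linking the two nodes.
        place = places.containing_node(matched)
        if place is None:
            place = places.create()
            place.add(matched)
        place.add(node)
    return places
```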
- FIG. 5 illustrates schematically an example of this method.
- data representing a new experience recorded by the vehicle 306 is received, e.g. data that has been transmitted wirelessly from the vehicle to the remote computing device 310, on which the application 322 is executing the method.
- the format and presentation of the data can vary.
- data representing at least one stored experience is received by the computing device 310, e.g. retrieved from its memory 314. This data will normally, but not necessarily, be in the same format as the new experience data.
- the application 322 will also use place data that includes links between nodes in more than one of the experiences in the stored experience data that are known to be the same place. This place data may be part of the known experience data or may be a separate data structure/file.
- the intention of the application 322 is to find links between nodes in the stored experience data and nodes in the new experience data. Matching a node from a new experience against all previous nodes is very expensive and error prone because of the very large number of mostly unrelated previous nodes. Therefore, it is desirable to provide the robust matcher (step 508 below) with a candidate set of nodes that are nearly in the right place and mostly relevant, in order to reduce computation time and the burden on the robust matcher.
- a set of candidate nodes from the stored experience data is selected on the basis that they may match the node in the new experience data (this new experience data node may initially be the first node in the data set or may be initially selected in some other way).
- Candidate matching nodes can be selected in one or more ways.
- the application 322 then aims to ensure that a link is being created between two nodes that really are the same place.
- a robust matching technique is used to reduce incorrect links. Given the new node m* and a candidate node, a small window of nodes (e.g. ±25 frames either side of the candidate node), denoted {m_c}_W, is taken, and the transform from m* to each node in the window is computed. Next, assuming all the transforms are valid, the translation from m* to each node is computed. If a local minimum is found, i.e. m* really does pass by the window, then the candidate node in {m_c}_W with the smallest translation to m* is taken to be the same place.
- The associated place data is then updated at step 510. If none of the nodes in the candidate set match the new experience node then another set of candidate nodes may be produced (return to step 506), or another new experience node may be selected for processing as set out above (step 512).
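- A sketch of that window test is given below; compute_transform stands in for the localiser's frame-to-node registration and is assumed to return a 4x4 homogeneous transform, or None when registration fails. The window half-width and rejection rules follow the description above.
```python
import numpy as np

def robust_window_match(new_node, candidate_index, experience_nodes,
                        compute_transform, half_width=25):
    """Check whether new_node really passes by the candidate node.

    Transforms from new_node to every node in a +/-half_width window around the
    candidate are computed; if all are valid and the translation magnitudes have
    an interior minimum, the window node with the smallest translation is taken
    to be the same place. Returns that node, or None if the check fails.
    """
    lo = max(0, candidate_index - half_width)
    hi = min(len(experience_nodes), candidate_index + half_width + 1)
    window = experience_nodes[lo:hi]

    transforms = [compute_transform(new_node, n) for n in window]
    if any(t is None for t in transforms):
        return None                       # a failed registration rejects the match

    # Translation magnitude of each 4x4 transform (assumed layout: t[:3, 3]).
    translations = np.array([np.linalg.norm(t[:3, 3]) for t in transforms])
    best = int(np.argmin(translations))
    if best == 0 or best == len(window) - 1:
        return None                       # minimum on the window edge: trajectory does not pass by
    return window[best]
```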
- a key property of the system is that once the set of relevant localisers has been computed, localisation of the current live frame in each one is independent and so can be run in parallel. Given that the data association and trajectory estimation steps dominate the computation time, parallelising them makes it possible to process frames at 15 Hz. To achieve robust data association, Binary Robust Independent Elementary Features (BRIEF) descriptors (see M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: Binary Robust Independent Elementary Features," in European Conference on Computer Vision, September 2010) can be used, as these features are very fast to compute and match and only require a CPU.
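- Because each localiser is independent for a given live frame, the work can be farmed out to a pool of workers; a minimal sketch follows, in which the localiser interface (localise returning a pose or None) is an assumption.
```python
from concurrent.futures import ThreadPoolExecutor

def localise_frame(frame, active_localisers, max_workers=8):
    """Run every relevant localiser against the current live frame in parallel.

    Each localiser.localise(frame) is assumed to return a transform on success
    or None on failure; results keep their localiser association.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(lambda loc: (loc, loc.localise(frame)),
                                active_localisers))
    # Keep only the localisers that succeeded on this frame.
    return [(loc, pose) for loc, pose in results if pose is not None]
```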
- Feature extraction on the incoming frame is independent and can be performed once at the start.
- a dependency on a GPU makes parallelisation difficult, compared to running a CPU-only program on a multi-core or multi-process system.
- One embodiment of the VO system uses the FAST corner extractor (see E. Rosten, G. Reitmayr, and T. Drummond, "Real-time video annotations for augmented reality," in Advances in Visual Computing, LNCS 3840, December 2005, pp. 294-302) to compute points of interest. These are then passed to BRIEF descriptors to achieve fast, robust data association. Matched landmarks can be refined to sub-pixel precision using efficient second-order matching (as described in C. Mei, S. Benhimane, E. Malis, and P. Rives, "Efficient homography based tracking and 3-d reconstruction for single-viewpoint sensors," IEEE Transactions on Robotics, vol. 24, no. 6, pp. 1352-1364, Dec. 2008).
- the 6 Degrees of Freedom (DoF) transform is computed by an initial RANSAC estimation step and refined using a least squares optimisation.
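- A comparable estimate can be obtained with OpenCV's PnP-RANSAC followed by iterative refinement; the sketch below is an illustrative stand-in for the embodiment's RANSAC-plus-least-squares step and assumes stereo-triangulated 3D landmarks, their matched 2D observations in the live frame, and OpenCV 4.1 or later.
```python
import cv2
import numpy as np

def estimate_6dof(object_points, image_points, K, dist_coeffs=None):
    """Estimate the 6-DoF camera pose from 3D landmarks and their 2D matches.

    object_points: Nx3 landmark positions (e.g. from stereo triangulation).
    image_points:  Nx2 pixel observations in the live frame.
    K: 3x3 camera intrinsic matrix.
    """
    dist_coeffs = np.zeros(5) if dist_coeffs is None else dist_coeffs
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_points.astype(np.float32), image_points.astype(np.float32),
        K, dist_coeffs, reprojectionError=2.0)
    if not ok:
        return None
    # Refine the RANSAC estimate on the inlier set; Levenberg-Marquardt
    # minimisation of reprojection error stands in for the least-squares step.
    rvec, tvec = cv2.solvePnPRefineLM(
        object_points[inliers.ravel()].astype(np.float32),
        image_points[inliers.ravel()].astype(np.float32),
        K, dist_coeffs, rvec, tvec)
    return rvec, tvec
```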
- the 6 DoF transforms t_k computed by the VO system, when compared to the same relative transforms computed by the vehicle INS (which is assumed to be ground truth), have a mean error of [-0.0093, -0.0041, -0.0420] meters and [-0.0239, 0.0021, 0.0040] degrees, and a standard deviation of [0.0225, 0.0245, 0.0155] meters and [0.0918, 0.0400, 0.0383] degrees.
- a test survey vehicle performed 53 traverses of two semi-overlapping 0.7km routes around Begbroke Science Park, Oxfordshire, United Kingdom. Data was collected over a three month period at different times of day and with different weather conditions using the survey vehicle. An outer loop around the site was driven on the first 47 traverses while the last 6 traverses went via an inner loop. For illustrative purposes, the signal from the external loop closer was controlled so it only fired at 14 predefined points on each loop. The points were spaced approximately evenly along each loop.
- the above numbers are the fraction of the maximum amount of experience data that could be saved, i.e. if all VO output was stored. As experience creation is driven by localisation failure, a lower number indicates that the system is performing better, i.e. it is localised for longer and is lost less.
- Performing Refined Discovery results in a 27% and 20% improvement for N = 1 and 2, respectively, when compared to the previous plastic mapping work, Live Discovery. GPS Discovery did not perform as well as Refined Discovery. This is likely to be caused by the quality of the inter-experience edges in G being substandard due to the drift experienced by GPS.
- Figure 9 shows, for each visit, the average number of successful localisers while the system is not lost. It can be seen that both variants that process newly saved experiences between outings generally record a higher number of successful localisers for each run. Given that they also store fewer experiences, this implies they are making much better use of the information they already have stored.
- the GPS and Refined Discovery variants both spend time after a run to perform better matching of newly saved experiences and had a higher number of successful localisers. By increasing the quantity and quality of edges in G, it has been shown that it is possible to stay localised for longer and less experience data needs to be saved, as better use is made of the current information.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- General Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Automation & Control Theory (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Aviation & Aerospace Engineering (AREA)
- Electromagnetism (AREA)
- Business, Economics & Management (AREA)
- Educational Technology (AREA)
- Educational Administration (AREA)
- Mathematical Physics (AREA)
- Databases & Information Systems (AREA)
- Navigation (AREA)
- User Interface Of Digital Computer (AREA)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2013237211A AU2013237211B2 (en) | 2012-03-19 | 2013-03-13 | Generating navigation data |
BR112014023197-4A BR112014023197B1 (en) | 2012-03-19 | 2013-03-13 | NAVIGATION DATA GENERATION METHOD, APPLIANCE CONFIGURED TO GENERATE NAVIGATION DATA, AND, NON TRANSIENT COMPUTER-READABLE MEDIA |
US14/386,278 US9689687B2 (en) | 2012-03-19 | 2013-03-13 | Generating navigation data |
EP13713477.1A EP2828620B1 (en) | 2012-03-19 | 2013-03-13 | Generating navigation data |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1204750.2A GB2500384B (en) | 2012-03-19 | 2012-03-19 | Generating and using navigation data |
GB1204750.2 | 2012-03-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013140133A1 (en) | 2013-09-26 |
Family
ID=46052126
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2013/050613 WO2013140133A1 (en) | 2012-03-19 | 2013-03-13 | Generating navigation data |
Country Status (6)
Country | Link |
---|---|
US (1) | US9689687B2 (en) |
EP (1) | EP2828620B1 (en) |
AU (1) | AU2013237211B2 (en) |
BR (1) | BR112014023197B1 (en) |
GB (1) | GB2500384B (en) |
WO (1) | WO2013140133A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2533295A (en) * | 2014-12-15 | 2016-06-22 | The Chancellor Masters And Scholars Of The Univ Of Oxford | Localising portable apparatus |
US9689687B2 (en) | 2012-03-19 | 2017-06-27 | The Chancellor Masters And Scholars Of The University Of Oxford | Generating navigation data |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2573247C1 (en) * | 2014-09-02 | 2016-01-20 | Открытое акционерное общество "Завод им. В.А. Дегтярева" | Hardware-software complex |
DE102015006014A1 (en) * | 2015-05-13 | 2016-11-17 | Universität Bielefeld | Soil cultivation device and method for its navigation and swarm of tillage equipment and methods for their joint navigation |
DE102016211805A1 (en) * | 2015-10-09 | 2017-04-13 | Volkswagen Aktiengesellschaft | Fusion of position data using poses graph |
CN109974707B (en) * | 2019-03-19 | 2022-09-23 | 重庆邮电大学 | Indoor mobile robot visual navigation method based on improved point cloud matching algorithm |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5961571A (en) * | 1994-12-27 | 1999-10-05 | Siemens Corporated Research, Inc | Method and apparatus for automatically tracking the location of vehicles |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5144685A (en) * | 1989-03-31 | 1992-09-01 | Honeywell Inc. | Landmark recognition for autonomous mobile robots |
DE69631458T2 (en) * | 1995-10-04 | 2004-07-22 | Aisin AW Co., Ltd., Anjo | Car navigation system |
US6826472B1 (en) * | 1999-12-10 | 2004-11-30 | Tele Atlas North America, Inc. | Method and apparatus to generate driving guides |
CN1306251C (en) * | 2001-03-21 | 2007-03-21 | 三洋电机株式会社 | Navigator |
US8538803B2 (en) * | 2001-06-14 | 2013-09-17 | Frank C. Nicholas | Method and system for providing network based target advertising and encapsulation |
KR100703444B1 (en) * | 2003-06-03 | 2007-04-03 | 삼성전자주식회사 | Device and method for downloading and displaying a images of global position information in navigation system |
US7383123B2 (en) * | 2003-06-03 | 2008-06-03 | Samsung Electronics Co., Ltd. | System and method of displaying position information including an image in a navigation system |
US8289390B2 (en) * | 2004-07-28 | 2012-10-16 | Sri International | Method and apparatus for total situational awareness and monitoring |
JP2007108257A (en) * | 2005-10-11 | 2007-04-26 | Alpine Electronics Inc | Map mobile device |
CN101046387A (en) * | 2006-08-07 | 2007-10-03 | 南京航空航天大学 | Scene matching method for raising navigation precision and simulating combined navigation system |
US8855819B2 (en) * | 2008-10-09 | 2014-10-07 | Samsung Electronics Co., Ltd. | Method and apparatus for simultaneous localization and mapping of robot |
US7868821B2 (en) * | 2009-01-15 | 2011-01-11 | Alpine Electronics, Inc | Method and apparatus to estimate vehicle position and recognized landmark positions using GPS and camera |
US8374390B2 (en) * | 2009-06-24 | 2013-02-12 | Navteq B.V. | Generating a graphic model of a geographic object and systems thereof |
US20140172864A1 (en) * | 2011-07-08 | 2014-06-19 | Annie Shum | System and method for managing health analytics |
GB2500384B (en) | 2012-03-19 | 2020-04-15 | The Chancellor Masters And Scholars Of The Univ Of Oxford | Generating and using navigation data |
-
2012
- 2012-03-19 GB GB1204750.2A patent/GB2500384B/en not_active Expired - Fee Related
-
2013
- 2013-03-13 EP EP13713477.1A patent/EP2828620B1/en active Active
- 2013-03-13 AU AU2013237211A patent/AU2013237211B2/en active Active
- 2013-03-13 BR BR112014023197-4A patent/BR112014023197B1/en active IP Right Grant
- 2013-03-13 US US14/386,278 patent/US9689687B2/en active Active
- 2013-03-13 WO PCT/GB2013/050613 patent/WO2013140133A1/en active Application Filing
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5961571A (en) * | 1994-12-27 | 1999-10-05 | Siemens Corporated Research, Inc | Method and apparatus for automatically tracking the location of vehicles |
Non-Patent Citations (1)
Title |
---|
M. CUMMINS ET AL: "FAB-MAP: Probabilistic Localization and Mapping in the Space of Appearance", THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, vol. 27, no. 6, 1 June 2008 (2008-06-01), pages 647 - 665, XP055066085, ISSN: 0278-3649, DOI: 10.1177/0278364908090961 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9689687B2 (en) | 2012-03-19 | 2017-06-27 | The Chancellor Masters And Scholars Of The University Of Oxford | Generating navigation data |
GB2533295A (en) * | 2014-12-15 | 2016-06-22 | The Chancellor Masters And Scholars Of The Univ Of Oxford | Localising portable apparatus |
WO2016097690A1 (en) * | 2014-12-15 | 2016-06-23 | The Chancellor Masters And Scholars Of The University Of Oxford | Localising portable apparatus |
AU2015365714B2 (en) * | 2014-12-15 | 2019-08-01 | Oxa Autonomy Ltd | Localising portable apparatus |
US10436878B2 (en) | 2014-12-15 | 2019-10-08 | The Chancellor Masters And Scholars Of The University Of Oxford | Localising portable apparatus |
Also Published As
Publication number | Publication date |
---|---|
BR112014023197B1 (en) | 2021-10-19 |
GB2500384A (en) | 2013-09-25 |
US20150057921A1 (en) | 2015-02-26 |
EP2828620B1 (en) | 2020-09-09 |
AU2013237211B2 (en) | 2016-01-07 |
GB201204750D0 (en) | 2012-05-02 |
US9689687B2 (en) | 2017-06-27 |
BR112014023197A2 (en) | 2020-07-07 |
GB2500384B (en) | 2020-04-15 |
AU2013237211A1 (en) | 2014-10-02 |
EP2828620A1 (en) | 2015-01-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11313684B2 (en) | Collaborative navigation and mapping | |
JP7472235B2 (en) | DEVICE AND METHOD FOR SIMULTANEOUS LOCALIZATION AND MAPPING - Patent application | |
WO2021035669A1 (en) | Pose prediction method, map construction method, movable platform, and storage medium | |
AU2013237211B2 (en) | Generating navigation data | |
KR102053802B1 (en) | Method of locating a sensor and related apparatus | |
US8259994B1 (en) | Using image and laser constraints to obtain consistent and improved pose estimates in vehicle pose databases | |
JP2019518222A (en) | Laser scanner with real-time on-line egomotion estimation | |
CN112556685B (en) | Navigation route display method and device, storage medium and electronic equipment | |
EP3612799A1 (en) | Distributed device mapping | |
CN113190120B (en) | Pose acquisition method and device, electronic equipment and storage medium | |
Chiu et al. | Precise vision-aided aerial navigation | |
CN115267796B (en) | Positioning method, positioning device, robot and storage medium | |
Churchill et al. | Experience based navigation: Theory, practice and implementation | |
Vemprala et al. | Monocular vision based collaborative localization for micro aerial vehicle swarms | |
Moussa et al. | A fast approach for stitching of aerial images | |
Alliez et al. | Real-time multi-SLAM system for agent localization and 3D mapping in dynamic scenarios | |
CN112184906A (en) | Method and device for constructing three-dimensional model | |
Jin et al. | Multiway Point Cloud Mosaicking with Diffusion and Global Optimization | |
Park et al. | All-in-one mobile outdoor augmented reality framework for cultural heritage sites | |
CN115615436A (en) | Multi-machine repositioning unmanned aerial vehicle positioning method | |
CN113421332B (en) | Three-dimensional reconstruction method and device, electronic equipment and storage medium | |
Chakraborty et al. | Jorb-slam: A Jointly Optimized Multi-Robot Visual Slam | |
Hu et al. | Accurate fiducial mapping for pose estimation using manifold optimization | |
Indelman et al. | Real-time mosaic-aided aerial navigation: I. motion estimation | |
Chien et al. | Substantial improvement of stereo visual odometry by multi-path feature tracking |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13713477; Country of ref document: EP; Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase | Ref document number: 2013713477; Country of ref document: EP |
WWE | Wipo information: entry into national phase | Ref document number: 14386278; Country of ref document: US |
NENP | Non-entry into the national phase | Ref country code: DE |
ENP | Entry into the national phase | Ref document number: 2013237211; Country of ref document: AU; Date of ref document: 20130313; Kind code of ref document: A |
REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: 112014023197; Country of ref document: BR |
ENP | Entry into the national phase | Ref document number: 112014023197; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20140918 |