DE112011102132T5 - Method and device for image-based positioning - Google Patents

Method and device for image-based positioning

Info

Publication number
DE112011102132T5
Authority
DE
Germany
Prior art keywords
image
capture device
platform
location
image capture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
DE112011102132T
Other languages
German (de)
Inventor
Omar Pierre Soubra
Sy Bor Wang
Hongbo Teng
Gregory C. Best
Bruno M. Scherzinger
Peter Glen France
James M. Janky
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Trimble Inc
Original Assignee
Trimble Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US35842310P
Priority to US61/358,423
Application filed by Trimble Inc
Priority to PCT/US2011/041590 (WO2011163454A1)
Publication of DE112011102132T5
Application status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical means
    • G01B 11/14 Measuring arrangements characterised by the use of optical means for measuring distance or clearance between spaced objects or spaced apertures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 15/00 Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 15/00 Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
    • G01C 15/02 Means for marking measuring points
    • G01C 15/06 Surveyors' staffs; Movable markers
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42 Determining position
    • G01S 19/48 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G01S 19/49 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system whereby the further system is an inertial position system, e.g. loosely-coupled
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42 Determining position
    • G01S 19/51 Relative positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06K RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 9/00 Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K 9/00624 Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
    • G06K 9/00664 Recognising scenes such as could be captured by a camera operated by a pedestrian or robot, including objects at substantially different ranges from the camera
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods

Abstract

An image-based positioning method and apparatus, comprising capturing a first image with an image capture device (34) mounted on a platform, the first image comprising at least one object (126); moving the platform and capturing a second image with the image capture device, the second image comprising the at least one object; detecting a first image of a surface in the first image and a second image of the surface in the second image; processing the plurality of images of the object and of the surface using a combined feature-based process and surface tracking process to track the surface location (134); and finally determining the platform location by processing the results of the combined feature-based process and surface-based process (138).

Description

  • TECHNICAL FIELD
  • The technology relates to the field of navigation.
  • GENERAL PRIOR ART
  • US Serial No. 12/313,560 (hereinafter "Scherzinger") was directed to a system and method for obtaining precise position data with high measurement accuracy.
  • SUMMARY
  • This summary presents a selection of concepts that will be discussed further below in the Detailed Description. This summary is not intended to identify key or essential features of the claimed subject matter, nor to assist in determining the scope of the claimed subject matter.
  • An image-based positioning method is provided which overcomes limitations of the prior art.
  • DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the technology and, together with the description, serve to explain the principles set forth below:
  • FIG. 1 shows a GIE surveying apparatus disclosed in Scherzinger.
  • FIG. 2 illustrates an image-based positioning device of the present technology that includes an image capture device configured to capture at least one image comprising at least one object, a feature-based process, and a location determination process.
  • FIG. 3 depicts a feature-based process of FIG. 2 of the present technology performed using the georeferenced image-based process.
  • FIG. 4 illustrates a photogrammetric method of finding the distance to a camera from a known distance between two points (scale factor) and a pixel conversion to a subtended angle.
  • FIG. 5 illustrates a flowchart describing the implementation steps of the feature-based process of FIG. 2 using the georeferenced object image database and the image processing engine of FIG. 3.
  • FIG. 6 shows a device for georeferenced image-based positioning for purposes of the present technology, including a post-mounted GPS receiver, a camera mounted on the same post with its optical center aligned with the post's axis, and a GIS/survey data collector.
  • FIG. 7 illustrates a computer system configured to run the image processing engine of FIG. 3 for purposes of the present technology.
  • FIG. 8 shows an image-based positioning device that incorporates the dual-feature-based tracking process for purposes of the present technology.
  • FIG. 9 shows the image-based positioning device including the feature and surface tracking process for purposes of the present technology.
  • DETAILED DESCRIPTION
  • Reference will now be made in detail to the embodiments of the technology; related examples are illustrated in the accompanying drawings. The present technology will be described in conjunction with various embodiments, it being understood that they are not intended to limit the present technology to these embodiments. On the contrary, the present technology is intended to cover alternatives, modifications, and equivalents, which may be included within the spirit and scope of the various embodiments defined by the appended claims.
  • Furthermore, in the following detailed description, numerous specific details are set forth for a thorough understanding of the presented embodiments. However, it will be apparent to one of ordinary skill in the art that the presented embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure the aspects of the present embodiments.
  • I. "Scherzinger"
  • FIG. 1 shows the GIE survey instrument 10 disclosed in "Scherzinger".
  • II. Tracking process for individual objects
  • In one embodiment of the present technology, illustrated in FIG. 2, the image-based positioning device 30 contains an image capture device 34 configured to capture at least one image comprising at least one object; a positioning process 38; and a feature-based process 36 configured to process at least one image to track the location of at least one detected object.
  • In one embodiment of the present technology, an object may have one or more features; a feature is essentially a part of the image that can be recognized by an algorithm. Features can be points, regions, contours, abstract texture areas, or anything else. Many of the algorithms discussed herein also assume that features can be detected across multiple images (correspondences can be found), but this is not part of the definition of a feature. Finding correspondences is a feature-based process, not a property of the feature itself.
  • In one embodiment of the present technology, the image capture device 34 may be selected from the group consisting of: a digital camera; a digital video camera; a digital camcorder; a stereo digital camera; a stereo video camera; a movie camera; a depth camera; and a TV camera or similar.
  • Still referring to FIG. 2, in one embodiment of the present technology the image-based positioning device 30 further includes a platform 32.
  • In one embodiment of the present technology, the platform 32 comprises a rover.
  • In one embodiment of the present technology, the platform 32 comprises a rover RTK system.
  • In one embodiment of the present technology, the platform 32 comprises a GIS/mapping handheld.
  • Still referring to FIG. 2, in one embodiment of the present technology the coordinates of a reference position of the image capture device 34 (or of the platform 32) are determined through a GNSS positioning process 38.
  • A Global Navigation Satellite System (GNSS) process may be selected from the group consisting of: a GPS process; a GLONASS process; a combined GPS/GLONASS process; a GALILEO process; a COMPASS (Beidou Navigation System) process; an earth-based pseudolite process; or the like.
  • The Global Positioning System (GPS) is a system of satellite signal transmitters that transmits information which can be used to determine the current location of an observer and/or the time of observation. The GPS was developed by the US Department of Defense (DOD) as part of its NAVSTAR satellite program.
  • Still referring to FIG. 2, in one embodiment of the present technology, as an alternative to a GNSS process or in the case of unavailable or degraded satellite signals, the coordinates of a reference position of the image capture device 34 (or of the platform 32) are determined by a feature-based process selected from the group consisting of: inertial dead reckoning; a simultaneous localization and mapping (SLAM) process; a matchmoving process or similar image processing algorithm; and a photogrammetric process.
  • In one embodiment of the present technology, the feature-based process 36 is implemented using a simultaneous localization and mapping (SLAM) process.
  • The simultaneous localization and mapping (SLAM) process uses image sequences from one or more video cameras to identify fixed features and then creates a map of those fixed features. Two image processing methods can be used.
  • The first image processing method used in the SLAM process is image segmentation and feature extraction. SLAM uses these to identify specific objects that are known to be stationary and are thus sound reference points in three-dimensional (3D) space. Typical selections are objects with reasonably well-defined characteristics, often corners outdoors or wall fixtures of various kinds (lamps, switches, window sills or corners) indoors. These characteristics can then be processed in software to provide features for the algorithm.
  • The second image processing technique used in the SLAM process is stereo imaging, which is used to extract depth and therefore distance-to-object information. SLAM creates a map of features in a three-dimensional (3D) coordinate grid as it images them from different robot positions, while determining its own position in that grid. The mapping and self-locating process is handled by a Kalman filter that estimates all variables. In this case, range extraction is performed using stereo imaging of multiple overlapping two-dimensional (2D) images.
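  • The depth-from-stereo step described above can be sketched as follows. This is a minimal illustration using OpenCV's block matcher on a synthetic rectified stereo pair; the focal length, baseline, and matcher parameters are placeholder assumptions, not values from this disclosure.
    import numpy as np
    import cv2

    # Synthetic rectified stereo pair: a textured image shifted 8 pixels between views.
    rng = np.random.default_rng(1)
    left = rng.uniform(0, 255, (240, 320)).astype(np.uint8)
    right = np.roll(left, -8, axis=1)  # simulates a uniform 8-pixel disparity

    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point values scaled by 16

    # Depth from disparity: Z = f * B / d, with f the focal length in pixels and B the baseline in meters.
    f_pixels = 700.0    # assumed calibration value
    baseline_m = 0.12   # assumed stereo baseline
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = f_pixels * baseline_m / disparity[valid]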
  • In one embodiment of the present technology, the Simultaneous Localization and Map Making (SLAM) method employs a video camera.
  • In one embodiment of the present technology, the SLAM method employs a charge-coupled device (CCD). A CCD is a device for moving an electrical charge, typically from within the device to an area where the charge can be manipulated, such as conversion to a digital value. This is accomplished by "shifting" the signals between stages within the device one at a time. Technically, CCDs are implemented as shift registers that move charge between capacitive bins in the device, the shift allowing the transfer of charge between bins. The CCD is often integrated with an image sensor, such as a photoelectric device that generates the charge which is read out, thus making the CCD an important technique for digital image processing.
  • In one embodiment of the present technology, the Simultaneous Localization and Mapmaking (SLAM) process employs a CMOS sensor-equipped video camera.
  • In one embodiment of the present technology, the SLAM method employs a narrow field of view (FOV). For a given sensor size, this provides a higher-resolution view of a smaller total area of the visible world and would thus allow the detection of smaller objects. A wide FOV allows the camera to capture larger items or objects over a larger spatial area, but does not provide the same resolution for a given sensor. The instrument would include a SLAM processing algorithm that receives images at a fixed frame rate, or at a variable frame rate determined by the dynamics of the device, and then outputs the positions of the features it identifies and the position of the instrument, all in a coordinate frame appropriate for the application: Cartesian coordinates relative to the initial orientation of the instrument; Cartesian coordinates measured absolutely from a given origin (latitude/longitude/elevation or earth-centered earth-fixed); or spherical coordinates relative to the initial orientation of the instrument. For further reference, see: (i) Thomas Lemaire, Cyrille Berger, Il-Kyun Jung and Simon Lacroix, "Vision-Based SLAM: Stereo and Monocular Approaches", International Journal of Computer Vision 74(3), 343-364, 2007; and (ii) Moritz Köhler, Shwetak N. Patel, Jay W. Summet, Erich P. Stuntebeck and Gregory D. Abowd, Institute for Pervasive Computing, Department of Computer Science, ETH Zurich, 8092 Zurich, Switzerland, "TrackSense: Infrastructure Free Precise Indoor Positioning Using Projected Patterns".
  • In one embodiment of the present technology, the feature-based process 36 is applied using a matchmoving process. The matchmoving process involves several steps. The first step is finding and tracking objects.
  • In one embodiment of the present technology, the tracking process consists of two steps. In the first step, position and orientation references are derived from the features of the image. This step is commonly referred to as "feature recognition".
  • The second step involves solving for three-dimensional (3D) motion. In this step, the motion of the image capture device 34 (of FIG. 2) is solved for by back-projecting the transformations detected in the features of an image from the 2D image plane into an estimate of the 3D motion of the image capture device 34. More specifically, after a spot on the surface of a three-dimensional object has been captured, its position in the two-dimensional (2D) frame can be calculated by a three-dimensional (3D) projection function.
  • One can introduce the idea of an abstract camera. By definition, this abstract camera is an abstraction that contains all the parameters needed to model the image capture device 34 in a real or virtual world.
  • Therefore, an abstract camera is essentially a camera vector that has as its elements the position of the image capture device 34, its orientation, focal length, and other possible parameters that define how the image capture device 34 focuses light onto the film plane. It is unimportant how exactly this camera vector is implemented, as long as there is a compatible projection function P.
  • The projection function P takes as its input a camera vector (referred to as camera) and as a second input a vector holding the position of a three-dimensional (3D) point in space (referred to as xyz), and returns a two-dimensional (2D) point that has been projected onto a plane in front of the camera (referred to as XY). This is expressed as follows: XY = P(camera, xyz). (Equation 1)
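  • A minimal sketch of such a projection function follows, assuming a simple pinhole model with the camera vector given as a rotation matrix, a translation vector, and a focal length; this parameterization is an illustrative assumption, not the parameterization used in the patent.
    import numpy as np

    def project(camera, xyz):
        """Pinhole projection P(camera, xyz) -> XY (Equation 1)."""
        R, t, f = camera["R"], camera["t"], camera["f"]
        p_cam = R @ np.asarray(xyz) + t          # world -> camera coordinates
        x, y, z = p_cam
        return np.array([f * x / z, f * y / z])  # perspective division onto the image plane

    # Example: identity pose, unit focal length, a point one meter ahead and slightly off-axis.
    camera = {"R": np.eye(3), "t": np.zeros(3), "f": 1.0}
    XY = project(camera, [0.2, 0.1, 1.0])  # -> [0.2, 0.1]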
  • In the case of a feature projection, for example, the cameras in the individual frames i and j project the view onto a plane depending on the parameters of the camera. In this way, the features tracked in two-dimensional (2D) space correspond to true features in three-dimensional (3D) space.
  • However, the projection function transforms the true 3D feature and reduces the amount of information it contains. Without knowing the full information content of the feature, a back-projection function P' can only return a set of possible 3D points that form a line passing from the center of the camera through the projected 2D point. A similar ambiguity arises in interpreting any orientation information contained in the projected feature. The back-projection is expressed as follows: xyz ∈ P'(camera, XY) (Equation 2), or {xyz : P(camera, xyz) = XY} (Equation 3).
  • In one embodiment of the present technology, the real point xyz will remain at the same location in real space from one frame of the image to the next when the features lie on the surface of a rigid object, such as a building: (xyz)_i = (xyz)_j (Equation 4), where the subscripts i and j refer to arbitrary frames of the image sequence being analyzed. It follows that: P'(camera_i, XY_i) ∩ P'(camera_j, XY_j) ≠ {} (Equation 5)
  • Since the value of XY has been determined for all frames through which the feature was traced by the tracking program, one can solve the back-projection function between any two frames, as long as P'(camera_i, XY_i) ∩ P'(camera_j, XY_j) is a small set. The set of possible camera vector pairs that solve the equation at i and j is denoted C_ij: C_ij = {(camera_i, camera_j) : P'(camera_i, XY_i) ∩ P'(camera_j, XY_j) ≠ {}} (Equation 6)
  • It follows (from Equation 6) that there is a set of camera vector pairs C_ij for which the intersection of the back-projections of the two points XY_i and XY_j is a non-empty set centered around the stationary point xyz.
  • Furthermore, it follows from Equation 6 that for each position of the image capture device 34 in space there is a set of corresponding parameters (orientation, focal length, etc.) that will image a one-point feature in exactly the same way. However, since the set of camera vector pairs C_ij has an infinite number of elements, a one-point feature is not sufficient for determining the actual position of the image capture device 34.
  • The more tracking information exists, in the form of additional point features or additional orientation information, the more accurately the actual position of the image capture device 34 can be determined.
  • For sets of points {(xyz)_i,0, ..., (xyz)_i,n} and {(xyz)_j,0, ..., (xyz)_j,n}, where i and j still refer to frames and n is the index of one of the many tracked features, a set of camera vector pairs can be derived.
  • Using this approach of multiple tracks reduces the number of possible camera parameters. The set of possible camera parameters that fit, F, is the intersection of all these sets: F = C_ij,0 ∩ ... ∩ C_ij,n (Equation 7)
  • The smaller the number of elements in this set F, the closer one comes to extracting the actual parameters of the image capture device 34.
  • Due to the errors introduced in the tracking process, a statistical approach to determining a camera vector for each frame is needed. Optimization algorithms and bundle block adjustments can be used to narrow down the possible camera motion solutions.
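  • As an illustration of such a statistical approach, the sketch below estimates a camera vector by minimizing the reprojection error over several tracked points with SciPy's non-linear least-squares solver. The pose parameterization (rotation vector plus translation), the helper names, and the numerical values are assumptions for illustration only.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def reprojection_residuals(pose, points_3d, observed_xy, f=1.0):
        """Residuals between observed 2D features and projections of known 3D points.
        pose: [rx, ry, rz, tx, ty, tz] (rotation vector and translation)."""
        R = Rotation.from_rotvec(pose[:3]).as_matrix()
        p_cam = points_3d @ R.T + pose[3:]              # world -> camera
        projected = f * p_cam[:, :2] / p_cam[:, 2:3]    # pinhole projection
        return (projected - observed_xy).ravel()

    # Illustrative data: known 3D feature locations and their noisy 2D observations.
    points_3d = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 6.0], [0.0, 1.0, 4.0], [1.0, 1.0, 5.0]])
    observed_xy = np.array([[0.01, 0.00], [0.16, -0.01], [0.00, 0.26], [0.19, 0.21]])

    result = least_squares(reprojection_residuals, x0=np.zeros(6), args=(points_3d, observed_xy))
    estimated_camera_vector = result.x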
    Three-dimensional matchmoving tools make it possible to extrapolate three-dimensional information from two-dimensional images. 3D matchmoving programs include, but are not limited to, the following:
    Voodoo (freeware; Scenespector VooCAT);
    Icarus (University of Manchester);
    Maya Live;
    The Pixel Farm PFTrack;
    PFHoe (based on PFTrack algorithms);
    REALVIZ MatchMover;
    Science.D.Visions 3DEqualizer (won an Academy Award for Technical Achievements);
    Andersson Technologies SynthEyes; and
    Boujou (won an Emmy Award in 2002)
  • In one embodiment of the present technology, the feature-based process 36 is applied using a photogrammetric process.
  • Photogrammetry is a procedure for determining the geometric properties of objects from photographic images. In the simplest example, the distance between two points lying on a plane parallel to the photographic image plane can be determined by measuring their distance on the image, if the scale s of the image is known. This is achieved by multiplying the measured distance by 1/s.
  • A more sophisticated method, called stereophotogrammetry, involves estimating the three-dimensional coordinates of points on an object. These are determined by means of measurements taken in two or more images captured from different points of view (see stereoscopy). Common points are identified on each image. A line of sight (or ray) can be constructed from the location of the camera to the point on the object. The intersection of these rays (triangulation) determines the three-dimensional location of the point. More sophisticated algorithms can exploit other information known a priori about the scene, such as symmetries, which in some cases allow the reconstruction of 3D coordinates from only one camera position.
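  • The ray-intersection step can be sketched as below. Since two measured sight rays rarely intersect exactly, the sketch returns the midpoint of the segment of closest approach between them, which is one common way to realize the intersection described above; the formulation and the example values are illustrative assumptions, not taken from the patent.
    import numpy as np

    def triangulate(origin1, dir1, origin2, dir2):
        """Return the 3D point closest to two sight rays (camera center plus direction)."""
        d1 = dir1 / np.linalg.norm(dir1)
        d2 = dir2 / np.linalg.norm(dir2)
        w0 = origin1 - origin2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b                  # near zero means the rays are almost parallel
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
        p1 = origin1 + s * d1                  # closest point on ray 1
        p2 = origin2 + t * d2                  # closest point on ray 2
        return (p1 + p2) / 2.0

    # Two camera stations observing the same object point.
    point = triangulate(np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 1.0]),
                        np.array([1.0, 0.0, 0.0]), np.array([-0.1, 0.0, 1.0]))  # -> about [0.5, 0, 5]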
  • Photogrammetric algorithms typically express the problem as minimizing the sum of the squares of a set of errors. This minimization is known as bundle adjustment and is often performed using the Levenberg-Marquardt algorithm (LMA), which provides a numerical solution to the problem of minimizing a generally non-linear function over the space of the parameters of the function. These minimization problems arise especially in least-squares curve fitting and non-linear programming.
  • The Levenberg-Marquardt algorithm (LMA) interpolates between the Gauss-Newton algorithm (GNA) and the gradient descent method. The Levenberg-Marquardt (LMA) algorithm is more robust than the Gauss-Newton (GNA) algorithm, which means that in many cases it will find a solution even if it starts very far from the final minimum.
  • The 3D coordinates define the locations of the object points in 3D space. The image coordinates define the locations of the images of the object points on the film or on an electronic image capture device. The exterior orientation of the camera defines its location in space as well as its viewing direction. The interior orientation defines the geometric parameters of the imaging process. This is mainly the focal length of the lens, but may also include a description of lens distortions. Further additional observations play an important role: the connection to the basic units of measurement is made by means of scale bars, essentially a known distance between two points in space, or by known fixed points.
  • Photogrammetric data and dense range data from scanners complement each other. Photogrammetry is more accurate in the dimensions nearly parallel to the image plane, while the range data is typically more accurate in the dimension perpendicular to the image plane. This range data can be obtained from methods such as LiDAR, laser scanners (which use time-of-flight, triangulation, or interferometry), white-light digitizers, and any other method that scans an area and returns x, y, z coordinates for multiple discrete points (commonly called "point clouds").
  • To georeference the photos and the LiDAR data in the same reference frame, a 3D visualization can be created. Methods such as adaptive least-squares stereo matching are then used to create a dense array of correspondences, which is converted through a camera model to produce a dense array of x, y, z data.
  • Still referring to FIG. 2, in one embodiment of the present technology a feature-based process 36 is applied through the use of a georeferenced image-based process, as illustrated in FIG. 3. The georeferenced image-based process 36 uses the "image-based georeferencing" method described by James M. Janky et al. in Patent Application Serial No. 12/559,322, filed on September 14, 2009. The patent application entitled "Image-Based Georeferencing" is incorporated herein in its entirety.
  • More specifically, the image processing engine 62 (of FIG. 3) is essentially a series of computer programs that takes a picture from the image capture device 64, creates a contour image of the objects seen with the help of an image contour creator 68, performs a search for a similar contour by searching the local georeferenced object image database 66, searches for features in the camera image 70, looks for a match with features in the database (using the pattern recognition contour matching process 72), and checks whether the features found during the matching process have georeferenced location coordinates.
  • Still referring to FIG. 3, in one embodiment of the present technology, when there is a match with georeferenced coordinates, the georeferenced retrieval program 74 extracts these coordinates from the database, and the location determiner 76 determines the location coordinates of the image capture device 64 using feature-based processing techniques such as photogrammetry, matchmoving, etc. See the discussion above.
  • Still referring to FIG. 3, in one embodiment of the present technology, the initial location of the image capture device 64 may be supplied via block 78 at any precision level, for example: (a) via a GNSS receiver; or (b) manually, as is the case when using the two street names of an intersection; or (c) as an approximate latitude/longitude. In this embodiment of the present technology, this input method may provide the initial position of the image capture device 64 and speed up the search process by going straight to the area of interest.
  • Still referring to FIG. 3, in one embodiment of the present technology, manual position input may be performed by means of a handheld device such as the Trimble TSC2 (Trimble Survey Controller Model 2).
  • Still referring to FIG. 3, in one embodiment of the present technology, the image processing engine 62 may be configured to run on a portable computing device such as a TSC2 data collector, a laptop, an electronic organizer, or the Apple iPad. The input of the initial (seed) location of the image capture device 64 can be performed on these devices.
  • Still referring to FIG. 3, in one embodiment of the present technology, the communication device 80 may be used for providing an initial (seed) location of the image capture device 64.
  • In one embodiment of the present technology, a wireless system including Wi-Fi, cell phones, ZigBee, or the like may be used to connect the communication device 80 to an external database.
  • In one embodiment of the present technology, the remote computer's general georeferenced object image database 82 may be a well-stocked database of localized objects, such as the roof corners of buildings, front doors, window sills, street signs, hydrants, etc., virtually everything on earth, geolocated, with a picture from some arbitrary angle.
  • For this reason, the remote computer's general georeferenced object image database 82 can be used for entering a seed location of the image capture device 64.
  • If this is the case, a highly localized update may be downloaded to the locally stored georeferenced object image database 66 from the remote computer's georeferenced object image database 82. Using the features in the pattern recognition contour matching program 72, an image rotation and translation can be performed as part of the search for a match to the locally captured image.
  • More specifically, in one embodiment of the present technology, in the case of obtaining a seed position determination using GPS positioning from a GPS/GNSS receiver, or location information obtained by other means, the position may be combined with a camera image by using the Exchangeable Image File Format (Exif). Exif is a definition of the image file format used by digital cameras. The definition uses the existing file formats JPEG, TIFF Rev. 6.0, and RIFF WAV, to which specific metadata tags are added.
  • In one embodiment of the present technology in which seed positioning is obtained from a GPS/GNSS receiver by GPS positioning, the accuracy of the seed position depends entirely on the sophistication and performance level of the GPS receiver. Simple GPS chipset receivers used in camera-equipped mobile phones provide an absolute accuracy of approximately 4-7 meters anywhere in the world.
  • On the other hand, more sophisticated receivers use a variety of correction techniques that can dramatically improve accuracy. For example, the US Federal Aviation Administration's Wide Area Augmentation Service transmits signals from two geosynchronous satellites on the same frequency as the GPS signal with a special code and improves accuracy to about 1 meter across the country. Other differential services offer improvements to about 20 cm. Finally, the real-time kinematic method with a virtual reference station service can provide an accuracy of about 2-5 cm with respect to a known reference point.
  • Still referring to FIG. 3, in one embodiment of the present technology, a camera image is delivered to the portable image processing engine 62 for a more accurate determination of the position of the camera than would be possible with a simple GPS receiver. The camera image is delivered in real time or can be post-processed according to the principles outlined in the co-pending "Image-Based Georeferencing" application.
  • Still referring to FIG. 3, in one embodiment of the present technology, the portable image processing engine 62 is provided with the local georeferenced object database 66, containing images of buildings and other objects including the georeferenced features found in the database. The georeference data may consist of longitude, latitude, and elevation information, or may be stored in the form of northing and easting from a local reference point, such as the survey markers installed and maintained by the US Geological Survey. Other coordinate systems can also be used.
  • In one embodiment of the technology, the local georeferenced object database 66 is made searchable by tagging its entries with the appropriate tags.
  • In one embodiment of the technology, the local georeferenced object image database 66 may be implemented using Google StreetView or a service of a similar nature. The local georeferenced object database 66 connects location information to location tags in latitude and longitude coordinates associated with each section of images, thus enabling a search engine. For this reason, a user may find an image of a street view based on its address or based on its location in latitude and longitude coordinates. Intersections are also available for searching.
  • In one embodiment of the technology, the local georeferenced object image database 66 can be built using a georeferenced object image database that contains a variety of precise position data concerning features of buildings and edges, stop signs, street signs, hydrants, and the like. The georeferenced object image database further comprises the image processing engine, equipped with suitable analysis software, configured to derive, from known feature locations on a given object, locations for features that may be in the user-captured image. This derivation can be performed using an interpolation method, which uses appropriate geometric transformations to normalize the image, and by finding the geometric relationships (distance and direction) of known georeferenced features to unknown but more suitable features.
  • In one embodiment of the technology, determining the position of a camera from data in an image is performed using photogrammetric methods, which are well known in the art. See the discussion above.
  • In one embodiment of the technology, the range scaling factor needed to perform photogrammetric solutions may be found from the processing of georeferenced data associated with objects of interest in the captured image. The georeferenced data for two or more points allows the range scaling factor to be generated by simply calculating the three-dimensional distance between two selected points using the well-known formula: Distance = √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²) (Equation 8), where x, y and z are the georeferenced coordinates of the points associated with the object of interest.
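  • A direct transcription of Equation 8, with illustrative coordinate values that are not from the patent:
    import math

    def scale_distance(p1, p2):
        """Three-dimensional distance between two georeferenced points (Equation 8)."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

    # Example: two georeferenced building-corner points in a local x, y, z frame (meters).
    d = scale_distance((10.0, 4.0, 2.0), (13.0, 8.0, 2.0))  # -> 5.0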
  • In one embodiment of the technology, the image processing engine (62 of FIG. 3) uses standard photogrammetric algorithms for image processing, which then allow a calculation of the location of the camera based on selected reference points in the captured image. The selection process employs a search routine that finds edges (intersections of two lines) or corners in the captured image. Edges or corners with the sharpest corner or the sharpest point are automatically selected. If the selected edges/corners are not associated with a georeferenced data point, then the interpolation algorithm is used to estimate the georeferenced data for the selected points. (See the discussion above.)
  • In one embodiment of the technology, the selected reference points in the captured image are then used to calculate the position of the camera 64. If three or more points have been selected, the calculation process proceeds through a series of steps to directly calculate the position.
  • A scaling factor is determined over the calculable distances between the selected reference points through their georeferenced location data. The scaling factor is found in the form of a physical distance in meters or feet, or of the angle subtended on the earth's surface.
  • Next, an angle between the first two georeferenced points is determined, as shown in FIG. 4. More specifically, FIG. 4 illustrates the photogrammetric method for finding a distance 106 to a camera 92 from a known distance D1 101 between two points P1 96 and P2 98 (scaling factor) and a pixel conversion to a subtended angle. In geometry, the angle subtended by an arc is one whose two rays pass through the endpoints of the arc.
  • In a digital camera 92 this is done by measuring the distance in pixels between the two points P1 96 and P2 98 and then taking the ratio of this number to the total number of pixels 100 in the field of view of the camera. The distance 106 from the midpoint 107 of the line between the two selected georeferenced points to the entrance pupil of the camera 94 is found using half of this angle, A 102, and half the distance, ½D 104, between the two georeferenced points, since the tangent of the half angle between the two selected points is given by the ratio of half the known distance between the two points to the distance from the camera: tan(A) = (½D1)/D2 (Equation 9), where D2 is the distance 106 from the midpoint to the entrance pupil.
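  • The pixel-to-angle and distance computation of Equation 9 can be sketched as follows; the camera's full field of view and the pixel counts are placeholder values, not from the patent.
    import math

    def distance_from_pixel_span(known_separation_m, span_pixels, total_pixels, fov_deg):
        """Estimate the distance from the camera to the midpoint of two known points.
        The angle subtended by the points is approximated from their pixel span as a fraction
        of the field of view; Equation 9 then gives tan(A) = (D/2) / distance."""
        subtended = math.radians(fov_deg) * (span_pixels / total_pixels)  # full subtended angle
        half_angle = subtended / 2.0
        return (known_separation_m / 2.0) / math.tan(half_angle)

    # Example: two points 5 m apart spanning 400 of 4000 pixels in a 60-degree field of view.
    d = distance_from_pixel_span(5.0, 400, 4000, 60.0)  # roughly 48 m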
  • In one embodiment of the technology, this process of determining further range estimates from the midpoints of lines connecting any two georeferenced points on objects in the captured image may be repeated. The midpoint between any two known georeferenced points can also be calculated in terms of the georeferenced coordinate system.
  • The distances just described do not yet represent the distances required to determine the position of the camera. However, the hypotenuse (long side) 108, which is the actual distance from point P1 96 to the entrance pupil of the camera 94 (and the hypotenuse 110, which is the actual distance from point P2 98 to the entrance pupil of the camera 94), can now be calculated with this information as follows: Distance(P1, camera) = (½D)/sin(A) (Equation 10), where ½D is half the distance between P1 and P2 and A is the half angle of the total angle subtended by the two points P1 and P2.
  • Still referring to FIG. 4, in one embodiment of the technology, as an aid to understanding the next steps, the georeferenced points P1 96 and P2 98 now serve as the centers of spheres, and the distance from each point to the entrance pupil of the camera 94 provides a radius for each sphere. Thus, with a minimum of three georeferenced points, three lines to three points provide three equations representing the distance to the camera. That is, the three spheres will intersect at the entrance pupil of the camera, with some errors. Solving for the location of this intersection (three equations in three unknowns) now provides the georeferenced point of the entrance pupil. This is the triangulation method.
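  • A sketch of solving the sphere intersection numerically follows; it minimizes the range residuals with SciPy, which also covers the over-determined case discussed below. The point coordinates and ranges are illustrative only.
    import numpy as np
    from scipy.optimize import least_squares

    def locate_entrance_pupil(points, ranges, initial_guess):
        """Find the 3D point whose distances to the georeferenced points match the measured ranges."""
        def residuals(p):
            return np.linalg.norm(points - p, axis=1) - ranges
        return least_squares(residuals, initial_guess).x

    # Three georeferenced points (sphere centers) and the camera-to-point distances from Equation 10.
    points = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
    ranges = np.array([7.07, 7.07, 7.07])
    camera_position = locate_entrance_pupil(points, ranges, initial_guess=np.array([5.0, 5.0, 1.0]))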
  • In one embodiment of the technology, the equation system is overdetermined if there are more than three known points. The majority of photogrammetric programs use significantly more points for error reduction.
  • The least squares method is a standard approach to approximately solving overdetermined systems, i.e. sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in solving each individual equation.
  • The most important application is in data fitting. The best least-squares fit minimizes the sum of squared residuals, where a residual is the difference between an observed value and the value provided by a model. Least-squares problems fall into two categories, linear least squares and non-linear least squares, depending on whether the residuals are linear in all unknowns or not. The linear least-squares problem is encountered in statistical regression analysis; it has a closed-form solution. The non-linear problem has no closed-form solution and is usually solved by iterative refinement; at each iteration the system is approximated by a linear one, so the core computation is similar in both cases. Least squares corresponds to the maximum-likelihood criterion if the experimental errors are normally distributed, and can also be derived as a method-of-moments estimator. The least squares method can also be used to fit a generalized linear model by iteratively applying a local quadratic approximation to the likelihood.
  • There are many photogrammetric programs available that perform the steps mentioned above. Furthermore, the process of determining the exact orientation of the taking camera with respect to the georeferenced points, to compensate for any tilt present in the line system, is also considered. A plurality of reference points, or at least two images from two different camera locations, provide enough data to determine the location of the camera.
  • In case the user chooses to take more than one picture of the scene in which the object of interest lies, additional processing to handle this other important case is readily available. This processing can be done in one pass using the procedure known as "bundle adjustment".
  • Given a set of images depicting a number of 3D points from different viewpoints, bundle adjustment can be defined as the problem of simultaneously refining the 3D coordinates describing the scene geometry as well as the parameters of the relative motion and the optical characteristics of the camera(s) employed to acquire the images, according to an optimality criterion involving the corresponding image projections of all points.
  • Bundle adjustment is almost always used as the last step of any feature-based 3D reconstruction algorithm. It amounts to an optimization problem over the 3D structure and the viewing parameters (i.e., the pose of the camera and possibly intrinsic calibration and radial distortion) to obtain a reconstruction that is optimal under certain assumptions regarding the noise affecting the observed image features.
  • Bundle adjustment is the maximum-likelihood estimator if the image error is zero-mean and Gaussian. The name refers to the "bundles" of light rays that emerge from each 3D feature and converge at the optical center of each camera, and which are optimally adjusted with respect to both the structure and the viewing parameters.
  • During the bundle adjustment process, the reprojection error between the image locations of observed and predicted image points is minimized, expressed as the sum of squares of a large number of non-linear, real-valued functions. Therefore, minimization is achieved using non-linear least-squares algorithms. By iteratively linearizing the function to be minimized in the vicinity of the current estimate, the Levenberg-Marquardt algorithm involves the solution of linear systems known as normal equations. In the minimization problems arising from bundle adjustment, the normal equations have a sparse block structure due to the lack of interaction among the parameters for different 3D points and cameras. This can be exploited for computational benefit by employing a sparse variant of the Levenberg-Marquardt algorithm that explicitly takes advantage of the zero pattern of the normal equations and avoids storing and operating on zero elements.
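  • The sparsity exploitation described here can be sketched with SciPy's least_squares, which accepts a Jacobian sparsity pattern. The tiny problem below (two cameras observing six points) and all of its helper names and values are illustrative assumptions, not the patent's implementation.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.sparse import lil_matrix
    from scipy.spatial.transform import Rotation

    n_cameras, n_points = 2, 6
    cam_idx = np.repeat(np.arange(n_cameras), n_points)   # camera index per observation
    pt_idx = np.tile(np.arange(n_points), n_cameras)      # point index per observation

    true_points = np.random.default_rng(0).uniform(-1, 1, (n_points, 3)) + [0.0, 0.0, 5.0]
    true_poses = np.zeros((n_cameras, 6))
    true_poses[1, 3] = 1.0                                 # second camera shifted 1 m in x

    def project(pose, pts):
        """Pinhole projection of pts with a pose given as rotation vector plus translation."""
        R = Rotation.from_rotvec(pose[:3]).as_matrix()
        p = pts @ R.T + pose[3:]
        return p[:, :2] / p[:, 2:3]

    observations = np.vstack([project(true_poses[c], true_points[p:p + 1])
                              for c, p in zip(cam_idx, pt_idx)])

    def residuals(params):
        poses = params[:n_cameras * 6].reshape(n_cameras, 6)
        pts = params[n_cameras * 6:].reshape(n_points, 3)
        proj = np.vstack([project(poses[c], pts[p:p + 1]) for c, p in zip(cam_idx, pt_idx)])
        return (proj - observations).ravel()

    # Sparsity pattern: each residual pair depends only on one camera pose and one point.
    n_params = n_cameras * 6 + n_points * 3
    sparsity = lil_matrix((2 * len(cam_idx), n_params), dtype=int)
    for k, (c, p) in enumerate(zip(cam_idx, pt_idx)):
        sparsity[2 * k:2 * k + 2, c * 6:(c + 1) * 6] = 1
        sparsity[2 * k:2 * k + 2, n_cameras * 6 + p * 3:n_cameras * 6 + (p + 1) * 3] = 1

    x0 = np.concatenate([true_poses.ravel(), true_points.ravel()]) + 0.01  # perturbed start
    result = least_squares(residuals, x0, jac_sparsity=sparsity, method="trf")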
  • In one embodiment of the present technology, FIG. 5 illustrates the flowchart 120, which describes the implementation steps of the feature-based process 36 (of FIG. 2) using the local georeferenced object image database 66 and the image processing engine 62 (of FIG. 3).
  • More specifically, in one embodiment, the image processing engine 62 (of FIG. 3) is initialized in step 124 with data that has been determined to be relevant to the desired location area. Next, in step 126, the image capture device 64 captures at least one image of an object of interest in a field of interest and provides (in step 128) the image processing engine 62 (of FIG. 3) with at least one captured image of interest. A pattern matching process is executed (in step 130) to match the contours of objects in the captured image with objects in the local georeferenced object image database 66. After at least one feature has been detected in the captured image of the object of interest (step 132), a search of the local georeferenced object image database 66 (step 134) for a match between the selected feature in the captured image and a georeferenced feature in the database 66 is executed. The search process is repeated for a selected number of feature matches (step 136). In step 138, the photogrammetric image processing algorithms are used to determine the location of the entrance pupil position of the camera (94 of FIG. 4) in a georeferenced coordinate system taken from the georeferenced object image database. Optionally (step 140), step 124 further includes the input of a local reference position, which is defined as a street with house number, the intersection of two streets, a landmark, or a georeferenced datum.
  • In one embodiment of the present technology, FIG. 6 shows the device for image-based positioning 150, including a GNSS receiver 152 that is attached to a post 154, with a camera 156 attached to the same post 154. It further shows the GIS/survey data collector 162, such as a TSC2.
  • In one embodiment, the present technology is illustrated in FIG. 7 by a computer system 170 that is configured to run the image processing engine 62 of FIG. 3. The hardware part includes a processor 172, non-volatile computer-usable memory (ROM) 174, volatile computer-usable memory 176, a data storage unit 178, a data bus 180, an image database management system (IDMS) 182, a display device 183, an alphanumeric input 184, a cursor control 186, an I/O device 188, and a peripheral computer-readable storage medium 190. The software block 192 includes an operating system 194, applications 196, modules 198, and a data block 200. This is merely an example of such a computer system. Actual computer systems that do not include all of the listed components, or that include unlisted parts, may still be suitable for running the image processing engine.
  • III. Image based positioning device including a dual object tracking process
  • In one embodiment of the present technology, FIG. 8 shows an image-based positioning device 210 including the dual feature tracking process 212. This process 212 can be implemented on a general-purpose processor or by using an application-specific processor (ASIC, FPGA, PLD, etc.).
  • In an embodiment of the present technology, the image-based positioning device 210 comprises at least two image capture devices 214 and 216 (a third device 218 is optional) which are attached to the platform 211.
  • In one embodiment of the present technology, the image capture devices 214 and 216 have overlapping fields of view.
  • In one embodiment of the present technology, the image capture devices 214 and 216 have non-overlapping fields of view.
  • In one embodiment of the present technology, the platform 211 comprises a rover.
  • In one embodiment of the present technology, the platform 211 comprises a rover RTK system.
  • In one embodiment of the present technology, the platform 211 comprises a GIS/mapping handheld.
  • In one embodiment of the present technology, each image capture device 214 and 216 (and optionally 218) is configured for capturing an image, including at least one feature, at a first position of the platform 211 and at a second position of the platform 211.
  • In an embodiment of the present technology, the image-based positioning device 210 also comprises a synchronization block 226 configured for synchronizing the first image capture device 214 and the second image capture device 216 (and optionally the third image capture device 218). See the discussion below.
  • In one embodiment of the present technology, the synchronization block 226 is implemented using a control signal generated by a controller (not shown).
  • In an embodiment of the present technology, the image-based positioning device 210 further comprises a position process 220 selected from the group consisting of: a GNSS process; an image-matched photogrammetric process; a georeferenced image-matched process; a matchmoving process; a surface tracking process; and a SLAM process.
  • The operation of a GNSS process, an image-matched photogrammetric process, a matchmoving process, a surface tracking process, and a SLAM process has been extensively disclosed in the discussions above. The position process 220 is configured to obtain the position of the platform 211.
  • In one embodiment of the present technology, the dual feature-based process 212 is configured to process each image received at the first and second positions of the platform 211 and to extract a set of tracking data for at least two features. The dual feature-based process 212 is also configured to determine the location of the second position of the platform 211 by using the set of tracking data obtained for each of the at least two detected features.
  • In one embodiment of the present technology, the process 212 is further configured to process an image comprising at least one feature, captured at the first and second positions of the platform 211 by the third image capture device 218, in order to extract a set of tracking data for at least one detected feature. In this embodiment of the present technology, the process 212 is also configured to determine the location of the second position of the platform 211 by using the set of tracking data obtained for each at least one detected feature.
  • In an embodiment of the present technology, the image-based positioning device 210 also comprises a Kalman filter 222. The Kalman filter 222 is configured to make a Kalman estimate of the second position of the platform 211 by combining the set of tracking data from at least one first detected feature as a first high-noise measurement and the set of tracking data from at least one second detected feature as a second high-noise measurement. Optionally, the Kalman filter 222 is configured to make a Kalman estimate of a second position of the platform 211 by combining the set of tracking data from at least one first detected feature as a first high-noise measurement, the set of tracking data from at least one second detected feature as a second high-noise measurement, and the set of tracking data from at least one third detected feature as a third high-noise measurement.
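  • A minimal sketch of the measurement update such a filter might perform, fusing two noisy position measurements of the platform; the state model (a directly observed 3D position with diagonal covariances) and all numbers are simplifying assumptions for illustration.
    import numpy as np

    def kalman_update(x, P, z, R):
        """Standard Kalman measurement update for a directly observed 3D position state."""
        H = np.eye(3)                           # the measurement observes the state directly
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x_new = x + K @ (z - H @ x)
        P_new = (np.eye(3) - K @ H) @ P
        return x_new, P_new

    # Prior position estimate and covariance of the platform.
    x, P = np.array([10.0, 5.0, 1.0]), np.eye(3) * 4.0
    # Two high-noise position measurements derived from the two tracked features.
    z1, R1 = np.array([10.4, 5.2, 1.1]), np.eye(3) * 2.0
    z2, R2 = np.array([9.8, 4.9, 0.9]), np.eye(3) * 3.0

    x, P = kalman_update(x, P, z1, R1)   # fuse the first feature-based measurement
    x, P = kalman_update(x, P, z2, R2)   # fuse the second feature-based measurement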
  • In an embodiment of the present technology, the image-based positioning device 210 comprises an external memory block 224 configured to store at least one feature-based three-dimensional (3D) position coordinate of the platform for further processing.
  • In an embodiment of the present technology, the image-based positioning device 210 comprises a wireless modem 228 that is configured to provide remote access to the external memory block 224 via the Internet.
  • IV. Operating Modes of the Image Based Positioning Device Including the Dual Object Tracking Process
  • A. Synchronous operation
  • In one embodiment of the present technology, the synchronous operation of the image-based positioning device 210 of FIG. 8 includes capturing a first image by using the first image capture device 214 (a first-first image) at a first position of the platform 211, wherein the first-first image comprises at least a first object.
  • In one embodiment of the present technology, the position of the platform 211 is determined by using the position process 220 (of FIG. 8) selected from the group consisting of: a GNSS process, a surface tracking process, the feature-based process, and the georeferenced image-based process.
  • In one embodiment of the present technology, a position of the platform 211 is predetermined.
  • Next, an image is captured by using the first image capture device 214 at a second position of the platform 211 (a second-first image), wherein the second-first image comprises at least one of the same first detected objects.
  • The first-first image and the second-first image are processed by using the process 212 to lock onto and track the location of at least one first detected object. The process 212 is configured to obtain a set of two-dimensional position determinations for at least one first detected object from the processing of the first-first frame and the second-first frame, which provides a set of tracking data for the first detected object.
  • In one embodiment of the present technology, a tracking algorithm may be used to lock onto at least one detected first object and to follow the locked-on first object through a series of multiple frames. See the discussion above and Equations (1-7).
  • Similarly, a first image is captured by using the second image capture device 216 (of FIG. 8) at the first position of the platform 211 (a first-second image), wherein the first-second image comprises at least a second object.
  • In one embodiment of the present technology, the synchronization block 226 is used for synchronizing the operation of the first image capture device 214 (of FIG. 8) and the second image capture device 216 (of FIG. 8), so that both devices capture the first and second objects located in the corresponding FOV of each device at the same time, at which the platform 211 is at the same position.
  • A second image is taken using a second image capture device 216 (from 8th ) (a second-second image) from at the second position of the platform 211 detected; wherein the second-second image comprises at least one of the same second detected subject.
  • In one embodiment of the present technology, the synchronization block 226 for synchronizing the operation of the first image capture device 214 (from 8th ) and the second image capture device 216 (from 8th ) may be used to allow both devices (each) to detect first and second objects located in the corresponding FOV for each device at the same time as the platform 211 at the same position.
  • The first-second image and the second-second image are made using process 212 processed to track the location of at least one second detected item. process 212 is configured to obtain a set of two-dimensional position determinations for at least one second detected item from the processing of the first-second set and the second-second set that provide a set of tracking data for the second detected item.
  • In one embodiment of the present technology, the tracking algorithm may be used to track at least one detected second item and to track the first feature through a series of multiple frames. See the discussion above and equations (1-7).
  • The location of the platform 211 is from the dual feature process 212 determined by using the set of tracking data from at least one first item and the set of tracking data from at least one second item.
  • More specifically, process applies 212 in one embodiment of the present technology, an inverse projection function on the set of two-dimensional (2D) tracking data of at least one first detected object for solving for a set of three-dimensional (3D) coordinates for the position of the platform 211 at.
  • More precisely, the process applies 212 in one embodiment of the present technology, an inverse projection function on the set of two-dimensional (2D) tracking data of at least one second object for solving for a set of three-dimensional (3D) coordinates for the position of the platform 211 at. See the discussion above and equations (1-7).
  • In one embodiment of the present technology, a first weight is assigned to a set of tracking data from at least one first detected item, and a second weight is assigned to a set of tracking data from at least one second detected item. In this embodiment of the present technology, the determination of the location of the position of the platform 211 through process 212 by using a set of weighted tracking data from at least one first detected item and a set of weighted tracking data from at least one second detected item.
  • In one embodiment of the present technology, a Kalman filter 222 is used to obtain a Kalman estimate of the position of the platform 211 by combining the set of tracking data from the at least one first detected object as a first high-noise measurement and the set of tracking data from the at least one second detected object as a second high-noise measurement.
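  • The following fragment is a minimal sketch of such a Kalman-style combination: a prior estimate of the platform position is corrected sequentially by two high-noise position measurements, one derived from each set of tracking data. The measurement covariances and the direct-position measurement model are illustrative assumptions.

```python
# Illustrative sketch: fuse two high-noise position measurements of the platform
# with sequential Kalman measurement updates (covariances are assumed values).
import numpy as np

def fuse_measurements(measurements, x_prior, P_prior):
    """measurements: list of (z, R) pairs; returns the corrected state and covariance."""
    x, P, H = x_prior, P_prior, np.eye(3)      # the position is measured directly
    for z, R in measurements:
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (z - H @ x)                # corrected state
        P = (np.eye(3) - K @ H) @ P            # corrected covariance
    return x, P

z1 = np.array([1.2, 0.9, 0.1])    # position from the first set of tracking data
z2 = np.array([0.8, 1.1, -0.1])   # position from the second set of tracking data
x, P = fuse_measurements([(z1, np.eye(3) * 4.0), (z2, np.eye(3) * 1.0)],
                         x_prior=np.zeros(3), P_prior=np.eye(3) * 100.0)
print(x)
```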
  • B. Asynchronous operation.
  • In one embodiment of the present technology, the asynchronous operation of the image-based positioning device 210 of FIG. 8 is based on the first image capture device 214 and the second image capture device 216 capturing the corresponding images at different times (not synchronized).
  • In one embodiment of the present technology, the asynchronous operation of the image-based positioning device 210 of FIG. 8 includes the steps of: capturing a first image (a first-first image) using the first image capture device 214 at a first position of the platform 211, wherein the first-first image comprises at least one first object; capturing a second image (a second-first image) using the first image capture device 214 at a second position of the platform 211, wherein the second-first image comprises the at least one detected first object; and processing the first-first image and the second-first image to track a location of the at least one first detected object, wherein a set of two-dimensional position determinations for the at least one detected first object is obtained from the processing of the first-first image and the second-first image, thereby providing a set of tracking data for the detected first object.
  • In one embodiment of the present technology, the asynchronous operation of the image-based positioning device 210 of FIG. 8 further includes the steps of: capturing a first image (a first-second image) using the second image capture device 216 at a third position of the platform 211, wherein the first-second image comprises at least one second object; capturing a second image (a second-second image) using the second image capture device 216 at a fourth position of the platform 211, wherein the second-second image comprises the at least one second detected object; and processing the first-second image and the second-second image to track a location of the at least one second object, wherein a set of two-dimensional position determinations for the at least one second detected object is obtained from the processing of the first-second image and the second-second image, thereby providing a set of tracking data for the second object.
  • In one embodiment of the present technology, the asynchronous operation of the image-based positioning device 210 of FIG. 8 further includes determining the position of the platform 211 by using the set of tracking data from the at least one first detected object. In this embodiment of the present technology, a position of the platform 211 may be determined through the position process 220.
  • In one embodiment of the present technology, the asynchronous operation of the image-based positioning device 210 of FIG. 8 further includes determining the location of the fourth position of the platform 211 by using the set of tracking data from the at least one second detected object. In this embodiment of the present technology, another position of the platform 211 may be determined through the position process 220.
  • In this "asynchronous" embodiment of the present technology, a further position of the platform 211 may be obtained by performing a linear interpolation of the previous positions of the platform 211 without capturing new images with the devices 214 and 216.
  • As another example, the moving platform 211 may momentarily enter a "dark" area in which the first image capture device 214 and the second image capture device 216 do not receive enough light to produce a usable image. Therefore, at least one position of the moving platform 211 in this dark area can be obtained by performing a linear interpolation of the two previous (i.e., before entering the "dark" area) positions of the platform 211 (obtained by using both devices 214 and 216), without having the devices 214 and 216 capture any new images in the "dark" area. The interpolation may be based simply on the time distribution along the line between the available positions, or it may include known information about speed, acceleration and higher-order derivatives of the motion, as well as orientation and rotation information. The information used for the interpolation may be derived from the dual-feature process 212 or the position process 220.
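  • A minimal sketch of such an interpolation is given below; the fix format (time stamp plus 3D position) and the constant-velocity assumption are illustrative only, and the same routine extrapolates past the last fix when the platform is inside the "dark" area.

```python
# Illustrative sketch: estimate a platform position from two earlier image-based
# fixes by linear interpolation/extrapolation along the time line.
import numpy as np

def position_at(fix_a, fix_b, t):
    """fix_a, fix_b: (time, position) tuples with fix_a earlier than fix_b."""
    t_a, p_a = fix_a
    t_b, p_b = fix_b
    alpha = (t - t_a) / (t_b - t_a)   # time distribution along the line
    return p_a + alpha * (p_b - p_a)  # alpha > 1 extrapolates past fix_b

fix_1 = (10.0, np.array([100.0, 50.0, 20.0]))   # fix before entering the dark area
fix_2 = (11.0, np.array([101.0, 50.5, 20.0]))   # last fix before the dark area
print(position_at(fix_1, fix_2, 11.5))          # estimated position 0.5 s later
```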
  • V. Image-based positioning device including object and surface tracking process
  • In one embodiment of the present technology, FIG. 9 illustrates an image-based positioning device 260 including the feature and surface tracking process 270. The process 270 can be implemented by using a general-purpose processor or by using an application-specific processor (ASIC, FPGA, PLD, etc.).
  • In one embodiment of the present technology, the process 270 includes two sub-processes: the sub-process 270-1 configured to perform surface tracking processing (see the discussion below) and the sub-process 270-2 configured to perform feature tracking processing (see the discussion above).
  • In one embodiment of the present technology, the image-based positioning device 260 also comprises two image capture devices 264 and 266 which are attached to a platform 262. In one embodiment of the present technology, the fields of view of the two image capture devices overlap. In one embodiment of the present technology, the fields of view of the two image capture devices do not overlap.
  • In one embodiment of the present technology, the platform 262 includes a rover.
  • In one embodiment of the present technology, the platform 262 includes a rover RTK system.
  • In one embodiment of the present technology, the platform 262 includes a GIS/mapping handset.
  • In one embodiment of the present technology, the first image capture device 264 is configured to capture an image of a surface at a first position of the platform 262.
  • In one embodiment of the present technology, the surface may be selected from the group consisting of: a ground surface; a top surface; a side surface; and a surface inclined at an arbitrary angle, or the like.
  • In one embodiment of the present technology, the image-based positioning device 260 further comprises the range measuring device 280 configured to obtain a set of depth data of the selected surface.
  • In one embodiment of the present technology, the range measuring device 280 may be selected from the group consisting of: a point laser beam; a sonar; a radar; a laser scanner; and a depth camera, or the like.
  • A point laser beam range measuring device 280 can be implemented by using blue solid-state lasers, red diode lasers, or IR lasers, which may be continuous-wave lasers, pulsed lasers or sequenced lasers, or a similar device.
  • A sonar range measuring device 280 can be implemented by using an active sonar, including a sound transmitter and a receiver.
  • A radar range measuring device 280 can be implemented by using a transmitter that emits either microwaves or radio waves that are reflected from the surface and detected by a receiver, usually at the same location as the transmitter.
  • A depth camera range measuring device 280 can be implemented by using a video camera that can capture video with depth information.
  • Such a camera has sensors that measure the depth of each captured pixel using a principle called time-of-flight. It obtains 3D information by emitting pulses of (typically infrared) light toward all objects in the scene and sensing the light reflected from the surface of each object. Depth is measured by calculating the time-of-flight of a light pulse as it leaves the source and is reflected back from the objects on the surface. The round-trip time is then converted into range information using the well-known speed of light.
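  • As a small worked illustration of this principle, the round-trip time of each pulse is halved and scaled by the speed of light to obtain the range; the numeric example below is an assumption, not a measured value.

```python
# Illustrative sketch: convert a time-of-flight round trip into a range value.
C = 299_792_458.0  # speed of light in m/s

def tof_to_range(round_trip_seconds):
    return C * round_trip_seconds / 2.0   # the pulse travels out and back

print(tof_to_range(66.7e-9))   # ~10 m for a 66.7 ns round trip
```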
  • Still referring to FIG. 9, in one embodiment of the present technology, the second image capture device 266 is configured to capture an image comprising at least one object at the first position and at a second position of the platform 262.
  • In one embodiment of the present technology, the image-based positioning device 260 also comprises a synchronization block 268 configured to synchronize the first image capture device 264 and the second image capture device 266.
  • In one embodiment of the present technology, the synchronization block 268 is implemented by using a control signal generated by a controller.
  • In one embodiment of the present technology, the image-based positioning device 260 further comprises a position process 274 which may be selected from the group consisting of: a GNSS process; an image-matched photogrammetric process; a georeferenced image-based process; a SLAM process; a matchmoving process; a surface tracking process; or a similar process. The operation of a GNSS process, an image-matched photogrammetric process, a georeferenced image-based process, a SLAM process, a matchmoving process, and a surface tracking process was disclosed extensively in the discussions above. The position process 274 is configured to obtain a position of the platform 262.
  • In one embodiment of the present technology, the surface tracking sub-process 270-1 is configured to process an image of the selected surface which was obtained from the first image capture device 264 at the first position of the platform 262.
  • The surface tracking method and apparatus have been disclosed in the patent application "IMAGE-BASED TRACKING" by Hongbo Teng, Gregory C. Best and Sy Bor Wang, Serial No. 12/459,843, which is incorporated herein in its entirety.
  • More specifically, still referring to FIG. 9, the image capture device 264 is configured according to the US patent application "IMAGE-BASED TRACKING" to perform image capture of the selected surface, and the range measuring device 280 is configured to obtain a set of depth data of the selected surface. Tracking of the platform 262 is performed using the surface tracking process 270-1, which is configured to analyze an image using the image processing algorithm 282.
  • In one embodiment of the present technology, the image processing algorithm 282 assumes a global rigid motion. By parameterizing the global optical flow with the six degrees of freedom of the image capture device 264, the optimal global transformation between two consecutive frames can be found by solving a non-linear least-squares problem.
  • In one embodiment of the present technology, the image processing algorithm 282 matches the optical properties of the pixels by using a frame function.
  • In one embodiment of the present technology, where such information is available, the image processing algorithm 282 matches the depths of the two frames (instead of the optical properties of the pixels) by redefining the frame function.
  • In one embodiment of the present technology, the image processing algorithm 282 matches a combination of optical pixel properties and depth information. This can be done either by using a combined cost function or by supporting one process with the other, as fully disclosed below.
  • In one embodiment of the present technology, the image processing algorithm 282 uses multiple coordinate systems: a stationary reference system; a reference system that is attached to the image capture device 264; and a 2D reference system in the sensor plane of the image capture device.
  • In the stationary reference system, a point on the surface has coordinates x = (x, y, z), and the image capture device 264 is described by a 6-vector comprising the position coordinates of the device x_a = (x_a, y_a, z_a) and the orientation coordinates (Ψ_i, Θ_i, Φ_i) (yaw, pitch and roll angles) for each i-th frame.
  • In the reference system attached to the image capture device 264, the same point on the surface has coordinates x_i = (x_i, y_i, z_i) with respect to the image capture device 264.
  • In the 2D reference system associated with the sensor plane of the image capture device, the 2D pixel coordinates of a point in the i-th frame are u_i = (u_i, v_i).
  • The relationship between the stationary 3D system and the 3D system attached to the image capture device is: x_i = R_i (x − x_a), in which R_i = R(Ψ_i, Θ_i, Φ_i) is the rotation matrix between the two systems.
  • The relationship between the 3D coordinates attached to the image capture device and the 2D pixel coordinates depends on the mapping function m of the image capture device 264. The mapping function takes 3D coordinates x_i in the i-th frame system attached to the image capture device and maps them into 2D pixel coordinates in the i-th frame: u_i = m(x_i) (Eq. 13)
  • The shape of the mapping function depends on the type of lens. In one embodiment of the present technology, where the lens is a normal rectilinear lens (in an inverted pinhole model), the mapping function m can be derived from the following equations: u_i = f·x_i / (S_u·z_i) + u_0, v_i = f·y_i / (S_v·z_i) + v_0, where f is the focal length of the image capture device 264, S_u, S_v are the pixel width and height, and u_0, v_0 are the offsets between the optical center and the center of the sensor.
  • In a further embodiment of the present technology, where the lens is an orthographic fish-eye lens, the mapping function m can be derived from the following equations: u_i = f·x_i / (S_u·r) + u_0, v_i = f·y_i / (S_v·r) + v_0, where r = sqrt(x_i² + y_i² + z_i²) is the distance between the point and the optical center.
  • In one embodiment of the present technology, the mapping function m may be calibrated and stored in numerical form.
  • To invert the mapping function, x_i = m⁻¹(u_i) (Eq. 16), the depth of the object point must be known.
  • In one embodiment of the present technology, the depth of an object point of the scene is obtained as a function of the pixel location in each frame, z_i = z_i(u_i). These measurements are carried out in the 3D reference system attached to the image capture device.
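  • The fragment below is a minimal sketch of this straight-lens mapping function m and of its inverse for a pixel whose depth is known; the calibration values (focal length, pixel pitch and sensor offsets) are assumed numbers chosen only for the example.

```python
# Illustrative sketch: straight-lens (pinhole) mapping function m and its inverse.
import numpy as np

F, S_U, S_V, U0, V0 = 0.008, 1e-5, 1e-5, 320.0, 240.0   # assumed calibration values

def m(x_i):
    """Map camera-frame coordinates x_i = (x, y, z) to pixel coordinates u_i."""
    x, y, z = x_i
    return np.array([F * x / (S_U * z) + U0,
                     F * y / (S_V * z) + V0])

def m_inv(u_i, z):
    """Invert the mapping for a pixel u_i whose depth z is known."""
    u, v = u_i
    return np.array([(u - U0) * S_U * z / F,
                     (v - V0) * S_V * z / F,
                     z])

point = np.array([0.2, -0.1, 5.0])
print(m(point))                 # pixel coordinates of the object point
print(m_inv(m(point), 5.0))     # recovers the 3D point once its depth is given
```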
  • The relationship between two sequential frames f_i and f_j is based on the assumption that the same point on the surface produces two pixels of the same intensity in the two frames.
  • That is, if u_i and u_j are pixel locations in f_i and f_j of the same object point, then f_i(u_i) = f_j(u_j). Here, f_i(u_i) refers to the pixel intensity at u_i in frame f_i. Under this assumption, the relationship between two frames is a purely geometric transformation resulting from the motion of the image capture device.
  • The movement of the image capture device from f_i to f_j can be described by the relative displacement (δx, δy, δz) and the relative rotation (δΨ, δΘ, δΦ) between the frames, or as ξ = (δx, δy, δz, δΨ, δΘ, δΦ), a 6-vector with six degrees of freedom. If the position and orientation of the image capture device at frame f_i are known, the solution of this relative movement from f_i to f_j provides the position and orientation at frame f_j. In the following, the subscript i→j is omitted whenever possible.
  • The same object point with the coordinates x_i in the reference system of frame f_i has coordinates x_j in the reference system of frame f_j, where x_j = ξ(x_i), i.e. x_i transformed by the relative rotation and displacement.
  • For this reason, the relationship between u_i and u_j in the 2D pixel coordinate systems is u_j = m(ξ(m⁻¹(u_i))), where m is the mapping function. Or, put simply, u_j = δP(u_i) (Eq. 19), where δP = m·ξ·m⁻¹ represents the combination of the three operations.
  • The task now is to find the optimal ξ such that the cost function E(ξ) = Σ_{u_i} [ f_j(δP(u_i)) − f_i(u_i) ]² is minimized. This is a well-researched non-linear least-squares problem. Solving it generally involves a linear approximation and iteration. Different linear approximations lead to different convergence methods, such as Gauss-Newton, steepest descent, the Levenberg-Marquardt method, etc.
  • Still referring to FIG. 9, in one embodiment of the present technology, the second image capture device 266 is configured to capture an image comprising at least one object at the first position and at the second position of the platform 262.
  • In one embodiment of the present technology, the feature tracking sub-process 270-2 is configured to process each image received by the second image capture device 266 at the first and at the second position of the platform 262, and is configured to extract a set of tracking data for the at least one detected object.
  • Still referring to FIG. 9, in one embodiment of the present technology, the feature tracking sub-process 270-2 is also configured to determine the location of the position of the platform 262 by using the set of tracking data for the at least one detected object.
  • Still referring to FIG. 9, in one embodiment of the present technology, the image-based positioning device 260 also includes a Kalman filter 272. The Kalman filter is configured to obtain a Kalman estimate of the position of the platform 262 by combining the surface-tracking-based coordinates of the second position of the platform 262 as a first high-noise measurement and the feature-based coordinates of the second position of the platform 262 as a second high-noise measurement.
  • In one embodiment of the present technology, the image-based positioning device 260 comprises an external memory block 276 configured to store at least one surface-tracking-based and feature-based three-dimensional (3D) position coordinate of the platform 262 for further processing.
  • In one embodiment of the present technology, the image-based positioning device 260 further comprises a wireless modem 278 configured to provide remote access to the external memory block 276 via the Internet.
  • VI. Operating an image-based positioning device including object and surface tracking process
  • Still referring to FIG. 9, in one embodiment of the present technology, the operation of the image-based positioning device 260 including the feature and surface tracking process 270 comprises the following steps.
  • An image of a selected surface is captured using the first image capture device 264 at the first position of the platform 262. A set of depth data of the selected surface is obtained using the range measuring device 280. A rigid global transformation of the set of captured image data and the set of depth data of the selected surface into a set of 6-coordinate data is performed using the image processing algorithm 282, where the set of 6-coordinate data represents the movement of the platform 262. The set of 6-coordinate data is processed using the image processing algorithm 282 to obtain the location of the position of the platform 262.
  • Still referring to FIG. 9, in one embodiment of the present technology, the operation of the image-based positioning device 260 including the feature and surface tracking process 270 further comprises the following steps.
  • A first image is captured using the second image capture device 266 at the first position of the platform 262, wherein the first image comprises at least one object. A second image is captured using the second image capture device 266 at the second position of the platform 262, wherein the second image comprises the at least one detected object.
  • The first image and the second image are processed to track the location of the at least one detected object, wherein a set of two-dimensional position determinations for the at least one detected object is obtained from the processing of the first image and the second image, thereby providing a set of tracking data for the detected object. See Equations (1-7).
  • The location of the second position of the platform 262 is determined by using the set of tracking data from the at least one detected object. See Equations (1-7).
  • Ultimately, the position of the platform 262 is determined by combining the surface-tracking-based coordinates of the position of the platform 262 and the feature-based coordinates of the position of the platform 262.
  • Still referring to FIG. 9, in one embodiment of the present technology, a Kalman filter 272 is used to obtain a Kalman estimate of the position of the platform 262 by combining the surface-tracking-based coordinates of the position of the platform 262 as a first high-noise measurement and the feature-based coordinates of the position of the platform 262 as a second high-noise measurement.
  • In one embodiment of the present technology, and as an alternative to combining two location estimates to develop an improved location estimate for the platform, the raw item tracking data and the raw surface tracking data may be combined in a common estimator and a single estimate of the platform location may be obtained. The method involves the use of elements from one of the estimation methods, such as SLAM, matchmoving, surface tracking or photogrammetry. A Kalman filter can be used to perform the estimation, in the same way that a least-squares solution can be used.
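  • A minimal sketch of this common-estimator alternative is shown below: instead of filtering two finished location estimates, the feature-tracking and surface-tracking observations of the platform location are stacked into a single inverse-covariance-weighted least-squares solution. The observation format and the covariance values are assumptions made for the example.

```python
# Illustrative sketch: combine feature-tracking and surface-tracking observations
# of the platform location in one weighted least-squares estimate.
import numpy as np

def combined_estimate(observations):
    """observations: list of (position_3d, covariance_3x3) from either process."""
    info = np.zeros((3, 3))           # accumulated information (inverse covariance)
    info_vec = np.zeros(3)
    for z, R in observations:
        W = np.linalg.inv(R)          # weight each observation by its confidence
        info += W
        info_vec += W @ z
    return np.linalg.solve(info, info_vec)   # single estimate of the location

feature_obs = (np.array([10.2, 5.1, 1.9]), np.eye(3) * 0.04)   # from item tracking
surface_obs = (np.array([10.0, 5.0, 2.1]), np.eye(3) * 0.01)   # from surface tracking
print(combined_estimate([feature_obs, surface_obs]))
```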
  • The above discussion has set forth the operation of various exemplary systems and devices, as well as various embodiments of exemplary methods of operating these systems and devices. In various embodiments, one or more steps of a method of implementation are performed by a processor under the control of computer-readable and computer-executable instructions. For this reason, in certain embodiments, these methods are performed by a computer.
  • In one embodiment, the computer-readable and computer-executable instructions may be stored on computer-readable media.
  • For this reason, one or more operations of various embodiments may be controlled or executed using computer-executable instructions, such as program modules executed by a computer. Typically, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Furthermore, the present technology may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
  • Although specific steps of exemplary methods of implementation are disclosed herein, these steps are examples of steps that may be performed in accordance with various embodiments. That is, the embodiments disclosed herein are well suited to performing various other steps or variations of the steps listed. Moreover, the steps disclosed herein may be performed in an order different from that presented, and not all of the steps are necessarily performed in a particular embodiment.
  • While various electronics- and software-based systems are discussed herein, these systems are merely examples of environments that may be used, and are not intended to suggest any limitation as to the scope of use or functionality of the present technology. Neither should these systems be interpreted as having any dependency on, or relation to, any one component or function, or combination thereof, illustrated in the disclosed examples.
  • Although the subject matter has been described in language specific to structural features and / or methodological acts, the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Instead, the specific features and acts described above are disclosed as exemplary forms of implementation of the claims.

Claims (26)

  1. An image-based method of tracking the location of a platform that includes at least one image capture device, comprising: capturing a first image with the image capture device, the first image comprising at least one object; moving the platform and capturing a second image with the image capture device, the second image comprising at least one object; capturing an image of a surface in the first image; capturing a second image of the surface in the second image; processing the plurality of images of the object and the surface using a combined feature-based process and surface tracking process; and determining the location of the platform from the combined feature-based process and the surface tracking process.
  2. The method of claim 1, wherein the feature-based process is based on a Simultaneous Localization and Mapping (SLAM) tracking process.
  3. The method of claim 1, wherein the feature-based process is based on the matchmoving tracking method.
  4. The method of claim 1, wherein the feature-based process is based on a photogrammetric tracking process.
  5. The method of claim 1, wherein the image of the object is captured by a first image capture device and the images of the surface are captured by a second image capture device.
  6. The method of claim 5, further comprising: synchronizing the first image capture device and the second image capture device.
  7. The method of claim 1, further comprising: a GNSS receiver on the platform outputting a location for the platform; and determining the location of the platform by combining the location output by the combined feature-based process and surface tracking process with the location output by the GNSS receiver.
  8. The method of claim 1, further comprising: accessing georeferenced data correlated to the at least one object; and determining the location of the platform based on the georeferenced data for the at least one object.
  9. An image-based method for tracking the location of a platform that includes at least one image capture device, comprising: capturing a first image with the image capture device, the first image comprising at least one object; moving the platform and capturing a second image with the image capture device, the second image comprising the at least one object; processing the plurality of images to track the location of the object using a feature-based process; outputting coordinates for the platform based on the feature-based process; capturing an image of the surface in the first image; capturing a second image of the surface in the second image; processing the plurality of images of the surface using a surface tracking process to track the location of the surface; outputting coordinates for the platform based on the surface tracking process; and determining the location of the platform by processing the coordinates output by the feature-based process and the surface-based process.
  10. The method of claim 9, wherein the feature-based process is based on a Simultaneous Localization and Mapping (SLAM) tracking process.
  11. The method of claim 9, wherein the feature-based process is based on the matchmoving tracking process.
  12. The method of claim 9, wherein the feature-based process is based on a photogrammetric tracking process.
  13. The method of claim 9, wherein the images of the object are captured by a first image capture device and the images of the surface are captured by a second image capture device.
  14. The method of claim 13, further comprising: synchronizing the first image capture device and the second image capture device.
  15. The method of claim 9, further comprising: a GNSS receiver on the platform outputting a location for the platform; and determining the location of the platform by combining the location outputs of the feature-based process, the surface-based process, and the GNSS receiver.
  16. The method of claim 9, further comprising: accessing georeferenced data correlated to the at least one object; and determining the location of the platform based on the georeferenced data for the at least one object.
  17. An image-based method for estimating a location of a platform that includes an image capture device and a GNSS receiver, comprising: determining the location of the platform using a GNSS receiver; capturing at least one image at a first position of the image capture device, wherein the first image comprises at least one object; capturing at least one image at a second position of the image capture device, wherein the second image comprises the at least one object; processing the plurality of images to track a location of the at least one detected object, wherein a set of two-dimensional position determinations for the at least one detected object is obtained from processing the plurality of images, thereby providing a set of tracking data for the at least one object; determining the location of the image capture device by employing the set of tracking data from the at least one object; and calculating a new location of the platform by combining the location of the image capture device with the location determined by the GNSS receiver.
  18. The method of claim 17, further comprising: employing an inverse projection function applied to the set of two-dimensional (2D) tracking data of the at least one detected object to solve for a set of three-dimensional (3D) coordinates of the position of the image capture device.
  19. The method of claim 17, further comprising: obtaining a set of two-dimensional (2D) tracking data for at least three features, wherein the set of two-dimensional (2D) tracking data for the at least three features is configured to determine three-dimensional (3D) position coordinates of the image capture device using a triangulation technique.
  20. The method of claim 17, wherein the platform is selected from the group consisting of: a GNSS rover; a GNSS rover RTK system; and a GIS / mapping handset.
  21. The method of claim 17, further comprising: providing a second image capture device on the platform; capturing at least one image at the first position with the second image capture device, wherein the first image of the second image capture device comprises at least one second object; capturing at least one image at a second position with the second image capture device, wherein the second image of the second image capture device comprises the at least one second object; processing a plurality of images to track the location of the at least one second object, wherein a set of two-dimensional position determinations for the at least one second object is obtained from processing the plurality of images, thereby providing a set of tracking data for the at least one second object; determining the location of the second image capture device by using the set of tracking data of the at least one second object; and calculating a new location of the platform by combining the location of the second image capture device with the location of the image capture device and the location determined by the GNSS receiver.
  22. The method of claim 21, further comprising: determining the weighted feature-based coordinates of the location of the platform.
  23. The method of claim 22, further comprising: assigning a first weight to a set of tracking data of the at least one first object; assigning a second weight to a set of tracking data of the at least one second object; and determining feature-based coordinates of the location of the platform by using the set of weighted tracking data of the at least one first object and the set of weighted tracking data of the at least one second object.
  24. The method of claim 21, further comprising: employing a Kalman filter to obtain a Kalman estimate of the second position of the platform by combining the set of tracking data of the at least one first feature as a first measurement with the set of tracking data of the at least one second feature as a second measurement.
  25. Image-based positioning device, comprising: an image capture device configured to capture a first image that includes at least one feature and a surface; the image capture device is configured to capture a second image comprising a feature and the surface; and a processor configured to process each of the images; wherein the processor is configured to determine the location of the platform by using a combined feature-based process and surface tracking process.
  26. Image-based positioning device, comprising: at least two image capture devices; each of the image capture devices is mounted on a platform; the first image capture device is configured to capture an image comprising at least one feature; the first image capture device is configured to capture a second image comprising at least one feature; the second image capture device is configured to capture an image including a surface; the second image capture device is configured to capture a second image that includes the surface; and a processor configured to process each of the images; wherein the processor is configured to determine the location of the platform using a combination of a feature-based process and a surface-tracking process.
DE112011102132T 2010-06-25 2011-06-23 Method and device for image-based positioning Pending DE112011102132T5 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US35842310P true 2010-06-25 2010-06-25
US61/358,423 2010-06-25
PCT/US2011/041590 WO2011163454A1 (en) 2010-06-25 2011-06-23 Method and apparatus for image-based positioning

Publications (1)

Publication Number Publication Date
DE112011102132T5 true DE112011102132T5 (en) 2013-05-23

Family

ID=45371815

Family Applications (1)

Application Number Title Priority Date Filing Date
DE112011102132T Pending DE112011102132T5 (en) 2010-06-25 2011-06-23 Method and device for image-based positioning

Country Status (5)

Country Link
US (2) US8754805B2 (en)
JP (1) JP6002126B2 (en)
CN (1) CN103119611B (en)
DE (1) DE112011102132T5 (en)
WO (1) WO2011163454A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013020307A1 (en) 2013-12-04 2014-08-14 Daimler Ag Method for determining the position of mobile object in multi-dimensional space, involves decoding emitted encoded signal with movement of object in transmitter unit and determining received positional information of object
DE102018100738A1 (en) 2018-01-15 2019-07-18 Dr. Ing. H.C. F. Porsche Aktiengesellschaft Selective feature extraction

Families Citing this family (140)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9134127B2 (en) 2011-06-24 2015-09-15 Trimble Navigation Limited Determining tilt angle and tilt direction using image processing
US9109889B2 (en) 2011-06-24 2015-08-18 Trimble Navigation Limited Determining tilt angle and tilt direction using image processing
US8503720B2 (en) 2009-05-01 2013-08-06 Microsoft Corporation Human body pose estimation
JP6002126B2 (en) 2010-06-25 2016-10-05 トリンブル ナビゲーション リミテッドTrimble Navigation Limited Method and apparatus for image-based positioning
US20160349057A1 (en) * 2010-10-13 2016-12-01 Elbit Systems Ltd. Multiple data sources pedestrian navigation system
US10168153B2 (en) 2010-12-23 2019-01-01 Trimble Inc. Enhanced position measurement systems and methods
US9182229B2 (en) 2010-12-23 2015-11-10 Trimble Navigation Limited Enhanced position measurement systems and methods
US9879993B2 (en) 2010-12-23 2018-01-30 Trimble Inc. Enhanced bundle adjustment techniques
US8942917B2 (en) 2011-02-14 2015-01-27 Microsoft Corporation Change invariant scene recognition by an agent
WO2013082539A1 (en) * 2011-12-01 2013-06-06 Lightcraft Technology Llc Automatic tracking matte system
WO2013126877A1 (en) * 2012-02-25 2013-08-29 Massachusetts Institute Of Technology Personal skin scanner system
US9019316B2 (en) 2012-04-15 2015-04-28 Trimble Navigation Limited Identifying a point of interest from different stations
US9214021B2 (en) * 2012-10-09 2015-12-15 The Boeing Company Distributed position identification
US9235763B2 (en) 2012-11-26 2016-01-12 Trimble Navigation Limited Integrated aerial photogrammetry surveys
US9857470B2 (en) 2012-12-28 2018-01-02 Microsoft Technology Licensing, Llc Using photometric stereo for 3D environment modeling
US9538336B2 (en) 2012-12-28 2017-01-03 Trimble Inc. Performing data collection based on internal raw observables using a mobile data collection platform
US9945959B2 (en) 2012-12-28 2018-04-17 Trimble Inc. Global navigation satellite system receiver system with radio frequency hardware component
US9639941B2 (en) 2012-12-28 2017-05-02 Trimble Inc. Scene documentation
US9645248B2 (en) 2012-12-28 2017-05-09 Trimble Inc. Vehicle-based global navigation satellite system receiver system with radio frequency hardware component
US9544737B2 (en) 2012-12-28 2017-01-10 Trimble Inc. Performing data collection based on external raw observables using a mobile data collection platform
US9835729B2 (en) 2012-12-28 2017-12-05 Trimble Inc. Global navigation satellite system receiver system with radio frequency hardware component
US9903957B2 (en) 2012-12-28 2018-02-27 Trimble Inc. Global navigation satellite system receiver system with radio frequency hardware component
US9880286B2 (en) 2012-12-28 2018-01-30 Trimble Inc. Locally measured movement smoothing of position fixes based on extracted pseudoranges
US9488736B2 (en) 2012-12-28 2016-11-08 Trimble Navigation Limited Locally measured movement smoothing of GNSS position fixes
US9821999B2 (en) 2012-12-28 2017-11-21 Trimble Inc. External GNSS receiver module with motion sensor suite for contextual inference of user activity
US9467814B2 (en) 2012-12-28 2016-10-11 Trimble Navigation Limited Collecting external accessory data at a mobile data collection platform that obtains raw observables from an external GNSS raw observable provider
US9429640B2 (en) 2012-12-28 2016-08-30 Trimble Navigation Limited Obtaining pseudorange information using a cellular device
US10101465B2 (en) 2012-12-28 2018-10-16 Trimble Inc. Electronic tape measure on a cellphone
US9456067B2 (en) 2012-12-28 2016-09-27 Trimble Navigation Limited External electronic distance measurement accessory for a mobile data collection platform
US9910158B2 (en) 2012-12-28 2018-03-06 Trimble Inc. Position determination of a cellular device using carrier phase smoothing
US9602974B2 (en) 2012-12-28 2017-03-21 Trimble Inc. Dead reconing system based on locally measured movement
US9612341B2 (en) 2012-12-28 2017-04-04 Trimble Inc. GNSS receiver positioning system
US9923626B2 (en) 2014-06-13 2018-03-20 Trimble Inc. Mobile ionospheric data capture system
US9369843B2 (en) 2012-12-28 2016-06-14 Trimble Navigation Limited Extracting pseudorange information using a cellular device
US9462446B2 (en) 2012-12-28 2016-10-04 Trimble Navigation Limited Collecting external accessory data at a mobile data collection platform that obtains raw observables from an internal chipset
US9940553B2 (en) 2013-02-22 2018-04-10 Microsoft Technology Licensing, Llc Camera/object pose from predicted coordinates
US9208382B2 (en) 2013-03-08 2015-12-08 Trimble Navigation Limited Methods and systems for associating a keyphrase with an image
US9043028B2 (en) 2013-03-13 2015-05-26 Trimble Navigation Limited Method of determining the orientation of a machine
US20140267686A1 (en) * 2013-03-15 2014-09-18 Novatel Inc. System and method for augmenting a gnss/ins navigation system of a low dynamic vessel using a vision system
JP2014185996A (en) * 2013-03-25 2014-10-02 Toshiba Corp Measurement device
US9558559B2 (en) 2013-04-05 2017-01-31 Nokia Technologies Oy Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
US9699375B2 (en) 2013-04-05 2017-07-04 Nokia Technology Oy Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
US9999038B2 (en) 2013-05-31 2018-06-12 At&T Intellectual Property I, L.P. Remote distributed antenna system
US9247239B2 (en) 2013-06-20 2016-01-26 Trimble Navigation Limited Use of overlap areas to optimize bundle adjustment
DE102013011969A1 (en) 2013-07-18 2015-01-22 GM Global Technology Operations LLC (n. d. Gesetzen des Staates Delaware) Method for operating a motor vehicle and motor vehicle
US9177384B2 (en) 2013-07-31 2015-11-03 Trimble Navigation Limited Sequential rolling bundle adjustment
US10185034B2 (en) 2013-09-20 2019-01-22 Caterpillar Inc. Positioning system using radio frequency signals
CN105531601B (en) 2013-09-20 2019-02-19 卡特彼勒公司 Positioning system
US9811731B2 (en) * 2013-10-04 2017-11-07 Qualcomm Incorporated Dynamic extension of map data for object detection and tracking
US9470511B2 (en) 2013-11-12 2016-10-18 Trimble Navigation Limited Point-to-point measurements using a handheld device
TWI537580B (en) 2013-11-26 2016-06-11 財團法人資訊工業策進會 Positioning control method
US10037469B2 (en) * 2013-12-10 2018-07-31 Google Llc Image location through large object detection
EP2913796B1 (en) * 2014-02-26 2019-03-27 NavVis GmbH Method of generating panorama views on a mobile mapping system
CN104881860B (en) * 2014-02-28 2019-01-08 国际商业机器公司 The method and apparatus positioned based on photo
US10288738B1 (en) * 2014-04-01 2019-05-14 Rockwell Collins, Inc. Precision mobile baseline determination device and related method
WO2015169338A1 (en) * 2014-05-05 2015-11-12 Hexagon Technology Center Gmbh Surveying system
US9420737B2 (en) * 2014-08-27 2016-08-23 Trimble Navigation Limited Three-dimensional elevation modeling for use in operating agricultural vehicles
KR20160031900A (en) * 2014-09-15 2016-03-23 삼성전자주식회사 Method for capturing image and image capturing apparatus
US10063280B2 (en) 2014-09-17 2018-08-28 At&T Intellectual Property I, L.P. Monitoring and mitigating conditions in a communication network
US9615269B2 (en) 2014-10-02 2017-04-04 At&T Intellectual Property I, L.P. Method and apparatus that provides fault tolerance in a communication network
US9503189B2 (en) 2014-10-10 2016-11-22 At&T Intellectual Property I, L.P. Method and apparatus for arranging communication sessions in a communication system
US9973299B2 (en) 2014-10-14 2018-05-15 At&T Intellectual Property I, L.P. Method and apparatus for adjusting a mode of communication in a communication network
CN104297762B (en) * 2014-10-17 2016-08-24 安徽三联交通应用技术股份有限公司 A kind of drive the mapping method examining subject three ground mapping instrument
US9312919B1 (en) 2014-10-21 2016-04-12 At&T Intellectual Property I, Lp Transmission device with impairment compensation and methods for use therewith
US9769020B2 (en) 2014-10-21 2017-09-19 At&T Intellectual Property I, L.P. Method and apparatus for responding to events affecting communications in a communication network
US9954287B2 (en) 2014-11-20 2018-04-24 At&T Intellectual Property I, L.P. Apparatus for converting wireless signals and electromagnetic waves and methods thereof
US9800327B2 (en) 2014-11-20 2017-10-24 At&T Intellectual Property I, L.P. Apparatus for controlling operations of a communication device and methods thereof
US10243784B2 (en) 2014-11-20 2019-03-26 At&T Intellectual Property I, L.P. System for generating topology information and methods thereof
US9544006B2 (en) 2014-11-20 2017-01-10 At&T Intellectual Property I, L.P. Transmission device with mode division multiplexing and methods for use therewith
US10009067B2 (en) 2014-12-04 2018-06-26 At&T Intellectual Property I, L.P. Method and apparatus for configuring a communication interface
US9876570B2 (en) 2015-02-20 2018-01-23 At&T Intellectual Property I, Lp Guided-wave transmission device with non-fundamental mode propagation and methods for use therewith
CN105989586A (en) * 2015-03-04 2016-10-05 北京雷动云合智能技术有限公司 SLAM method based on semantic bundle adjustment method
US9705561B2 (en) 2015-04-24 2017-07-11 At&T Intellectual Property I, L.P. Directional coupling device and methods for use therewith
US10224981B2 (en) 2015-04-24 2019-03-05 At&T Intellectual Property I, Lp Passive electrical coupling device and methods for use therewith
US9793954B2 (en) 2015-04-28 2017-10-17 At&T Intellectual Property I, L.P. Magnetic coupling device and methods for use therewith
US9871282B2 (en) 2015-05-14 2018-01-16 At&T Intellectual Property I, L.P. At least one transmission medium having a dielectric surface that is covered at least in part by a second dielectric
US9490869B1 (en) 2015-05-14 2016-11-08 At&T Intellectual Property I, L.P. Transmission medium having multiple cores and methods for use therewith
US9917341B2 (en) 2015-05-27 2018-03-13 At&T Intellectual Property I, L.P. Apparatus and method for launching electromagnetic waves and for modifying radial dimensions of the propagating electromagnetic waves
US9912381B2 (en) 2015-06-03 2018-03-06 At&T Intellectual Property I, Lp Network termination and methods for use therewith
US9866309B2 (en) 2015-06-03 2018-01-09 At&T Intellectual Property I, Lp Host node device and methods for use therewith
US9913139B2 (en) 2015-06-09 2018-03-06 At&T Intellectual Property I, L.P. Signal fingerprinting for authentication of communicating devices
US9997819B2 (en) 2015-06-09 2018-06-12 At&T Intellectual Property I, L.P. Transmission medium and method for facilitating propagation of electromagnetic waves via a core
US9820146B2 (en) 2015-06-12 2017-11-14 At&T Intellectual Property I, L.P. Method and apparatus for authentication and identity management of communicating devices
US9865911B2 (en) 2015-06-25 2018-01-09 At&T Intellectual Property I, L.P. Waveguide system for slot radiating first electromagnetic waves that are combined into a non-fundamental wave mode second electromagnetic wave on a transmission medium
US9640850B2 (en) 2015-06-25 2017-05-02 At&T Intellectual Property I, L.P. Methods and apparatus for inducing a non-fundamental wave mode on a transmission medium
US9509415B1 (en) 2015-06-25 2016-11-29 At&T Intellectual Property I, L.P. Methods and apparatus for inducing a fundamental wave mode on a transmission medium
US10205655B2 (en) 2015-07-14 2019-02-12 At&T Intellectual Property I, L.P. Apparatus and methods for communicating utilizing an antenna array and multiple communication paths
US9847566B2 (en) 2015-07-14 2017-12-19 At&T Intellectual Property I, L.P. Method and apparatus for adjusting a field of a signal to mitigate interference
US9853342B2 (en) 2015-07-14 2017-12-26 At&T Intellectual Property I, L.P. Dielectric transmission medium connector and methods for use therewith
US9882257B2 (en) 2015-07-14 2018-01-30 At&T Intellectual Property I, L.P. Method and apparatus for launching a wave mode that mitigates interference
US10044409B2 (en) 2015-07-14 2018-08-07 At&T Intellectual Property I, L.P. Transmission medium and methods for use therewith
US10148016B2 (en) 2015-07-14 2018-12-04 At&T Intellectual Property I, L.P. Apparatus and methods for communicating utilizing an antenna array
US9628116B2 (en) 2015-07-14 2017-04-18 At&T Intellectual Property I, L.P. Apparatus and methods for transmitting wireless signals
US10090606B2 (en) 2015-07-15 2018-10-02 At&T Intellectual Property I, L.P. Antenna system with dielectric array and methods for use therewith
US9871283B2 (en) 2015-07-23 2018-01-16 At&T Intellectual Property I, Lp Transmission medium having a dielectric core comprised of plural members connected by a ball and socket configuration
US9749053B2 (en) 2015-07-23 2017-08-29 At&T Intellectual Property I, L.P. Node device, repeater and methods for use therewith
US9948333B2 (en) 2015-07-23 2018-04-17 At&T Intellectual Property I, L.P. Method and apparatus for wireless communications to mitigate interference
US9912027B2 (en) 2015-07-23 2018-03-06 At&T Intellectual Property I, L.P. Method and apparatus for exchanging communication signals
US9461706B1 (en) 2015-07-31 2016-10-04 At&T Intellectual Property I, Lp Method and apparatus for exchanging communication signals
US9967173B2 (en) 2015-07-31 2018-05-08 At&T Intellectual Property I, L.P. Method and apparatus for authentication and identity management of communicating devices
US9904535B2 (en) 2015-09-14 2018-02-27 At&T Intellectual Property I, L.P. Method and apparatus for distributing software
US9769128B2 (en) 2015-09-28 2017-09-19 At&T Intellectual Property I, L.P. Method and apparatus for encryption of communications over a network
US9876264B2 (en) 2015-10-02 2018-01-23 At&T Intellectual Property I, Lp Communication system, guided wave switch and methods for use therewith
CN105371827A (en) * 2015-10-13 2016-03-02 同创智慧空间(北京)科技有限公司 Full-functional GNSS stereo camera surveying instrument
JP6332699B2 (en) * 2015-10-13 2018-05-30 株式会社amuse oneself Surveying equipment
US10355367B2 (en) 2015-10-16 2019-07-16 At&T Intellectual Property I, L.P. Antenna structure for exchanging wireless signals
US9918204B1 (en) 2015-12-08 2018-03-13 Bentley Systems, Incorporated High accuracy indoor tracking
US10072934B2 (en) 2016-01-15 2018-09-11 Abl Ip Holding Llc Passive marking on light fixture detected for position estimation
US9860075B1 (en) 2016-08-26 2018-01-02 At&T Intellectual Property I, L.P. Method and communication node for broadband distribution
US10312567B2 (en) 2016-10-26 2019-06-04 At&T Intellectual Property I, L.P. Launcher with planar strip antenna and methods for use therewith
US10225025B2 (en) 2016-11-03 2019-03-05 At&T Intellectual Property I, L.P. Method and apparatus for detecting a fault in a communication system
US10178445B2 (en) 2016-11-23 2019-01-08 At&T Intellectual Property I, L.P. Methods, devices, and systems for load balancing between a plurality of waveguides
US9893795B1 (en) 2016-12-07 2018-02-13 At&T Intellectual Property I, Lp Method and repeater for broadband distribution
US10389029B2 (en) 2016-12-07 2019-08-20 At&T Intellectual Property I, L.P. Multi-feed dielectric antenna system with core selection and methods for use therewith
US10168695B2 (en) 2016-12-07 2019-01-01 At&T Intellectual Property I, L.P. Method and apparatus for controlling an unmanned aircraft
US10243270B2 (en) 2016-12-07 2019-03-26 At&T Intellectual Property I, L.P. Beam adaptive multi-feed dielectric antenna system and methods for use therewith
US10359749B2 (en) 2016-12-07 2019-07-23 At&T Intellectual Property I, L.P. Method and apparatus for utilities management via guided wave communication
US10446936B2 (en) 2016-12-07 2019-10-15 At&T Intellectual Property I, L.P. Multi-feed dielectric antenna system and methods for use therewith
US10139820B2 (en) 2016-12-07 2018-11-27 At&T Intellectual Property I, L.P. Method and apparatus for deploying equipment of a communication system
US10103422B2 (en) 2016-12-08 2018-10-16 At&T Intellectual Property I, L.P. Method and apparatus for mounting network devices
US10326689B2 (en) 2016-12-08 2019-06-18 At&T Intellectual Property I, L.P. Method and system for providing alternative communication paths
US9998870B1 (en) 2016-12-08 2018-06-12 At&T Intellectual Property I, L.P. Method and apparatus for proximity sensing
US10389037B2 (en) 2016-12-08 2019-08-20 At&T Intellectual Property I, L.P. Apparatus and methods for selecting sections of an antenna array and use therewith
US10069535B2 (en) 2016-12-08 2018-09-04 At&T Intellectual Property I, L.P. Apparatus and methods for launching electromagnetic waves having a certain electric field structure
US9911020B1 (en) 2016-12-08 2018-03-06 At&T Intellectual Property I, L.P. Method and apparatus for tracking via a radio frequency identification device
US10340983B2 (en) 2016-12-09 2019-07-02 At&T Intellectual Property I, L.P. Method and apparatus for surveying remote sites via guided wave communications
US9838896B1 (en) 2016-12-09 2017-12-05 At&T Intellectual Property I, L.P. Method and apparatus for assessing network coverage
US10264586B2 (en) 2016-12-09 2019-04-16 At&T Mobility Ii Llc Cloud-based packet controller and methods for use therewith
US10458792B2 (en) * 2016-12-15 2019-10-29 Novatel Inc. Remote survey system
US20180180416A1 (en) * 2016-12-23 2018-06-28 Topcon Positioning Systems, Inc. Enhanced remote surveying systems and methods
CN106707287A (en) * 2016-12-23 2017-05-24 浙江大学 Fish school quantity estimation method based on extended Kalman filtering combined with nearest neighbor clustering algorithm
KR20180087947A (en) 2017-01-26 2018-08-03 삼성전자주식회사 Modeling method and modeling apparatus using 3d point cloud
US9973940B1 (en) 2017-02-27 2018-05-15 At&T Intellectual Property I, L.P. Apparatus and methods for dynamic impedance matching of a guided wave launcher
US10298293B2 (en) 2017-03-13 2019-05-21 At&T Intellectual Property I, L.P. Apparatus of communication utilizing wireless network devices
US10097241B1 (en) 2017-04-11 2018-10-09 At&T Intellectual Property I, L.P. Machine assisted development of deployment site inventory
CN106997055B (en) * 2017-05-27 2019-09-24 金华航大北斗应用技术有限公司 Beidou MEO satellite signal fitting method based on least square and gradient descent method
WO2018235923A1 (en) * 2017-06-21 2018-12-27 国立大学法人 東京大学 Position estimating device, position estimating method, and program
WO2019153855A1 (en) * 2018-02-07 2019-08-15 迎刃而解有限公司 Object information acquisition system capable of 360-degree panoramic orientation and position sensing, and application thereof
JP6478305B1 (en) * 2018-07-04 2019-03-06 有限会社ネットライズ Method and apparatus for measuring underground position of sheet pile applying SLAM method
CN109059941B (en) * 2018-07-06 2019-11-29 禾多科技(北京)有限公司 Characteristics map construction method, vision positioning method and corresponding intrument

Family Cites Families (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6195609B1 (en) * 1993-09-07 2001-02-27 Harold Robert Pilley Method and system for the control and management of an airport
US5642285A (en) 1995-01-31 1997-06-24 Trimble Navigation Limited Outdoor movie camera GPS-position and time code data-logging for special effects production
US6282362B1 (en) 1995-11-07 2001-08-28 Trimble Navigation Limited Geographical position/image digital recording and display system
US6147598A (en) 1997-07-03 2000-11-14 Trimble Navigation Limited Vehicle theft system including a handheld computing device
DE10025110C2 (en) 2000-05-20 2003-01-16 Zsp Geodaetische Sys Gmbh Method and device for realizing an information and data flow for geodetic devices
JP4672175B2 (en) * 2000-05-26 2011-04-20 本田技研工業株式会社 Position detection apparatus, position detection method, and position detection program
US7227526B2 (en) * 2000-07-24 2007-06-05 Gesturetek, Inc. Video-based image control system
US7248285B2 (en) 2001-03-30 2007-07-24 Intel Corporation Method and apparatus for automatic photograph annotation
US7009561B2 (en) * 2003-03-11 2006-03-07 Menache, Llp Radio frequency motion tracking system and method
JP4004316B2 (en) 2002-03-20 2007-11-07 株式会社トプコン Surveying device and method for acquiring image data using surveying device
US7162338B2 (en) * 2002-12-17 2007-01-09 Evolution Robotics, Inc. Systems and methods for computing a relative pose for global localization in a visual simultaneous localization and mapping system
US7204596B2 (en) 2003-09-19 2007-04-17 Nec Corporation Projector with tilt angle measuring device
JP4253239B2 (en) 2003-10-07 2009-04-08 富士重工業株式会社 Navigation system using image recognition
US20050209815A1 (en) 2004-03-02 2005-09-22 Russon Virgil K Method, system, and computer-readable medium for user-assignment of geographic data to an image file
US7650013B2 (en) 2004-11-15 2010-01-19 Mobilerobots Inc. System and method for map and position-determination enhancement
US7860301B2 (en) * 2005-02-11 2010-12-28 Macdonald Dettwiler And Associates Inc. 3D imaging system
WO2006099059A2 (en) 2005-03-10 2006-09-21 Witten Technologies, Inc. Method for correcting a 3d location measured by a tracking system assuming a vertical offset
CN101709962B (en) 2005-09-12 2013-07-17 特里伯耶拿有限公司 Surveying instrument and method of providing survey data using a surveying instrument
US9134127B2 (en) 2011-06-24 2015-09-15 Trimble Navigation Limited Determining tilt angle and tilt direction using image processing
US9109889B2 (en) 2011-06-24 2015-08-18 Trimble Navigation Limited Determining tilt angle and tilt direction using image processing
US7541974B2 (en) 2005-12-15 2009-06-02 Trimble Navigation Limited Managed traverse system and method to acquire accurate survey data in absence of precise GPS data
JP4984650B2 (en) * 2006-05-30 2012-07-25 トヨタ自動車株式会社 Mobile device and self-position estimation method of mobile device
CN100451543C (en) 2006-08-22 2009-01-14 邓业灿 Test method of pile skew simulation of straight pile
JP4800163B2 (en) * 2006-09-29 2011-10-26 株式会社トプコン Position measuring apparatus and method
CN101196395A (en) 2006-12-04 2008-06-11 李正才 Measurer for tiny inclination angle
US7719467B2 (en) * 2007-03-08 2010-05-18 Trimble Navigation Limited Digital camera with GNSS picture location determination
EP1970005B1 (en) * 2007-03-15 2012-10-03 Xsens Holding B.V. A system and a method for motion tracking using a calibration unit
JP2008261755A (en) 2007-04-12 2008-10-30 Canon Inc Information processing apparatus and information processing method
JP5380789B2 (en) * 2007-06-06 2014-01-08 ソニー株式会社 Information processing apparatus, information processing method, and computer program
JP2008304269A (en) * 2007-06-06 2008-12-18 Sony Corp Information processor, information processing method, and computer program
JP5184830B2 (en) * 2007-07-06 2013-04-17 株式会社トプコン Position measuring apparatus and position measuring method
US20090024325A1 (en) 2007-07-19 2009-01-22 Scherzinger Bruno M AINS enhanced survey instrument
US20090093959A1 (en) * 2007-10-04 2009-04-09 Trimble Navigation Limited Real-time high accuracy position and orientation system
US8629905B2 (en) 2008-02-12 2014-01-14 Trimble Ab Localization of a surveying instrument in relation to a ground mark
EP2240740B1 (en) 2008-02-12 2014-10-08 Trimble AB Localization of a surveying instrument in relation to a ground mark
DE112008003711T5 (en) 2008-02-22 2010-11-25 Trimble Jena Gmbh Angle measuring device and method
EP2247922B1 (en) 2008-02-29 2015-03-25 Trimble AB Determining coordinates of a target in relation to a survey instrument having at least two cameras
JP4930443B2 (en) * 2008-04-10 2012-05-16 トヨタ自動車株式会社 Map data generation apparatus and map data generation method
US20090262974A1 (en) 2008-04-18 2009-10-22 Erik Lithopoulos System and method for obtaining georeferenced mapping data
CN101567121A (en) 2008-04-24 2009-10-28 深圳富泰宏精密工业有限公司;奇美通讯股份有限公司 Hand-hold mobile electronic device loss detecting system and method
US9857475B2 (en) * 2008-09-09 2018-01-02 Geooptics, Inc. Cellular interferometer for continuous earth remote observation (CICERO)
KR101538775B1 (en) * 2008-09-12 2015-07-30 삼성전자 주식회사 Apparatus and method for localization using forward images
US8442304B2 (en) * 2008-12-29 2013-05-14 Cognex Corporation System and method for three-dimensional alignment of objects using machine vision
US8379929B2 (en) 2009-01-08 2013-02-19 Trimble Navigation Limited Methods and apparatus for performing angular measurements
CN102341812B (en) 2009-01-08 2014-08-27 天宝导航有限公司 Methods and systems for determining angles and locations of points
US7991575B2 (en) 2009-01-08 2011-08-02 Trimble Navigation Limited Method and system for measuring angles based on 360 degree images
US8229166B2 (en) 2009-07-07 2012-07-24 Trimble Navigation, Ltd Image-based tracking
TWI418210B (en) 2010-04-23 2013-12-01 Alpha Imaging Technology Corp Image capture module and image capture method for avoiding shutter lag
JP6002126B2 (en) 2010-06-25 2016-10-05 Trimble Navigation Limited Method and apparatus for image-based positioning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Moritz Köhler, Shwetak N. Patel, Jay W. Summet, Erich P. Stuntebeck and Gregory D. Abowd, Institute for Pervasive Computing, Department of Computer Science, ETH Zürich, 8092 Zürich, Switzerland, "TrackSense: Infrastructure Free Precise Indoor Positioning Using Projected Patterns"
Thomas Lemaire, Cyrille Berger, Il-Kyun Jung and Simon Lacroix, "Vision-Based SLAM: Stereo and Monocular Approaches", International Journal of Computer Vision 74(3), 343-364, 2007

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013020307A1 (en) 2013-12-04 2014-08-14 Daimler Ag Method for determining the position of a mobile object in multi-dimensional space, involves decoding an emitted encoded signal with movement of the object in the transmitter unit and determining the received positional information of the object
DE102018100738A1 (en) 2018-01-15 2019-07-18 Dr. Ing. H.C. F. Porsche Aktiengesellschaft Selective feature extraction

Also Published As

Publication number Publication date
US8754805B2 (en) 2014-06-17
JP2013535013A (en) 2013-09-09
CN103119611B (en) 2016-05-11
WO2011163454A1 (en) 2011-12-29
JP6002126B2 (en) 2016-10-05
CN103119611A (en) 2013-05-22
US9683832B2 (en) 2017-06-20
US20140267700A1 (en) 2014-09-18
US20120163656A1 (en) 2012-06-28

Similar Documents

Publication Publication Date Title
Robertson et al. An Image-Based System for Urban Navigation.
US6664529B2 (en) 3D multispectral lidar
KR101750469B1 (en) Hybrid photo navigation and mapping
KR100912715B1 (en) Method and apparatus of digital photogrammetry by integrated modeling for different types of sensors
US8107722B2 (en) System and method for automatic stereo measurement of a point of interest in a scene
US8938257B2 (en) Logo detection for indoor positioning
Pfeifer et al. Geometrical aspects of airborne laser scanning and terrestrial laser scanning
Zhao et al. A vehicle-borne urban 3-D acquisition system using single-row laser range scanners
US9400941B2 (en) Method of matching image features with reference features
US9953438B2 (en) Image annotation on portable devices
US9109889B2 (en) Determining tilt angle and tilt direction using image processing
Werner et al. Indoor positioning using smartphone camera
US8031933B2 (en) Method and apparatus for producing an enhanced 3D model of an environment or an object
US8989502B2 (en) Image-based georeferencing
US9602974B2 (en) Dead reconing system based on locally measured movement
Vosselman Fusion of laser scanning data, maps, and aerial photographs for building reconstruction
Tao Mobile mapping technology for road network data acquisition
US8229166B2 (en) Image-based tracking
DE112011100458T5 (en) Systems and methods for processing mapping and modeling data
Goncalves et al. A visual front-end for simultaneous localization and mapping
TWI483215B (en) Augmenting image data based on related 3d point cloud data
US9910158B2 (en) Position determination of a cellular device using carrier phase smoothing
US8897541B2 (en) Accurate digitization of a georeferenced image
US9880286B2 (en) Locally measured movement smoothing of position fixes based on extracted pseudoranges
US9465129B1 (en) Image-based mapping locating system

Legal Events

Date Code Title Description
R012 Request for examination validly filed