WO2014126671A1 - Camera aided motion direction and speed estimation - Google Patents
- Publication number
- WO2014126671A1 (PCT/US2014/011687)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- mobile device
- images
- motion
- misalignment angle
- instructions
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
Definitions
- the present disclosure relates to the field of wireless communications.
- the present disclosure relates to determining position characteristics of a mobile device.
- Various mobile device applications leverage knowledge of the position of the device.
- the position of a mobile device is identified via motion tracking with respect to the device.
- motion direction is determined using the orientation of the device sensors in relation to the direction of forward motion.
- the angle between the orientation of the mobile device and the forward motion direction is referred to as the alignment angle or misalignment angle.
- the orientation of the mobile device may change frequently, which may in turn change the misalignment angle of the mobile device frequently, and may adversely affect the user experience of such mobile applications.
- a method of determining position characteristics of a mobile device comprises capturing a plurality of images that represent views from the mobile device, adjusting perspectives of the plurality of images based at least in part on an orientation of the mobile device, determining a misalignment angle with respect to a direction of motion of the mobile device using the plurality of images, and storing the misalignment angle and the direction of motion in a storage device.
- the method further comprises applying the misalignment angle and a confidence of the misalignment angle to navigate a user of the mobile device.
- the method of adjusting perspectives of the plurality of images comprises at least one of: adjusting perspectives of the plurality of images based on the orientation of the mobile device calculated using data collected from one or more sensors; compensating for perspectives of the plurality of images using an area near centers of the plurality of images; and compensating for perspectives of the plurality of images based at least in part on a weighted average of locations of features in the plurality of images.
- the method of determining the misalignment angle comprises tracking features from the plurality of images, estimating direction of motion of the mobile device, estimating the orientation of the mobile device using sensor data, and determining the misalignment angle based at least in part on the direction of motion and the orientation of the mobile device.
- the method of tracking features from the plurality of images comprises rejecting outliers in features of the plurality of images to eliminate at least one moving object in the plurality of images.
- the method further comprises at least one of: determining a confidence of the misalignment angle with respect to the direction of motion of the mobile device using information provided by a gyroscope of the mobile device, determining the confidence of the misalignment angle with respect to the direction of motion of the mobile device using information provided by a magnetometer of the mobile device, and determining the confidence of the misalignment angle with respect to the direction of motion of the mobile device using features of the plurality of images.
- the method further comprises determining a speed estimation of the mobile device, determining a confidence of the speed estimation of the mobile device, and applying the speed estimation and the confidence of the speed estimation to navigate a user of the mobile device.
- the method of determining a speed estimate comprises extracting features from the plurality of images, computing an average displacement of the mobile device using the features from the plurality of images, and computing the speed estimate based at least in part on the average displacement of the mobile device.
- the method of computing the speed estimate comprises comparing the features from the plurality of images, determining a separation of pixels between two consecutive images in the plurality of images, determining a time interval between the two consecutive images, and calculating the speed estimate of the mobile device in accordance with the separation of pixels and the time interval between the two consecutive images.
- the method further comprises calibrating a height of the mobile device according to at least one of GPS location information and WIFI location information of the mobile device.
- an apparatus comprises a control unit including processing logic, where the processing logic comprises logic configured to capture a plurality of images that represent views from the mobile device, logic configured to adjust perspectives of the plurality of images based at least in part on an orientation of the mobile device, logic configured to determine a misalignment angle with respect to a direction of motion of the mobile device using the plurality of images, and logic configured to store the misalignment angle and the direction of motion in a storage device.
- a computer program product comprising non-transitory medium storing instructions for execution by one or more computer systems, the instructions comprise instructions for capturing a plurality of images that represent views from the mobile device, instructions for adjusting perspectives of the plurality of images based at least in part on an orientation of the mobile device, instructions for determining a misalignment angle with respect to a direction of motion of the mobile device using the plurality of images, and instructions for storing the misalignment angle and the direction of motion in a storage device.
- a system comprises means for capturing a plurality of images that represent views from the mobile device, means for adjusting perspectives of the plurality of images based at least in part on an orientation of the mobile device, means for determining a misalignment angle with respect to a direction of motion of the mobile device using the plurality of images, and means for storing the misalignment angle and the direction of motion in a storage device.
- FIG. 1 A illustrates an example of misalignment angle with respect to direction of motion of a mobile device according to some aspects of the present disclosure.
- FIG. 1B illustrates a side view of the mobile device 102 of FIG. 1A.
- FIG. 2A - 2C illustrate a method of performing perspective compensation according to some aspects of the present disclosure.
- FIG. 3A and FIG. 3B illustrate a method of detecting and removing moving objects according to some aspects of the present disclosure.
- FIG. 4 illustrates a block diagram of an exemplary mobile device according to some aspects of the present disclosure.
- FIG. 5A illustrates a method of estimating speed and misalignment angle of a mobile device according to some aspects of the present disclosure.
- FIG. 5B illustrates a method of estimating speed of a mobile device according to some aspects of the present disclosure.
- FIG. 6 illustrates an exemplary method of determining position characteristics of a mobile device according to some aspects of the present disclosure.
- FIG. 7A illustrates a method of monitoring motion, misalignment angle and confidence associated with the misalignment angle for a mobile device over a period of time according to some aspects of the present disclosure.
- FIG. 7B illustrates a graph showing changes to an average misalignment angle of a mobile device over a period of time according to some aspects of the present disclosure.
- FIG. 1 A illustrates an example of misalignment angle with respect to direction of motion of a mobile device according to some aspects of the present disclosure.
- direction of motion 101 of a mobile device 102 may be different from the mobile device orientation 103.
- the misalignment angle 105 is the angle between the mobile device orientation 103 and the direction of motion 101. Note that in some publications, the misalignment angle 105 may also be referred to as an alignment angle. The ability to predict the misalignment angle 105 can be useful for pedestrian navigation applications.
- the mobile device may include one or more camera(s) 108 and a display 112.
- one or more cameras 108 of the mobile device 102 may be configured to capture image frames for determining the misalignment angle 105.
- the image captured may be shown to a user in display 112.
- FIG. 1B illustrates a side view of the mobile device 102 of FIG. 1A.
- the mobile device 102 may be configured to use either front camera(s) 108a (located in the front side of the mobile device) or back camera(s) 108b (located in the back side of the mobile device) to capture image frames.
- the front camera(s) 108a can be configured to capture the field of view of an area above the mobile device 102
- the back camera(s) 108b can be configured to capture the field of view of an area below the mobile device 102.
- the mobile device 102 may be configured to use both the front camera(s) 108a and the back camera(s) 108b to capture image frames in both front view and back view of the mobile device 102.
- features on the floor or on the ceiling may be gathered to estimate the direction of motion of the mobile device 102 with respect to the mobile device orientation 103.
- both of the front camera(s) 108a and the back camera(s) 108b may be used in parallel.
- errors caused by the two different perspectives in the front camera(s) 108a and the back camera(s) 108b may have opposite signs and may be compensated because perspectives of the front camera(s) 108a and the back camera(s) 108b are oriented 180 degrees apart from each other.
- either camera may be chosen based on which field of view has more features which are easier to track and based on which field of view has fewer moving objects.
- the mobile device 102 may be configured to use metrics to reject outliers since the image frames might contain features of moving parts. For example, one source of such moving parts may be the feet of the user. Another source of such moving parts may be the head of the user.
- FIG. 2A - 2C illustrate a method of performing perspective compensation according to some aspects of the present disclosure.
- in FIG. 2A, the arrows (202a, 204a, 206a and 208a) illustrate an asymmetric distribution of features due to a possible camera pitch in a pedestrian navigation application. Such an asymmetric distribution of features may lead to an incorrect estimate of the direction of motion of the mobile device 102.
- FIG. 2B illustrates the same asymmetric distribution of features (represented by the arrows) due to camera pitch in a pedestrian navigation application without the image background. The corresponding arrows are shown as 202b, 204b, 206b and 208b.
- FIG. 2C illustrates the perspective-adjusted distribution of features, represented by the corresponding arrows.
- the mobile device 102 may be configured to perform perspective correction based on the angle of the mobile device to vertical, as calculated using sensors such as a magnetometer, an accelerometer, and a gyroscope.
- the mobile device 102 may be configured to use features near the center of an image frame in computing direction of motion of the mobile device 102.
- a weighted average of features based on location (for example, more weight for features near the center) may be employed in computing the direction of motion of the mobile device 102.
- features along the center of an image, represented by arrow 202a, may be assigned a weight of 1 (100%), features represented by arrow 204a may be assigned a weight of .8 (80%), features represented by arrow 206a may be assigned a weight of .6 (60%), features represented by arrow 208a may be assigned a weight of .4 (40%), and so on.
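- a minimal sketch of such center-weighted averaging of per-feature flow vectors is shown below; the Gaussian falloff, the sigma_frac parameter, and the function name are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def weighted_motion_direction(points, flows, image_size, sigma_frac=0.35):
    """Average per-feature flow vectors, weighting features near the image center
    more heavily, and return the mean flow and its angle in degrees."""
    h, w = image_size
    center = np.array([w / 2.0, h / 2.0])
    # Normalized distance of each feature from the image center.
    dist = np.linalg.norm(points - center, axis=1) / (0.5 * np.hypot(w, h))
    # Smooth falloff: features near the center get weight ~1, edge features much less.
    weights = np.exp(-(dist ** 2) / (2.0 * sigma_frac ** 2))
    mean_flow = np.average(flows, axis=0, weights=weights)
    return mean_flow, np.degrees(np.arctan2(mean_flow[1], mean_flow[0]))
```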
- identifying and tracking features in image frames may be performed using a number of techniques.
- a method of identifying features may be performed by examining the minimum eigenvalue of each 2 by 2 gradient matrix. Then the features are tracked using a Newton-Raphson method of minimizing the difference between the two windows.
- the method of multi-resolution tracking allows for relatively large displacements between images. Note that during tracking of features from one frame to the next frame, errors may accumulate.
- the mobile device 102 may be configured to monitor whether the image signal in the window around the feature in the current frame is still similar to the image signal around the feature in the previous frame. Since features may be tracked over many frames, the image content may be deformed. To address this issue, a consistency check may be performed with a similarity or an affine mapping.
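- one possible detect-and-track loop of this kind is sketched below using OpenCV, where cv2.goodFeaturesToTrack applies the minimum-eigenvalue criterion and cv2.calcOpticalFlowPyrLK performs iterative multi-resolution tracking; the parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def track_features(prev_gray, curr_gray):
    """Detect good features in the previous frame and track them into the current frame."""
    # Minimum-eigenvalue (Shi-Tomasi) corner detection on the previous frame.
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=8)
    if prev_pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    # Pyramidal Lucas-Kanade tracking (iterative, multi-resolution).
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts,
                                                      None, winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return prev_pts[good].reshape(-1, 2), curr_pts[good].reshape(-1, 2)
```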
- points on the object may be extracted to provide feature descriptions (also referred to as keypoints, feature points or features for short) of the object.
- This description, extracted from a training image may then be used to identify the object when attempting to locate the object in a test image containing many other objects.
- the features extracted from the training image may be detectable even under changes in image scale, noise and illumination. Such points usually lie on high-contrast regions of the image, such as object edges.
- SIFT detects and uses a larger number of features from the images, which can reduce the contribution of the errors caused by the local variations in the average error of all feature matching errors.
- the disclosed method may identify objects even among clutter and under partial occlusion because the SIFT feature descriptor can be invariant to uniform scaling and orientation, and partially invariant to affine distortion and illumination changes.
- keypoints of an object may first be extracted from a set of reference images and stored in a database.
- An object is recognized in a new image by comparing each feature from the new image to this database and finding candidate matching features based on Euclidean distance of their feature vectors. From the full set of matches, subsets of keypoints that agree on the object and its location, scale, and orientation in the new image may be identified to filter out good matches.
- the determination of consistent clusters may be performed by using a hash table implementation of the generalized Hough transform.
- Each cluster of 3 or more features that agree on an object and its pose may then be subject to further detailed model verification and subsequently outliers may be discarded.
- the probability that a particular set of features indicates the presence of an object may then be computed based on the accuracy of fit and number of probable false matches. Object matches that pass the tests can be identified as correct with high confidence.
- image feature generation transforms an image into a large collection of feature vectors, each of which may be invariant to image translation, scaling, and rotation, as well as invariant to illumination changes and robust to local geometric distortion.
- Key locations may be defined as maxima and minima of the result of difference of Gaussians function applied in scale space to a series of smoothed and resampled images. Low contrast candidate points and edge response points along an edge may be discarded.
- Dominant orientations are assigned to localized keypoints. This approach ensures that the keypoints are more stable for matching and recognition. SIFT descriptors robust to local affine distortion may then be obtained by considering pixels around a radius of the key location, blurring and resampling of local image orientation planes.
- features matching and indexing may include storing SIFT keys and identifying matching keys from the new image.
- a modification of the k-d tree algorithm, also referred to as the best-bin-first search method, may be used to identify the nearest neighbors with high probability using a limited amount of computation.
- the best-bin-first algorithm uses a modified search ordering for the k-d tree algorithm so that bins in feature space may be searched in the order of their closest distance from the query location. This search order requires the use of a heap-based priority queue for efficient determination of the search order.
- the best candidate match for each keypoint may be found by identifying its nearest neighbor in the database of keypoints from training images.
- the nearest neighbors can be defined as the keypoints with minimum Euclidean distance from the given descriptor vector.
- the probability that a match is correct can be determined by taking the ratio of distance from the closest neighbor to the distance of the second closest.
- matches in which the distance ratio is greater than 0.8 may be rejected, which eliminates 90% of the false matches while discarding less than 5% of the correct matches.
- the search may be cut off after checking a predetermined number (for example, 100) of nearest neighbor candidates. For a database of 100,000 keypoints, this may provide a speedup over exact nearest neighbor search by about 2 orders of magnitude, yet results in less than a 5% loss in the number of correct matches.
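- a sketch of the distance-ratio filtering is shown below using OpenCV SIFT descriptors; a brute-force matcher is used here for simplicity, whereas a FLANN k-d tree matcher would approximate the best-bin-first search described above. Only the 0.8 threshold comes from the text; the function name and usage are assumptions.

```python
import cv2

def ratio_test_matches(desc_query, desc_train, ratio=0.8):
    """Keep only matches whose nearest/second-nearest distance ratio is below `ratio`."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_query, desc_train, k=2)
    return [m for m, n in (p for p in knn if len(p) == 2)
            if m.distance < ratio * n.distance]

# Usage sketch: compute SIFT keypoints/descriptors for two images, then filter matches.
# sift = cv2.SIFT_create()
# kp1, d1 = sift.detectAndCompute(img1, None)
# kp2, d2 = sift.detectAndCompute(img2, None)
# good = ratio_test_matches(d1, d2)
```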
- the Hough Transform may be used to cluster reliable model hypotheses to search for keys that agree upon a particular model pose.
- Hough transform may be used to identify clusters of features with a consistent interpretation by using each feature to vote for object poses that may be consistent with the feature. When clusters of features are found to vote for the same pose of an object, the probability of the interpretation being correct may be higher than for any single feature.
- An entry in a hash table may be created to predict the model location, orientation, and scale from the match hypothesis. The hash table can be searched to identify clusters of at least 3 entries in a bin, and the bins may be sorted into decreasing order of size.
- each of the SIFT keypoints may specify 2D location, scale, and orientation.
- each matched keypoint in the database may have a record of its parameters relative to the training image in which it is found.
- the similarity transform implied by these 4 parameters may be an approximation to the 6 degree-of-freedom pose space for a 3D object and also does not account for any non-rigid deformations. Therefore, an exemplary implementation may use broad bin sizes for the pose dimensions when clustering matches.
- each keypoint match may vote for the 2 closest bins in each dimension, giving a total of 16 entries for each hypothesis and further broadening the pose range.
- FIG. 3A and FIG. 3B illustrate a method of detecting and removing moving objects according to some aspects of the present disclosure.
- FIG. 3A illustrates an image frame 302 that includes user's feet 304a and 304b as moving objects.
- FIG. 3B illustrates another image frame 306 that includes user's face 308 as a moving object.
- the user's feet 304a and 304b and the user's face 308 may be considered outliers, and features of such outliers may be removed from the captured image frames according to the methods described in the following sections.
- outliers may be removed by checking for agreement between each image feature and the model, for a given parameter solution. For example, given a linear least squares solution, each match may be required to agree within half the error range that is used for the parameters in the Hough transform bins. As outliers are discarded, the linear least squares solution may be resolved with the remaining points, and the process may be iterated. In some implementations, if less than a predetermined number of points (e.g. 3 points) remain after discarding outliers, the match may be rejected. In addition, a top-down matching phase may be used to add any further matches that agree with the projected model position, which may have been missed from the Hough transform bin due to the similarity transform approximation or other errors.
- the decision to accept or reject a model hypothesis can be based on a detailed probabilistic model.
- the method first computes an expected number of false matches to the model pose, given the projected size of the model, the number of features within the region, and the accuracy of the fit.
- a Bayesian probability analysis can then give the probability that the object may be present based on the actual number of matching features found.
- a model may be accepted if the final probability for a correct interpretation is greater than a predetermined percentage (for example 95%).
- a rotation-invariant feature transform (RIFT) method may be employed as a rotation-invariant generalization of SIFT to address clutter or partial occlusion situations.
- the RIFT descriptor may be constructed using circular normalized patches divided into concentric rings of equal width and within each ring a gradient orientation histogram may be computed. To maintain rotation invariance, the orientation may be measured at each point relative to the direction pointing outward from the center.
- a generalized robust invariant feature (G-RIF) method may alternatively be employed.
- the G-RIF encodes edge orientation, edge density and hue information in a unified form combining perceptual information with spatial encoding.
- the object recognition scheme uses neighboring context based voting to estimate object models.
- a speeded up robust feature (SURF) method may be used which uses a scale and rotation-invariant interest point detector / descriptor that can outperform previously proposed schemes with respect to repeatability, distinctiveness, and robustness.
- SURF relies on integral images for image convolutions to reduce computation time, and builds on the strengths of the leading existing detectors and descriptors (using a fast Hessian matrix-based measure for the detector and a distribution-based descriptor).
- the SURF method describes a distribution of Haar wavelet responses within the interest point neighborhood. Integral images may be used for speed, and 64 dimensions may be used to reduce the time for feature computation and matching.
- the indexing step may be based on the sign of the Laplacian, which increases the matching speed and the robustness of the descriptor.
- the PCA-SIFT descriptor is a vector of image gradients in x and y direction computed within the support region.
- the gradient region can be sampled at 39x39 locations.
- the vector can be of dimension 3042.
- the dimension can be reduced to 36 with PCA.
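- a minimal sketch in the spirit of PCA-SIFT follows: x/y gradients of a 41x41 support patch sampled on the interior 39x39 grid give a 3042-dimensional vector, which PCA reduces to 36 dimensions. The patch size bookkeeping, normalization, and use of scikit-learn are assumptions, and a training set of at least 36 patches is assumed for fitting the projection.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_sift_descriptors(patches, n_components=36):
    """Reduced gradient descriptors: 2 * 39 * 39 = 3042 dims projected to 36 with PCA."""
    vectors = []
    for p in patches:                      # each patch assumed 41x41 grayscale
        gy, gx = np.gradient(p.astype(np.float32))
        v = np.concatenate([gx[1:-1, 1:-1].ravel(), gy[1:-1, 1:-1].ravel()])
        vectors.append(v / (np.linalg.norm(v) + 1e-9))
    vectors = np.array(vectors)
    pca = PCA(n_components=n_components).fit(vectors)
    return pca.transform(vectors), pca
```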
- the Gradient location-orientation histogram (GLOH) method can be employed, which is an extension of the SIFT descriptor designed to increase its robustness and distinctiveness.
- the SIFT descriptor can be computed for a log-polar location grid with three bins in radial direction (the radius set to 6, 11, and 15) and 8 in angular direction, which results in 17 location bins.
- the central bin is not divided in angular directions.
- the gradient orientations may be quantized in 16 bins, resulting in a 272-bin histogram.
- the size of this descriptor can be reduced with PCA.
- the covariance matrix for PCA can be estimated on image patches collected from various images. The 128 largest eigenvectors may then be used for description.
- an object recognition algorithm may be employed that is designed to work within the limitations of current mobile devices.
- the Features from Accelerated Segment Test (FAST) corner detector can be used for feature detection.
- This approach distinguishes between the off-line preparation phase where features may be created at different scale levels and the on-line phase where features may be created at a current fixed scale level of the mobile device's camera image.
- features may be created from a predetermined fixed patch size (for example 15x15 pixels) and form a SIFT descriptor with 36 dimensions.
- the approach can be further extended by integrating a Scalable Vocabulary Tree in the recognition pipeline. This allows an efficient recognition of a larger number of objects on mobile devices.
- the detection and description of local image features can help in object recognition.
- the SIFT features can be local and based on the appearance of the object at particular interest points, and may be invariant to image scale and rotation. They may also be robust to changes in illumination, noise, and minor changes in viewpoint. In addition to these properties, the features may be highly distinctive, relatively easy to extract and allow for correct object identification with low probability of mismatch.
- the features can be relatively easy to match against a (large) database of local features, and generally probabilistic algorithms such as k-dimensional (k-d) trees with best-bin-first search may be used.
- Object descriptions by a set of SIFT features may also be robust to partial occlusion. For example, as few as 3 SIFT features from an object may be sufficient to compute its location and pose. In some implementations, recognition may be performed in quasi real time, for small databases and on modern computer hardware.
- the random sample consensus (RANSAC) technique may be employed to remove outliers caused by moving objects in view of the camera.
- RANSAC uses an iterative method to estimate parameters of a mathematical model from a set of observed data which contains outliers. This method is non-deterministic in that it produces a reasonable result with an associated probability, where the probability may increase as more iterations are performed.
- the inputs can include a set of observed data values, a parameterized model which can be fitted to the observations, and corresponding confidence parameters.
- the method iteratively selects a random subset of the original data. These data are treated as hypothetical inliers, and this hypothesis may then be tested as follows:
- 1. a model is fitted to the hypothetical inliers, i.e. all free parameters of the model are reconstructed from the inliers.
- 2. all other data are then tested against the fitted model, and points that fit the estimated model well are also classified as hypothetical inliers.
- 3. the estimated model can be considered acceptable if a sufficient number of points have been classified as hypothetical inliers.
- 4. the model is re-estimated from all hypothetical inliers, because it has only been estimated from the initial set of hypothetical inliers.
- 5. the model is evaluated by estimating the error of the inliers relative to the model.
- the above procedure can be repeated for a predetermined number of times, each time producing either a model which may be rejected because too few points are classified as inliers or a refined model together with a corresponding error measure. In the latter case, the refined model is kept if the error is lower than the previously saved model.
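- the iterative procedure above can be sketched as follows for a simple translation model between matched feature sets; the model choice, thresholds, iteration count, and function name are assumptions made for illustration only.

```python
import numpy as np

def ransac_translation(src, dst, iters=100, inlier_thresh=3.0, min_inliers=8):
    """Minimal RANSAC sketch: estimate a 2D translation between matched point sets
    while rejecting outliers such as features on moving objects."""
    best_model, best_inliers, best_err = None, None, np.inf
    n = len(src)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.choice(n, size=2, replace=False)            # random minimal subset
        model = np.mean(dst[idx] - src[idx], axis=0)          # fit to hypothetical inliers
        residuals = np.linalg.norm((src + model) - dst, axis=1)
        inliers = residuals < inlier_thresh                   # classify remaining points
        if inliers.sum() >= min_inliers:                      # enough support?
            refined = np.mean(dst[inliers] - src[inliers], axis=0)   # re-estimate from all inliers
            err = np.mean(np.linalg.norm((src[inliers] + refined) - dst[inliers], axis=1))
            if err < best_err:                                # keep the better refined model
                best_model, best_inliers, best_err = refined, inliers, err
    return best_model, best_inliers
```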
- moving objects in view of the camera can be actively identified and removed using a model based motion tracking method.
- the objective of tracking can be treated as a problem of model recognition.
- a binary representation of the target can be tracked, and a Hausdorff distance based search is used to search regions of the image for the object.
- output from the standard Canny edge detector of the Gaussian smoothed image is augmented with the notion of a model history.
- a Hausdorff search can be performed on each target, using the Canny edges from the current image and the current model.
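- scoring one candidate region with the Hausdorff distance between model edges and image Canny edges could look like the sketch below; in a full tracker this score would be evaluated over a search region around the target. The symmetric formulation and the use of SciPy are assumptions.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_score(model_edges, image_edges):
    """Symmetric Hausdorff distance between two binary edge maps; lower is a better match."""
    a = np.column_stack(np.nonzero(model_edges))   # (row, col) coordinates of model edge pixels
    b = np.column_stack(np.nonzero(image_edges))
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
```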
- an affine estimation may be performed to approximate the net background motion.
- information can be gathered about the target, and be used to approximate the motion of the target, as well as separate the background from motion in the region of the target.
- history data about the target may be retained, such as the target's past motion and size change, characteristic views of the target (snapshots throughout time that provide an accurate representation of the different ways the target has been tracked), and match qualities in the past.
- the history of tracking the target can be useful for more than just aiding hazard/unusual conditions; a solid motion tracking method can involve history data, and not just a frame-by-frame method of motion comparison.
- This history state can provide information regarding how to decide what should be considered part of the target (e.g. things moving close to the object moving at the same speed should be incorporated into the object), and with information about motion and size, the method can predictively estimate where a lost object may have gone, or where it might reappear (which has been useful in recovering targets that leave the frame and reappear later in time).
- An inherent challenge in the motion tracking method may be caused by the fact that the camera can have an arbitrary movement (as opposed to a stationary camera), which makes developing a tracking system that can handle unpredictable changes in camera motion difficult.
- a computationally efficient affine background estimation scheme may be used to provide information as to the motion of the camera and scene.
- an affine transformation can be performed from the image at time t to the image at time t+dt, which allows correlating the motion in the two images.
- This background information allows the method to synthesize an image at time t+dt from the image at time t and the affine transform that can be an approximation of the net scene motion.
- This synthesized image can be useful in generating new model information and removing background clutter from the model space, because a difference of the actual image at t+dt and the generated image at t+dt can be taken to remove image features from the space surrounding targets.
- in addition to the use of the affine transform as a tool to clean up the search space, it can also be used to normalize the coordinate movement of the targets: by having a vector to track how the background may be moving, and a vector to track how the target may be moving, a difference of the two vectors may be taken to generate a vector that describes the motion of the target with respect to the background.
- this vector allows the method to predictively match where the target should be, and to anticipate hazard conditions (for example, looking ahead in the direction of the motion can provide clues about upcoming obstacles), as well as keeping track of where the object may be in case of a hazard condition.
- the method may still be able to estimate the background motion, and use that coupled with the knowledge of the model's previous movements to guess where the model may reappear, or re-enter the frame.
- the background estimation has been a key factor in the prolonged tracking of objects. Note that short term tracking may be performed without background estimation, but after a period of time, object distortion and hazards may be difficult to cope with effectively without a good estimation of the background.
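- the background-compensation step described above can be sketched as follows: a robust affine fit over tracked background features approximates the net scene motion, and subtracting the induced background motion from the raw target motion yields the target's motion relative to the background. The OpenCV-based formulation, the input layout (float32 Nx2 matched point arrays), and the function name are assumptions.

```python
import cv2
import numpy as np

def target_motion_relative_to_background(prev_pts, curr_pts, target_prev, target_curr):
    """Subtract estimated background motion from raw target motion."""
    # Affine transform from frame t to frame t+dt estimated from tracked background features.
    affine, _inliers = cv2.estimateAffinePartial2D(prev_pts, curr_pts, method=cv2.RANSAC)
    # Background-induced motion at the target's previous position.
    bg_motion = (affine[:, :2] @ target_prev + affine[:, 2]) - target_prev
    raw_motion = target_curr - target_prev
    return raw_motion - bg_motion
```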
- one of the advantages of using the Hausdorff distance as a matching operator is that it can be quite tolerant of changes in shape during matching, but using the Hausdorff distance as a matching operator may require the objects being tracked be more accurately defined.
- the mobile device can be configured to perform smoothing/feature extraction, Hausdorff matching each target (for example one match per model), as well as affine background estimation. Each of these operations can be quite computationally expensive individually. In order to achieve real-time performance on a mobile device, the design can be configured to use as much parallelism as possible.
- FIG. 4 illustrates a block diagram of an exemplary mobile device according to some aspects of the present disclosure.
- the mobile device 102 includes camera(s) 108 for capturing images of the environment, which may be either individual photos or frames of video.
- the mobile device 102 may also include sensors 116, which may be used to provide data with which the mobile device 102 can determine its position and orientation, i.e., pose. Examples of sensors that may be used with the mobile device 102 include accelerometers, quartz sensors, gyros, micro-electromechanical system (MEMS) sensors used as linear accelerometers, as well as magnetometers.
- the mobile device 102 may also include a user interface 110 that includes display 112 capable of displaying images.
- the user interface 110 may also include a keypad 114 or other input device through which the user can input information into the mobile device 102. If desired, the keypad 114 may be obviated by integrating a virtual keypad into the display 112 with a touch sensor.
- the user interface 110 may also include a microphone 117 and one or more speakers 118, for example, if the mobile device is a cellular telephone.
- mobile device 102 may include other components unrelated to the present disclosure.
- the mobile device 102 further includes a control unit 120 that is connected to and communicates with the camera(s) 108 and sensors 116, as well as the user interface 110, along with any other desired features.
- the control unit 120 may be provided by one or more processors 122 and associated memory/storage 124.
- the control unit 120 may also include software 126, as well as hardware 128, and firmware 130.
- the control unit 120 includes a misalignment angle computation module 132 configured to compute the misalignment angle between the orientation and the direction of motion of the mobile device 102.
- the control unit further includes motion direction tracking module 133 configured to track the direction of motion of the mobile device 102 (which may indicate the direction of motion of the user).
- the control unit 120 further includes speed computation module 134 configured to compute the speed of the mobile device 102 (which may indicate the speed of the user).
- the misalignment angle computation module 132, the motion direction tracking module 133, and the speed computation module 134 are illustrated separately from processor 122 and/or hardware 128 for clarity, but may be combined and/or implemented in the processor 122 and/or hardware 128 based on instructions in the software 126 and the firmware 130.
- FIG. 5A illustrates a method of estimating speed and misalignment angle of a mobile device according to some aspects of the present disclosure.
- the method starts in block 502, and thereafter moves to block 504 where the method collects sensor data.
- the sensor data may include image frames captured by one or more cameras 108a and 108b as described in FIG. 1B.
- the sensor data may also include data collected by accelerometer, gyroscope, magnetometer, and/or by a wireless receiver of the mobile device 102.
- the method extracts features from the captured image frames.
- the method performs perspective compensation. According to aspects of the present disclosure, examples of performing perspective compensation are provided in association with the descriptions of FIG. 2A - FIG. 2C.
- the method compares features between two consecutive image frames to determine separations in pixels between the two consecutive image frames. According to aspects of the present disclosure, methods of extracting and comparing features in image frames are provided in association with the descriptions of FIG. 2A - FIG. 2C. In block 512, the method computes magnitude and angle of displacement based at least in part on the comparison of features between image frames performed in block 510.
- the method computes distance traversed between image frames.
- the method computes speed of the mobile device according to the methods described in association with FIG. 5B.
- the method determines direction of motion of the mobile device based at least in part on the angle of displacement determined in block 512.
- the method determines misalignment angle between the orientation and direction of motion of the mobile device based at least in part on the direction of motion determined in block 518 and the orientation determined by data obtained by one or more sensors of the mobile device 102. The method ends in block 520.
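- a condensed sketch of blocks 510-518 is shown below: the average feature displacement between consecutive frames gives the displacement magnitude and angle, from which a speed and a camera-derived motion direction follow. The mapping of the device's forward axis onto the image axes and the meters-per-pixel scale factor are assumptions; the disclosure does not fix these conventions.

```python
import numpy as np

def displacement_speed_and_angle(prev_pts, curr_pts, frame_dt, meters_per_pixel):
    """Average pixel displacement between two consecutive frames (blocks 510/512),
    converted to speed (blocks 514/516) and a motion angle in the device frame (block 518)."""
    disp = np.mean(curr_pts - prev_pts, axis=0)                # average pixel displacement
    magnitude_px = float(np.linalg.norm(disp))
    motion_angle = np.degrees(np.arctan2(disp[1], disp[0]))    # direction of motion in image frame
    speed = magnitude_px * meters_per_pixel / frame_dt         # distance traversed / time
    # Misalignment: motion direction relative to the device's forward axis, here assumed
    # to map to the image -y direction (an illustrative assumption), wrapped to [-180, 180).
    misalignment = (motion_angle + 90.0 + 180.0) % 360.0 - 180.0
    return speed, motion_angle, misalignment
```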
- FIG. 5B illustrates a method of estimating speed of a mobile device according to some aspects of the present disclosure.
- the method converts a displacement in pixels in the image frames to a displacement of distance in meters. Then, the method determines the field of view of one or more cameras (108a and/or 108b) and the height of the one or more cameras being held above the ground. Note that due to perspective projection, the same distance can correspond to a different number of pixels in the captured image frames, depending on how far away an object may be from the perspective of the camera of the mobile device.
- a default height of the mobile device 102 being held above the ground may be estimated. In some other applications, the height may be determined from the GPS location or from other positioning estimates (such as WiFi, Bluetooth measurements) of the mobile device 102.
- in this conversion, d represents the size of the image sensor in the direction measured, and the object image may occupy the entire sensor if the object angular size as seen from the camera equals the camera angle of view Alpha, with such physical object size being d*S1/S2.
- the image displacement can be expressed as Npix pixels out of MaxPix, where MaxPix can be, for example, 720 or 1080 if images are obtained in video mode.
- the ratio Alpha/MaxPix thus includes a combination of known camera parameters; the object image displacement Npix and the travel time T can be measured, and the distance of the camera from the floor H can be either assumed or calibrated when the user speed is known, for example from GNSS or WiFi location.
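- one plausible conversion consistent with the quantities above is sketched below, assuming the camera points straight down from height H so that the ground span covered by the field of view is 2*H*tan(Alpha/2) across MaxPix pixels; this specific formula is an assumption, not a reproduction of the exact expression in the disclosure.

```python
import numpy as np

def speed_from_pixels(n_pix, max_pix, alpha_deg, height_m, dt_s):
    """Convert a pixel displacement n_pix observed over dt_s seconds into a speed estimate."""
    ground_span = 2.0 * height_m * np.tan(np.radians(alpha_deg) / 2.0)  # meters seen across the image
    meters_per_pixel = ground_span / max_pix
    return n_pix * meters_per_pixel / dt_s

# Example: 40-pixel displacement over 0.2 s, 60-degree field of view across 720 pixels,
# camera held 1.2 m above the floor.
v = speed_from_pixels(n_pix=40, max_pix=720, alpha_deg=60.0, height_m=1.2, dt_s=0.2)
```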
- FIG. 6 illustrates an exemplary method of determining position characteristics of a mobile device according to some aspects of the present disclosure.
- the control unit 120 can be configured to capture a plurality of images that represent views from the mobile device.
- the control unit 120 can be configured to adjust perspectives of the plurality of images based at least in part on an orientation of the mobile device.
- the control unit 120 can be configured to determine a misalignment angle with respect to a direction of motion of the mobile device using the plurality of images.
- the control unit 120 can be configured to store the misalignment angle and the direction of motion in a storage device.
- the methods performed in block 604 may further include methods performed in block 610.
- the control unit 120 can be configured to adjust perspectives of the plurality of images based on the orientation of the mobile device calculated using data collected from one or more sensors, compensate for perspectives of the plurality of images using an area near centers of the plurality of images, and/or compensate for perspectives of the plurality of images based at least in part on a weighted average of locations of features in the plurality of images.
- the methods performed in block 606 may further include methods performed in blocks 612 to 622.
- the control unit 120 can be configured to track features from the plurality of images, estimate direction of motion of the mobile device, estimate the orientation of the mobile device using sensor data, and determine the misalignment angle based at least in part on the direction of motion and the orientation of the mobile device.
- control unit 120 can be configured to perform at least one of: determine a confidence of the misalignment angle with respect to the direction of motion of the mobile device using information provided by a gyroscope of the mobile device, determine the confidence of the misalignment angle with respect to the direction of motion of the mobile device using information provided by a magnetometer of the mobile device, and determine the confidence of the misalignment angle with respect to the direction of motion of the mobile device using features of the plurality of images.
- control unit 120 can be configured to determine a speed estimation of the mobile device, determine a confidence of the speed estimation of the mobile device, and apply the speed estimation and the confidence of the speed estimation to navigate a user of the mobile device.
- control unit 120 can be configured to apply the misalignment angle and a confidence of the misalignment angle to navigate a user of the mobile device.
- the methods performed in block 612 may further include methods performed in block 620.
- the control unit 120 can be configured to reject outliers in features of the plurality of images to eliminate at least one moving object in the plurality of images.
- the methods performed in block 616 may further include methods performed in block 622.
- the control unit 120 can be configured to extract features from the plurality of images, compute an average displacement of the mobile device using the features from the plurality of images, and compute the speed estimate based at least in part on the average displacement of the mobile device.
- control unit 120 can be further configured to compare the features from the plurality of images, determine a separation of pixels between two consecutive images in the plurality of images, determine a time interval between the two consecutive images, and calculate the speed estimate of the mobile device in accordance with the separation of pixels and the time interval between the two consecutive images.
- control unit 120 can be further configured to calibrate a height of the mobile device according to at least one of GPS location information and WIFI location information of the mobile device.
- the functions described in FIG. 5A and FIG. 6 may be implemented by the control unit 120 of FIG. 4, potentially in combination with one or more other elements.
- the functions may be performed by processor 122, software 126, hardware 128, and firmware 130 or a combination of the above to perform various functions of the apparatus described in the present disclosure.
- the functions described in FIG. 5A and FIG. 6 may be implemented by the processor 122 in combination with one or more other elements, for example elements 108-118 and 124-134 of FIG. 4.
- FIG. 7A illustrates a method of monitoring motion, misalignment angle and confidence associated with the misalignment angle for a mobile device over a period of time according to some aspects of the present disclosure.
- a first graph 702 shows the misalignment angle (also referred to as angle in FIG. 7A) between the orientation and the direction of motion of the mobile device 102 over a period of time, which is 80 seconds in this example.
- a second graph 704 shows the corresponding confidence value associated with the misalignment angle over the same period of time.
- a third graph 706 shows whether the mobile device 102 is in motion over the same period of time.
- the confidence estimate can be an input to a navigation algorithm which can be configured to tell whether the mobile device may rely on the misalignment angle determined at a point in time. As shown above, the confidence estimate may be derived from the feature tracking algorithm. In some applications, the confidence value can also be estimated by the gyroscope and magnetometer. For example, the gyroscope and magnetometer may be configured to indicate the confidence value of the misalignment angle may be reduced when the user is turning (i.e. changing direction of motion).
- if the gyroscope value exceeds a threshold, it can be inferred that the user may be either moving the mobile device or turning; and in these cases, the confidence value may be reduced accordingly.
- in that case, the filter that computes averages of the misalignment angle may be reset. Note that the confidence metric illustrates how well features are being tracked. If the confidence value is low, the mobile device may be configured to use the angle from a previous time or from a different camera, such as switching from the front-facing camera to the back-facing camera or vice versa.
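- a simple illustrative gating rule in this spirit is sketched below; the thresholds, scaling factor, and fallback policy are assumptions made only to show the shape of the logic.

```python
def select_misalignment_angle(angle_now, confidence_now, angle_prev,
                              gyro_rate, gyro_threshold=0.5, min_confidence=0.4):
    """Reduce confidence while the gyroscope indicates turning or repositioning,
    and fall back to the previous (or other-camera) angle when confidence is low."""
    if abs(gyro_rate) > gyro_threshold:
        confidence_now *= 0.5              # turning or repositioning: trust the estimate less
    if confidence_now < min_confidence:
        return angle_prev, confidence_now  # keep previous angle (or switch cameras upstream)
    return angle_now, confidence_now
```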
- FIG. 7B illustrates a graph showing changes to an average misalignment angle of a mobile device over a period of time according to some aspects of the present disclosure.
- the average misalignment angle (also referred to as average angle in FIG. 7B) is approximately -86 degrees.
- the average misalignment angle changes sharply, which typically indicates the user may be moving the mobile device to a different position or the user may be turning.
- the average misalignment angle is approximately -60 degrees. Such information may be used to assist positioning and pedestrian navigation applications.
- the method of monitoring misalignment angle and direction of motion can be alternatively or additionally performed as follows.
- the misalignment angle computation module 132 may be utilized to determine the angular offset between the orientation of the mobile device 102 and the direction of forward motion of the mobile device 102, as given by the motion direction tracking module 133.
- the misalignment angle can be defined by the angular difference between the direction of motion M of a mobile device 102 and the direction of orientation O of the mobile device.
- the direction of motion M of the mobile device 102 can be obtained in cases in which conventional motion direction techniques fail. More particularly, the misalignment angle can have a wide range of values (e.g., from 0 to 360 degrees) depending on the direction of orientation O of the mobile device 102. Without the misalignment angle, even approximate conversion of device heading to motion direction can be challenging.
- the misalignment angle is utilized to facilitate positioning of the mobile device 102.
- a mobile device 102 can be equipped with a compass or other mechanisms to provide information indicating the heading of the mobile device 102, which can be defined as the direction at which the mobile device is oriented (e.g., in relation to magnetic north) within a given precision or tolerance amount.
- the compass heading of the mobile device 102 alone does not represent the direction in which the mobile device 102 is moved.
- misalignment angle can be utilized to convert the direction of orientation of the mobile device 102 to the direction of motion in the event that the mobile device 102 is not oriented in the direction of motion.
- the direction of motion in a compass-aided dead reckoning application can be computed as the compass heading plus the misalignment angle.
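- expressed as a one-line conversion (angles in degrees, wrapped to [0, 360)); the function name is illustrative.

```python
def motion_direction_from_heading(compass_heading_deg, misalignment_deg):
    """Dead-reckoning conversion described above: motion direction = heading + misalignment."""
    return (compass_heading_deg + misalignment_deg) % 360.0
```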
- the motion direction tracking module 133 and the misalignment angle computation module 132 can operate based on sensor data, information obtained from a step detector (not shown), etc., to determine the misalignment angle associated with movement of a mobile device 102 being carried by a pedestrian. Initially, based on data collected from accelerometer(s) and/or the step detector, pedestrian steps can be identified and the direction of gravity relative to the sensor axes of the mobile device 102 can be determined. These initial computations form a basis for the operation of the motion direction tracking module 133 and the misalignment angle computation module 132, as described below.
- the direction of motion changes within a given pedestrian step and between consecutive steps based on the biomechanics of pedestrian motion. For example, rather than proceeding in a constant forward direction, a moving pedestrian shifts left to right (e.g., left during a step with the left foot and right during a step with the right foot) with successive steps and vertically (e.g., up and down) within each step. Accordingly, transverse (lateral) acceleration associated with a series of pedestrian steps cycles between left and right with a two-step period while forward and vertical acceleration cycle with a one-step period.
- the motion direction tracking module 133 may include a step shifter, a step summation module and a step correlation module (not shown in FIG. 4).
- the motion direction tracking module 133 can leverage the above properties of pedestrian motion to isolate the forward component of motion from the vertical and transverse components.
- the motion direction tracking module 133 records acceleration information obtained from accelerometer(s) 28 (e.g., in a buffer) over consecutive steps.
- the motion direction tracking module 133 utilizes the step shifter and the step summation module to sum odd and even steps.
- the step shifter shifts acceleration data corresponding to a series of pedestrian steps in time by one step.
- the step summation module sums the original acceleration information with the shifted acceleration information.
- summing pedestrian steps after a one- step shift reduces transverse acceleration while having minimal impact on vertical or forward acceleration.
- transverse acceleration may not be symmetrical from step to step. Accordingly, while the step shifter and step summation module operate to reduce the transverse component of acceleration, these modules may not substantially eliminate the transverse acceleration. To enhance the removal of transverse acceleration, the step correlation module can further operate on the acceleration data obtained from the accelerometer(s).
- the step correlation module correlates vertical acceleration with horizontal acceleration shifted (by the step shifter) by one quarter step both forwards and backwards (e.g., +/- 90 degrees).
- the vertical/forward correlation may be comparatively strong due to the biomechanics of pedestrian motion, while the vertical/transverse correlation may be approximately zero.
- the correlations between vertical and horizontal acceleration shifted forward and backward by one quarter step are computed, and the forward shifted result may be subtracted from the backward shifted result (since the results of the two correlations are opposite in sign) to further reduce the transverse component of acceleration.
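- a sketch of this shift-sum-correlate processing follows; the sample layout (two gravity-aligned horizontal axes plus a vertical axis), the use of simple mean products as correlations, and the function name are assumptions for illustration only.

```python
import numpy as np

def forward_motion_vector(accel, step_len):
    """accel: (N, 3) array of [horiz_x, horiz_y, vertical] acceleration samples after
    gravity alignment; step_len: samples per pedestrian step. Returns a 2-vector whose
    direction in the horizontal plane approximates the forward motion axis."""
    # One-step shift and sum: transverse sway alternates left/right with a two-step
    # period, so summing each step with the following step suppresses it.
    summed = accel + np.roll(accel, step_len, axis=0)
    vert = summed[:, 2]
    quarter = step_len // 4
    horiz_fwd = np.roll(summed[:, :2], -quarter, axis=0)   # horizontal shifted forward 1/4 step
    horiz_bwd = np.roll(summed[:, :2], quarter, axis=0)    # horizontal shifted backward 1/4 step
    # Correlate vertical acceleration with each shifted horizontal signal and subtract the
    # forward-shifted result from the backward-shifted result, as described above.
    return (vert[:, None] * horiz_bwd).mean(axis=0) - (vert[:, None] * horiz_fwd).mean(axis=0)
```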
- the misalignment angle computation module 132 determines the angle between the forward component of acceleration and the orientation of the mobile device 102.
- the misalignment angle computation module 132 may include an Eigen analysis module and an angle direction inference module (not shown in FIG. 4).
- the misalignment angle computation module 132 identifies the misalignment angle via Eigen analysis, as performed by an Eigen analysis module, and further processing performed by an angle direction inference module.
- the Eigen analysis module determines the orientation of the sensor axes of the mobile device with respect to the earth, from which a line corresponding to the direction of motion of the mobile device 102 is obtained.
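- one simplified way to obtain such a line is an eigen-decomposition of the covariance of the horizontal acceleration samples; this numerical sketch is an interpretation under that assumption, not the exact computation described in the disclosure.

```python
import numpy as np

def motion_line_angle(horiz_accel):
    """Principal axis (in degrees) of 2-D horizontal acceleration samples; the
    forward/backward sense along this line is resolved separately (see below)."""
    eigvals, eigvecs = np.linalg.eigh(np.cov(horiz_accel.T))   # 2x2 covariance
    axis = eigvecs[:, np.argmax(eigvals)]                      # direction of maximum variance
    return np.degrees(np.arctan2(axis[1], axis[0]))
```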
- the angle direction inference module analyzes the obtained line, as well as forward and vertical acceleration data corresponding to the corresponding pedestrian step(s), to determine the direction of the misalignment angle based on the direction of motion of the mobile device 102 (e.g., forward or backward along the obtained line). By doing so, the angle direction inference module operates to resolve forward/backward ambiguity associated with the misalignment angle.
- the angle direction inference module leverages the motion signature of a pedestrian step to determine the direction of the misalignment angle.
- forward and vertical acceleration corresponding to a pedestrian step are related due to the mechanics of leg rotation, body movement, and other factors associated with pedestrian motion.
- the angle direction inference module can be configured to utilize knowledge of these relationships to identify whether a motion direction is forward or backward along a given line.
- leg rotation and other associated movements during a pedestrian step can be classified as angular movements, e.g., measured in terms of pitch or roll. Accordingly, a gyroscope can be used to separate gravity from acceleration due to movement such that the reference frame for computation can be rotated to account for the orientation of the mobile device 102 prior to the calculations described above.
- FIG. 6 and its corresponding description provide means for capturing a plurality of images that represent views from the mobile device, means for adjusting perspectives of the plurality of images based at least in part on an orientation of the mobile device, means for determining a misalignment angle with respect to a direction of motion of the mobile device using the plurality of images, and means for storing the misalignment angle and the direction of motion in a storage device.
- they further provide means for determining a confidence of the misalignment angle with respect to the direction of motion of the mobile device using information provided by a gyroscope of the mobile device; means for determining the confidence of the misalignment angle with respect to the direction of motion of the mobile device using information provided by a magnetometer of the mobile device; means for determining the confidence of the misalignment angle with respect to the direction of motion of the mobile device using features of the plurality of images; means for determining a speed estimation of the mobile device; means for determining a confidence of the speed estimation of the mobile device; and means for applying the speed estimation and the confidence of the speed estimation to navigate a user of the mobile device.
- the methodologies and mobile device described herein can be implemented by various means depending upon the application. For example, these methodologies can be implemented in hardware, firmware, software, or a combination thereof.
- the processing units can be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof.
- control logic encompasses logic implemented by software, hardware, firmware, or a combination.
- the methodologies can be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein.
- Any machine readable medium tangibly embodying instructions can be used in implementing the methodologies described herein.
- software codes can be stored in a memory and executed by a processing unit.
- Memory can be implemented within the processing unit or external to the processing unit.
- memory refers to any type of long term, short term, volatile, nonvolatile, or other storage devices and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored.
- the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media may take the form of an article of manufacture. Computer-readable media includes physical computer storage media and/or other non-transitory media. A storage medium may be any available medium that can be accessed by a computer.
- Such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
- Instructions and/or data may be provided as signals on transmission media included in a communication apparatus.
- A communication apparatus may include a transceiver having signals indicative of instructions and data.
- The instructions and data are configured to cause one or more processors to implement the functions outlined in the claims. That is, the communication apparatus includes transmission media with signals indicative of information to perform disclosed functions. At a first time, the transmission media included in the communication apparatus may include a first portion of the information to perform the disclosed functions, while at a second time the transmission media included in the communication apparatus may include a second portion of the information to perform the disclosed functions.
- The disclosure may be implemented in conjunction with various wireless communication networks such as a wireless wide area network (WWAN), a wireless local area network (WLAN), a wireless personal area network (WPAN), and so on.
- The terms "network" and "system" are often used interchangeably.
- The terms "position" and "location" are often used interchangeably.
- A WWAN may be a Code Division Multiple Access (CDMA) network, a Time Division Multiple Access (TDMA) network, a Frequency Division Multiple Access (FDMA) network, an Orthogonal Frequency Division Multiple Access (OFDMA) network, a Single-Carrier Frequency Division Multiple Access (SC-FDMA) network, a Long Term Evolution (LTE) network, a WiMAX (IEEE 802.16) network, and so on.
- A CDMA network may implement one or more radio access technologies (RATs) such as cdma2000, Wideband-CDMA (W-CDMA), and so on.
- Cdma2000 includes IS-95, IS-2000, and IS-856 standards.
- A TDMA network may implement Global System for Mobile Communications (GSM), Digital Advanced Mobile Phone System (D-AMPS), or some other RAT.
- GSM and W-CDMA are described in documents from a consortium named "3rd Generation Partnership Project" (3GPP).
- Cdma2000 is described in documents from a consortium named "3rd Generation Partnership Project 2" (3GPP2).
- 3GPP and 3GPP2 documents are publicly available.
- A WLAN may be an IEEE 802.11x network.
- A WPAN may be a Bluetooth network, an IEEE 802.15x network, or some other type of network.
- The techniques may also be implemented in conjunction with any combination of WWAN, WLAN, and/or WPAN.
- A mobile station refers to a device such as a cellular or other wireless communication device, personal communication system (PCS) device, personal navigation device (PND), Personal Information Manager (PIM), Personal Digital Assistant (PDA), laptop, or other suitable mobile device which is capable of receiving wireless communication and/or navigation signals.
- The term "mobile station" is also intended to include devices which communicate with a personal navigation device (PND), such as by short-range wireless, infrared, wireline connection, or other connection, regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device or at the PND.
- "Mobile station" is also intended to include all devices, including wireless communication devices, computers, laptops, etc., which are capable of communication with a server, such as via the Internet, Wi-Fi, or other network, and regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device, at a server, or at another device associated with the network. Any operable combination of the above is also considered a "mobile station."
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Navigation (AREA)
- Telephone Function (AREA)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP14703478.9A EP2956744A1 (en) | 2013-02-14 | 2014-01-15 | Camera aided motion direction and speed estimation |
JP2015558011A JP2016514251A (en) | 2013-02-14 | 2014-01-15 | Motion direction and speed estimation with camera assistance |
CN201480008543.6A CN104981680A (en) | 2013-02-14 | 2014-01-15 | Camera Aided Motion Direction And Speed Estimation |
KR1020157024499A KR20150120408A (en) | 2013-02-14 | 2014-01-15 | Camera aided motion direction and speed estimation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/767,755 US9330471B2 (en) | 2013-02-14 | 2013-02-14 | Camera aided motion direction and speed estimation |
US13/767,755 | 2013-02-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014126671A1 (en) | 2014-08-21 |
Family
ID=50070688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2014/011687 WO2014126671A1 (en) | 2013-02-14 | 2014-01-15 | Camera aided motion direction and speed estimation |
Country Status (6)
Country | Link |
---|---|
US (1) | US9330471B2 (en) |
EP (1) | EP2956744A1 (en) |
JP (1) | JP2016514251A (en) |
KR (1) | KR20150120408A (en) |
CN (1) | CN104981680A (en) |
WO (1) | WO2014126671A1 (en) |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8422777B2 (en) * | 2008-10-14 | 2013-04-16 | Joshua Victor Aller | Target and method of detecting, identifying, and determining 3-D pose of the target |
JP5362878B2 (en) * | 2012-05-09 | 2013-12-11 | 株式会社日立国際電気 | Image processing apparatus and image processing method |
RU2638012C2 (en) * | 2012-08-06 | 2017-12-08 | Конинклейке Филипс Н.В. | Reduce of image noise and/or improve of image resolution |
KR102146641B1 (en) * | 2013-04-08 | 2020-08-21 | 스냅 아이엔씨 | Distance estimation using multi-camera device |
US10132635B2 (en) * | 2013-09-17 | 2018-11-20 | Invensense, Inc. | Method and apparatus for misalignment between device and pedestrian using vision |
US10302669B2 (en) * | 2013-11-01 | 2019-05-28 | Invensense, Inc. | Method and apparatus for speed or velocity estimation using optical sensor |
US10670402B2 (en) * | 2013-11-01 | 2020-06-02 | Invensense, Inc. | Systems and methods for optical sensor navigation |
DE102014004071A1 (en) * | 2014-03-20 | 2015-09-24 | Unify Gmbh & Co. Kg | Method, device and system for controlling a conference |
US10735902B1 (en) * | 2014-04-09 | 2020-08-04 | Accuware, Inc. | Method and computer program for taking action based on determined movement path of mobile devices |
EP3134850B1 (en) | 2014-04-22 | 2023-06-14 | Snap-Aid Patents Ltd. | Method for controlling a camera based on processing an image captured by other camera |
DE102015205738A1 (en) * | 2015-03-30 | 2016-10-06 | Carl Zeiss Industrielle Messtechnik Gmbh | Motion measuring system of a machine and method for operating the motion measuring system |
EP3289430B1 (en) | 2015-04-27 | 2019-10-23 | Snap-Aid Patents Ltd. | Estimating and using relative head pose and camera field-of-view |
US9613273B2 (en) * | 2015-05-19 | 2017-04-04 | Toyota Motor Engineering & Manufacturing North America, Inc. | Apparatus and method for object tracking |
US10184797B2 (en) * | 2015-12-18 | 2019-01-22 | Invensense, Inc. | Apparatus and methods for ultrasonic sensor navigation |
WO2017149526A2 (en) | 2016-03-04 | 2017-09-08 | May Patents Ltd. | A method and apparatus for cooperative usage of multiple distance meters |
US10129691B2 (en) * | 2016-10-14 | 2018-11-13 | OneMarket Network LLC | Systems and methods to determine a location of a mobile device |
US10534964B2 (en) * | 2017-01-30 | 2020-01-14 | Blackberry Limited | Persistent feature descriptors for video |
US10663298B2 (en) * | 2017-06-25 | 2020-05-26 | Invensense, Inc. | Method and apparatus for characterizing platform motion |
KR102434574B1 (en) * | 2017-08-14 | 2022-08-22 | 삼성전자주식회사 | Method and apparatus for recognizing a subject existed in an image based on temporal movement or spatial movement of a feature point of the image |
US11175148B2 (en) * | 2017-09-28 | 2021-11-16 | Baidu Usa Llc | Systems and methods to accommodate state transitions in mapping |
CN109949412B (en) * | 2019-03-26 | 2021-03-02 | 腾讯科技(深圳)有限公司 | Three-dimensional object reconstruction method and device |
US11747142B2 (en) | 2019-04-30 | 2023-09-05 | Stmicroelectronics, Inc. | Inertial navigation system capable of dead reckoning in vehicles |
US11199410B2 (en) | 2019-04-30 | 2021-12-14 | Stmicroelectronics, Inc. | Dead reckoning by determining misalignment angle between movement direction and sensor heading direction |
US11249197B2 (en) | 2019-05-03 | 2022-02-15 | Apple Inc. | Image-based techniques for stabilizing positioning estimates |
CN113465609B (en) * | 2020-03-30 | 2024-08-09 | 浙江菜鸟供应链管理有限公司 | Time sequence matching method and device for target object |
US20220392051A1 (en) * | 2021-06-08 | 2022-12-08 | Samsung Electronics Co., Ltd. | Method and apparatus with image analysis |
US11792505B2 (en) * | 2021-07-06 | 2023-10-17 | Qualcomm Incorporated | Enhanced object detection |
CN116328278B (en) * | 2023-03-06 | 2024-08-02 | 浙江大沩人工智能科技有限公司 | Real-time speed measuring method, system, device and medium for multi-user running |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1031694C (en) * | 1994-08-27 | 1996-05-01 | 冶金工业部建设研究总院 | Smelting special flux for reinforcing bar pressure slag welding |
US6985240B2 (en) * | 2002-12-23 | 2006-01-10 | International Business Machines Corporation | Method and apparatus for retrieving information about an object of interest to an observer |
US7408986B2 (en) * | 2003-06-13 | 2008-08-05 | Microsoft Corporation | Increasing motion smoothness using frame interpolation with motion analysis |
JP4328173B2 (en) * | 2003-10-08 | 2009-09-09 | クラリオン株式会社 | Car navigation system |
EP1767900A4 (en) * | 2004-07-15 | 2010-01-20 | Amosense Co Ltd | Mobile terminal device |
US7519470B2 (en) * | 2006-03-15 | 2009-04-14 | Microsoft Corporation | Location-based caching for mobile devices |
US7895013B2 (en) * | 2008-03-18 | 2011-02-22 | Research In Motion Limited | Estimation of the speed of a mobile device |
US8098894B2 (en) * | 2008-06-20 | 2012-01-17 | Yahoo! Inc. | Mobile imaging device as navigator |
WO2010001968A1 (en) * | 2008-07-02 | 2010-01-07 | 独立行政法人産業技術総合研究所 | Moving body positioning device |
US20100053151A1 (en) * | 2008-09-02 | 2010-03-04 | Samsung Electronics Co., Ltd | In-line mediation for manipulating three-dimensional content on a display device |
KR101500741B1 (en) * | 2008-09-12 | 2015-03-09 | 옵티스 셀룰러 테크놀로지, 엘엘씨 | Mobile terminal having a camera and method for photographing picture thereof |
WO2011053374A1 (en) * | 2009-10-30 | 2011-05-05 | Zoran Corporation | Method and apparatus for image detection with undesired object removal |
US8687070B2 (en) * | 2009-12-22 | 2014-04-01 | Apple Inc. | Image capture device having tilt and/or perspective correction |
US20110158473A1 (en) | 2009-12-29 | 2011-06-30 | Tsung-Ting Sun | Detecting method for detecting motion direction of portable electronic device |
US9160980B2 (en) | 2011-01-11 | 2015-10-13 | Qualcomm Incorporated | Camera-based inertial sensor alignment for PND |
CA2769788C (en) | 2011-03-23 | 2019-08-13 | Trusted Positioning Inc. | Methods of attitude and misalignment estimation for constraint free portable navigation |
US20130006953A1 (en) * | 2011-06-29 | 2013-01-03 | Microsoft Corporation | Spatially organized image collections on mobile devices |
US8810649B2 (en) * | 2011-06-30 | 2014-08-19 | Qualcomm Incorporated | Navigation in buildings with rectangular floor plan |
US8194926B1 (en) | 2011-10-05 | 2012-06-05 | Google Inc. | Motion estimation for mobile device user interaction |
US10008002B2 (en) * | 2012-02-28 | 2018-06-26 | NXP Canada, Inc. | Single-camera distance estimation |
- 2013
- 2013-02-14 US US13/767,755 patent/US9330471B2/en active Active
- 2014
- 2014-01-15 WO PCT/US2014/011687 patent/WO2014126671A1/en active Application Filing
- 2014-01-15 JP JP2015558011A patent/JP2016514251A/en not_active Ceased
- 2014-01-15 CN CN201480008543.6A patent/CN104981680A/en active Pending
- 2014-01-15 KR KR1020157024499A patent/KR20150120408A/en not_active Application Discontinuation
- 2014-01-15 EP EP14703478.9A patent/EP2956744A1/en not_active Withdrawn
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050113995A1 (en) * | 2002-04-10 | 2005-05-26 | Oyaide Andrew O. | Cameras |
JP4273074B2 (en) * | 2002-07-12 | 2009-06-03 | 株式会社岩根研究所 | Planar development image processing method of plane object video such as road surface, reverse development image conversion processing method, plane development image processing device thereof, and reverse development image conversion processing device |
US20050234679A1 (en) * | 2004-02-13 | 2005-10-20 | Evolution Robotics, Inc. | Sequential selective integration of sensor data |
US20090024353A1 (en) * | 2007-07-19 | 2009-01-22 | Samsung Electronics Co., Ltd. | Method of measuring pose of mobile robot and method and apparatus for measuring position of mobile robot using the same |
US20120136573A1 (en) * | 2010-11-25 | 2012-05-31 | Texas Instruments Incorporated | Attitude estimation for pedestrian navigation using low cost mems accelerometer in mobile applications, and processing methods, apparatus and systems |
US20120296603A1 (en) * | 2011-05-16 | 2012-11-22 | Qualcomm Incorporated | Sensor orientation measurement with respect to pedestrian motion direction |
Also Published As
Publication number | Publication date |
---|---|
EP2956744A1 (en) | 2015-12-23 |
US9330471B2 (en) | 2016-05-03 |
KR20150120408A (en) | 2015-10-27 |
US20140226864A1 (en) | 2014-08-14 |
JP2016514251A (en) | 2016-05-19 |
CN104981680A (en) | 2015-10-14 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
US9330471B2 (en) | Camera aided motion direction and speed estimation | |
US9177404B2 (en) | Systems and methods of merging multiple maps for computer vision based tracking | |
US10088294B2 (en) | Camera pose estimation device and control method | |
US9400941B2 (en) | Method of matching image features with reference features | |
US9087403B2 (en) | Maintaining continuity of augmentations | |
EP1072014B1 (en) | Face recognition from video images | |
JP2017526082A (en) | Non-transitory computer-readable medium encoded with computer program code for causing a motion estimation method, a moving body, and a processor to execute the motion estimation method | |
US9679384B2 (en) | Method of detecting and describing features from an intensity image | |
WO2012086821A1 (en) | Positioning apparatus and positioning method | |
US10607350B2 (en) | Method of detecting and describing features from an intensity image | |
JP6116765B1 (en) | Object detection apparatus and object detection method | |
KR20150082417A (en) | Method for initializing and solving the local geometry or surface normals of surfels using images in a parallelizable architecture | |
CN116468786A (en) | Semantic SLAM method based on point-line combination and oriented to dynamic environment | |
US20160066150A1 (en) | Dynamic Configuration of a Positioning System | |
JP5973767B2 (en) | Corresponding point search device, program thereof, and camera parameter estimation device | |
WO2021167910A1 (en) | A method for generating a dataset, a method for generating a neural network, and a method for constructing a model of a scene | |
KR20220062709A (en) | System for detecting disaster situation by clustering of spatial information based an image of a mobile device and method therefor | |
WO2023130842A1 (en) | Camera pose determining method and apparatus | |
EP1580684A1 (en) | Face recognition from video images | |
KR20210028538A (en) | Accelerated Vision-based Pose Estimation utilizing IMU sensor data for Inside-out Tracking in Virtual Reality |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14703478; Country of ref document: EP; Kind code of ref document: A1 |
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | |
WWE | Wipo information: entry into national phase | Ref document number: 2014703478; Country of ref document: EP |
ENP | Entry into the national phase | Ref document number: 2015558011; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |
ENP | Entry into the national phase | Ref document number: 20157024499; Country of ref document: KR; Kind code of ref document: A |