WO2011145239A1 - Position estimation device, position estimation method, and program - Google Patents
Position estimation device, position estimation method, and program
- Publication number: WO2011145239A1 (application PCT/JP2011/000749)
- Authority: WIPO (PCT)
- Prior art keywords: matching, feature amount, feature, continuous, feature quantity
- Prior art date: 2010-05-19
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
      - G06V20/00—Scenes; Scene-specific elements
        - G06V20/10—Terrestrial scenes
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/70—Determining position or orientation of objects or cameras
          - G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Definitions
- The present invention relates to a position estimation apparatus, a position estimation method, and a program suitable for use in a robot apparatus, and more particularly to a position estimation apparatus, a position estimation method, and a program that perform position estimation using local features.
- For example, the position detection device described in Patent Document 1 includes luminance image acquisition means for acquiring a luminance image of the forward field of view of a moving body; distance image acquisition means for acquiring, over the same field of view and at the same time as the luminance image, a distance image; feature point extraction means for extracting feature points from at least two consecutive luminance images; and reference feature point selection means for selecting, based on the distance image, reference feature points for calculating the displacement of the extracted feature points between two frames, the device calculating its own position from that displacement.
- When an image is taken, the place currently being photographed may be a place the robot has visited before, or a place entirely unknown to it, and a completely unknown place may be wrongly associated with some registered place. The ability to discriminate whether the current position is a place registered in the database or a new place is therefore very important: if the photographed place can be recognized as new, the database can be expanded, that is, the map can be learned. Development of such a position estimation device suitable for mounting on a mobile body, in particular a robot apparatus, is desired. However, since learning increases memory consumption and a robot has only limited memory resources, the growth in memory capacity must be suppressed; recognizing the position in real time also requires improving the calculation speed.
- The present invention has been made to solve these problems, and its object is to provide a position estimation device, a position estimation method, and a program capable of recognizing whether the current position is an already registered place or an unregistered place.
- A position estimation apparatus according to the present invention includes: feature extraction means for extracting invariant features from an input image; matching means for obtaining a match between the input image and registered places by referring to a database in which each registered place and its invariant features are stored in association with each other; similarity calculation means for calculating, when the match is at or above a predetermined threshold, a similarity that includes registered places in the vicinity of the selected registered place; and position recognition means for recognizing that the input image shows a registered place when the similarity is at or above a predetermined threshold.
- The feature extraction means includes: local feature extraction means for extracting local features from each of the input images, which consist of continuously captured images; feature matching means for matching the extracted local features between successive input images; continuous feature selection means for selecting, as continuous features, the features matched between successive images by the feature matching means; and invariant feature calculation means for obtaining invariant features based on the continuous features. The continuous feature selection means makes the number of consecutive images variable according to the number of matched features.
- That is, the current position is identified by extracting invariant features from an input image consisting of continuously captured images and matching against them. Because the number of consecutive images is variable according to the number of matched features, the number of invariant features can be changed arbitrarily and set to an appropriate value in view of the application, the calculation speed, and the like.
- The matching means may have a common dictionary in which each feature is recorded in association with an index, and may perform the matching by converting the local features of each input image into indices with reference to the common dictionary. Since the features of all places are managed in the one common dictionary via indices, the memory capacity can be greatly reduced. The matching means may calculate a matching score as the product of the number of matches with the features registered in the common dictionary and the number of matches with the features contained in the image being matched; since the matching score is obtained by this simple product, the calculation speed can be improved.
- The similarity calculation means may calculate a first estimated value by weighting the matching scores of the selected registered place and its neighboring registered places, and the recognition means may recognize a registered place using the first estimated value as the similarity; estimating the place with the nearby registered places taken into account, rather than by matching alone, improves the estimation rate. The similarity calculation means may further calculate a second estimated value by normalizing the first estimated value, and the recognition means may recognize a registered place using the second estimated value as the similarity; the normalization further suppresses false recognition and improves the recognition rate.
- The local features may be SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features) features, or any other local features that are robust to scale change, rotation, noise, and the like.
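- For illustration, a minimal sketch of this local-feature extraction step, assuming OpenCV (cv2) as the feature library; the function name and the choice of SIFT are ours, since the text prescribes only the class of descriptor, not an implementation:

```python
import cv2

def extract_local_features(image_path: str):
    """Return (keypoints, descriptors) for one image using SIFT.
    SURF would work the same way but lives in opencv-contrib."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    return sift.detectAndCompute(img, None)
```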
- A position estimation method according to the present invention includes a feature extraction step of extracting invariant features from an input image, and a matching step of obtaining a match between the input image and registered places by referring to a database in which each registered place and its invariant features are stored in association with each other, together with the similarity calculation and position recognition steps described above. The feature extraction step includes a local feature extraction step of extracting local features from each of the input images, which consist of continuously captured images, followed by steps of matching the extracted local features between successive input images, selecting the matched features as continuous features, and calculating the invariant features from them.
- A program according to the present invention causes a computer to execute the position estimation processing described above. The present invention thus provides a position estimation device, a position estimation method, and a program capable of recognizing whether the current position is a registered place or an unregistered place. In the following embodiment, the present invention is applied to a position estimation apparatus mounted on a mobile robot apparatus or the like.
- FIG. 1 is a block diagram showing a position estimation apparatus according to an embodiment of the present invention.
- The position estimation apparatus 10 includes a feature extraction unit 11 that extracts invariant features from an input image consisting of continuously captured images, a common dictionary 12, a matching unit 13, a similarity calculation unit 14, and a position recognition unit 15. The feature extraction unit 11 in turn includes a local feature extraction unit 21, a feature matching unit 22, a continuous feature selection unit 23, and an invariant feature extraction unit 24.
- The local feature extraction unit 21 extracts local features from each input image. The feature matching unit 22 matches the local features extracted by the local feature extraction unit 21 between successive input images. The continuous feature selection unit 23 selects, as continuous features, the features that the feature matching unit has matched between successive images. The invariant feature extraction unit 24 obtains invariant features based on the continuous features. The continuous feature selection unit 23 varies the number of consecutive images according to the number of matched features.
- The matching unit 13 refers to a database in which each registered place is stored in association with its invariant features, and obtains a match between the input image and the registered places. The similarity calculation unit 14 calculates a similarity that also includes registered places in the vicinity of the selected registered place. The position recognition unit 15 determines that the input image shows a registered place when the similarity is at or above a predetermined threshold.
- The invariant features extracted by the invariant feature extraction unit 24 are referred to as PIRF (Position-Invariant Robust Features). The feature extraction unit 11 extracts PIRFs as (local) features that are not easily affected by changes in shooting position.
- The inventor of the present application observed that a change in shooting position or in the time of day produces a large change in the appearance (and hence the features) of nearby objects, whereas for distant objects the change is small (the features of a landmark change little), and devised the PIRF extraction method on this basis. To extract the PIRFs, the feature extraction unit 11 simply performs local feature matching between consecutive images, selects the features that are matched continuously, and, among the local features of those selected matches, extracts and describes the local feature of the current image as the PIRF. The number of consecutive images used here is called the window size: enlarging the window decreases the number of local features that can be matched across it, and shrinking it increases that number. In the present embodiment the window size is made variable so that a desired number of PIRFs is obtained.
- FIG. 2 is a diagram explaining the relationship between the window size and the PIRFs. Suppose the current image L_t has local features K, B, C, J, .... If the local features of the immediately preceding image L_{t-1} are B, K, C, I, ..., the two images can be matched on B, K, and C. If the local features of the image before that, L_{t-2}, are A, B, D, C, ..., then matching between L_t and L_{t-2} likewise succeeds, this time only on B and C. If the window is widened to cover L_{t-1}, L_{t-2}, and L_{t-3}, only B remains matchable, and the PIRF is B alone. Increasing the window size thus decreases the number of PIRFs that can be matched across all the images, and decreasing it increases that number. Here, if the number of PIRFs falls to 0, the window size is reduced; conversely, if the number of PIRFs exceeds a predetermined maximum, the window size may be increased to reduce it. Local features matched between images are added to a connection list, which saves matching them again when the next image is processed. The PIRF descriptor could be the average of the local features over the images, but in the present embodiment the local feature of the current image is adopted as the PIRF, so that the PIRF is a local feature matching the current image; depending on the application, the average of all the local features may be used instead.
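- A minimal sketch of this selection step, assuming brute-force L2 nearest-neighbour matching with a fixed threshold (the patent leaves the matcher unspecified; all names here are ours):

```python
import numpy as np

def best_match(desc, candidates, thresh=0.7):
    """Index of the nearest candidate descriptor, or None when the L2
    distance exceeds thresh (a stand-in for the patent's matching score)."""
    dists = np.linalg.norm(candidates - desc, axis=1)
    j = int(np.argmin(dists))
    return j if dists[j] < thresh else None

def extract_pirf(window_descs):
    """window_descs: one (N_i x D) descriptor array per consecutive image,
    oldest first; the last array belongs to the current image L_t. A
    descriptor of L_t is kept as a PIRF when it chain-matches through
    every older image in the window."""
    current = window_descs[-1]
    # chains maps a row of L_t to its matched row in the image reached so far
    chains = {k: k for k in range(len(current))}
    newer = current
    for older in reversed(window_descs[:-1]):
        surviving = {}
        for start, row in chains.items():
            m = best_match(newer[row], older)
            if m is not None:
                surviving[start] = m
        chains, newer = surviving, older
    # per the embodiment, keep the current image's own descriptors
    return current[sorted(chains)]
```

Keeping only the chains that survive the whole window is what makes the selected descriptors stable against changes in shooting position.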
- FIG. 3 is a flowchart showing a position estimation method according to the present embodiment.
- First, the feature extraction unit 11 extracts the invariant features (PIRFs) at the current position L_t (step S1). The local feature extraction unit 21 receives continuously captured images as the input images. The consecutive images required by PIRF are an image set captured from video at a fixed rate, for example two frames per second; images captured from video are continuous in this sense, and the consecutive images used for PIRF must come from video. The image acquisition rate is set according to the speed of the camera: for example, for a camera mounted in a car travelling at roughly 1000 m per minute, approximately 50 to 100 frames per second are captured from the video.
- The local feature extraction unit 21 extracts local features using an existing local feature extraction method, for example SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features). The present invention is not limited to SIFT and SURF; other local features can be used. Because an existing extractor is reused, its performance is inherited as-is, and the extracted features can be described so as to be robust against changes in illumination. SIFT extracts 2000 to 3000 or more local features per image; SURF extracts 200 to 300, so its computational cost is smaller, and with SURF about 100 PIRFs are extracted at one place.
- Next, the feature matching unit 22 matches local features between successive images, using the image acquired at the current position and the image acquired immediately before; for example, two local features are regarded as matched if their matching score is at or above a predetermined threshold. The continuous feature selection unit 23 then determines the window size, chosen here so that the number of invariant features (PIRFs) is about 100. Finally, the invariant feature extraction unit 24 extracts, as the invariant feature PIRF, the local feature of the current position rather than the average of the local features over the successive images.
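- The window-size adaptation described above might look like the following sketch; the target of about 100 PIRFs comes from the text, while the step size and bounds are our assumptions:

```python
def adapt_window_size(window_size: int, num_pirf: int, target: int = 100,
                      min_size: int = 2, max_size: int = 10) -> int:
    """Shrink the window when no PIRF survives; grow it when too many do."""
    if num_pirf == 0 and window_size > min_size:
        return window_size - 1   # fewer images to match across -> more survivors
    if num_pirf > target and window_size < max_size:
        return window_size + 1   # more images to match across -> fewer survivors
    return window_size
```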
- FIG. 4 is a diagram for explaining a method of matching with the common dictionary 12.
- Suppose the PIRFs at the current position L_t are A, M, R, C, and Q, where each letter denotes one PIRF. In the common dictionary 12, each PIRF is stored in association with an index: index 1 holds L, index 2 holds M, and so on. The matching unit 13 checks each PIRF at the current position L_t for a match in the common dictionary 12 and, where a match exists, replaces the PIRF with its index; where there is no match, the index is set to, for example, 0. The matching score s_m between a model (registered place) m and the current position L_t is then

s_m = n_m × num_appear ... (1)

where num_appear is the number of PIRFs matched in the common dictionary 12 and n_m is the number of PIRFs matched with the model m. In the example of FIG. 4, the three PIRFs M, R, and Q match, so num_appear = 3; if n_m = 2, then s_m = 2 × 3 = 6.
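- In code, equation (1) reduces to two counts and a product; a sketch under the convention above that index 0 means "no dictionary match":

```python
def matching_score(query_indices, model_indices) -> int:
    """Equation (1): s_m = n_m * num_appear."""
    num_appear = sum(1 for idx in query_indices if idx != 0)  # dictionary hits
    n_m = sum(1 for idx in query_indices if idx != 0 and idx in model_indices)
    return n_m * num_appear

# matching_score([0, 2, 5, 9, 0], {2, 5}) == 2 * 3 == 6, as in the worked example
```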
- Next, the similarity calculation unit 14 obtains a second state score (first estimated value) b_m that takes the adjacent positions into account (step S4). The matching scores of the positions adjacent to m are expected to be roughly the same as, or slightly lower than, s_m; if, for example, s_m is high but s_{m-1} or s_{m+1} is 0, then the value of s_m is suspect and the position cannot be estimated from it. The second state score b_m is the sum of the matching scores around m weighted by a Gaussian function p_t(m, i), as in equation (2):

b_m = Σ_{i=m−w}^{m+w} p_t(m, i) · s_i ... (2)

Here w is the number of adjacent positions taken into account; if the frame rate is constant, w can be set according to the moving speed, for example w = 1 when the speed is fast and w = 2 when it is slow.
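- A sketch of the weighted sum in equation (2), with the Gaussian spread sigma as an assumed free parameter:

```python
import math

def second_state_score(scores, m: int, w: int, sigma: float = 1.0) -> float:
    """b_m: matching scores of the 2w+1 places around m, weighted by a
    Gaussian in the index distance (the role of p_t(m, i))."""
    lo, hi = max(0, m - w), min(len(scores) - 1, m + w)
    return sum(math.exp(-((i - m) ** 2) / (2.0 * sigma ** 2)) * scores[i]
               for i in range(lo, hi + 1))
```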
- The recognition rate is further improved by normalizing the second state score b_m. The normalized score (second estimated value) b_norm_m is obtained by equation (3) (step S5), dividing b_m by n:

b_norm_m = b_m / n ... (3)

where n is a value corresponding to the moving speed of the position estimation device and can be set to the maximum number of features obtained by the PIRF extraction. The similarity calculation unit 14 obtains the normalized score b_norm_m, and if the value is larger than a predetermined threshold, the position recognition unit 15 recognizes the current position as the model m, that is, as a known place (steps S6, S7). For example, when the model (place) m matches the current position, the PIRFs of place m are updated by adding to place m those PIRFs that were not originally included in it.
- Even so, the memory for the PIRFs of a place m does not grow if, for example, a first-in first-out (FIFO) policy is adopted. If, on the other hand, the normalized score does not exceed the threshold, the position recognition unit 15 recognizes the current position as a new place (step S8) and registers the PIRFs extracted at the current position in the common dictionary 12. The present embodiment uses the single common dictionary 12: rather than keeping one dictionary per place, all places share one dictionary, which greatly reduces the memory capacity, and applying FIFO to the common dictionary 12 as well suppresses its growth.
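- A sketch of such a FIFO-bounded common dictionary; the capacity and data structure are our assumptions, since the patent specifies only the index association and first-in first-out eviction:

```python
from collections import OrderedDict

class CommonDictionary:
    """Shared index -> PIRF-descriptor store with a FIFO size bound.
    Index 0 is reserved for 'no match'."""
    def __init__(self, capacity: int = 100_000):
        self.capacity = capacity
        self.entries = OrderedDict()   # index -> descriptor, insertion-ordered
        self.next_index = 1

    def register(self, descriptor) -> int:
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict the oldest entry (FIFO)
        idx = self.next_index
        self.entries[idx] = descriptor
        self.next_index += 1
        return idx
```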
- 1. City Center Dataset
The City Center Dataset was collected by Cummins and Newman (M. Cummins and P. Newman, "Highly Scalable Appearance-Only SLAM - FAB-MAP 2.0," Proc. Robotics: Science and Systems (RSS), 2009). It consists of 1237 locations and 2474 images (one left and one right every 1.5 meters) taken with a stereo camera. Table 1 below shows the recognition rates, and FIG. 5 graphs the recognition results; the recognition rate of this embodiment is far higher than that of FAB-MAP. In Table 1, Recall is the rate at which the system gives an answer (the rate of recognition as a known place), Precision is the rate of correct answers, and Total Time is the time required for recognition; all of the results are striking. Comparative Example 1 is FAB-MAP (Cummins and Newman, cited above); Comparative Example 2 is the fast incremental bag-of-words method (A. Angeli, D. Filliat, S. Doncieux, and J.-A. Meyer, "Fast and Incremental Method for Loop-Closure Detection Using Bags of Visual Words," IEEE Trans. Robotics, 24(5), pp. 1027-1037, 2008).
- 2. Lip6Indoor Dataset
The next example uses an indoor dataset of 318 images collected once per second. Table 2 shows the recognition results: this embodiment misrecognizes a single image but still outperforms the comparative methods.
- 3. The final example uses data taken in the space with the most difficult movement: 692 images captured with an omnidirectional camera at 692 locations, at a size of 270 × 480 and a rate of 2 frames per second. Table 3 shows the recognition rates, and FIG. 6 graphs the recognition results. For Comparative Example 2, the inventor sent the dataset to the authors in France and received the experimental results back.
- As described above, the present invention enables self-localization from images and online updating of the dictionary, so it can be combined with, for example, the video capture function of a mobile phone. A server can then analyze the transmitted images and return the current location together with the facilities and shops in the vicinity. The search video sent from a user also serves as data for updating the dictionary and the map at the same time, so both can be kept permanently up to date; in conventional car navigation systems, by contrast, updating the map data is essentially impossible, or takes considerable time and money. Furthermore, since the mobile phone network already has base stations that divide and manage the service areas, each base station need only hold and update the map of its own area; no huge dictionary is required, and memory and computation can be greatly economized. Wearable vision such as camera-equipped glasses is likely to appear in the future, and such glasses could identify their position at all times and present useful information.
- The program can be stored and supplied to a computer using various types of non-transitory computer-readable media, which include various types of tangible storage media: magnetic recording media (e.g., flexible disks, magnetic tapes, hard disk drives), magneto-optical recording media (e.g., magneto-optical disks), CD-ROM (Read Only Memory), CD-R, CD-R/W, and semiconductor memories (e.g., mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, RAM (Random Access Memory)). The program may also be supplied to the computer by various types of transitory computer-readable media, examples of which include electric signals, optical signals, and electromagnetic waves; a transitory computer-readable medium can supply the program to the computer via a wired communication path such as an electric wire or optical fiber, or via a wireless communication path.
- the present invention can be suitably applied to a position estimation apparatus, a position estimation method, and a program that can be used in a robot apparatus or the like.
Description
11 feature extraction unit
12 common dictionary
13 matching unit
14 similarity calculation unit
15 position recognition unit
21 local feature extraction unit
22 feature matching unit
23 continuous feature selection unit
24 invariant feature extraction unit
Claims (13)
- 1. A position estimation apparatus comprising:
feature extraction means for extracting invariant features from an input image;
matching means for obtaining a match between the input image and registered places by referring to a database in which each registered place and its invariant features are stored in association with each other;
similarity calculation means for calculating, when the match is at or above a predetermined threshold, a similarity that includes registered places in the vicinity of the selected registered place; and
position recognition means for recognizing that the input image is a registered place when the similarity is at or above a predetermined threshold,
wherein the feature extraction means comprises:
local feature extraction means for extracting a local feature from each of the input images, the input images consisting of continuously captured images;
feature matching means for matching the local features extracted by the local feature extraction means between successive input images;
continuous feature selection means for selecting, as continuous features, the features matched between successive images by the feature matching means; and
invariant feature calculation means for obtaining invariant features based on the continuous features,
and wherein the continuous feature selection means makes the number of the consecutive images variable according to the number of the matched features.
- 2. The position estimation apparatus according to claim 1, wherein the matching means has a common dictionary in which each feature is recorded in association with an index, and performs the matching by converting the local features of each input image into indices with reference to the common dictionary.
- 3. The position estimation apparatus according to claim 2, wherein the matching means calculates a matching score as the product of the number of matches with the features registered in the common dictionary and the number of matches with the features contained in the image being matched.
- 4. The position estimation apparatus according to any one of claims 1 to 3, wherein the similarity calculation means calculates a first estimated value by weighting the matching scores of the selected registered place and the nearby registered places, and the recognition means recognizes a registered place using the first estimated value as the similarity.
- 5. The position estimation apparatus according to any one of claims 1 to 3, wherein the similarity calculation means calculates a second estimated value by normalizing the first estimated value, and the recognition means recognizes a registered place using the second estimated value as the similarity.
- 6. The position estimation apparatus according to any one of claims 1 to 3, wherein the local features are SIFT (Scale-Invariant Feature Transform) and/or SURF (Speeded-Up Robust Features) features.
- 7. A position estimation method comprising:
a feature extraction step of extracting invariant features from an input image;
a matching step of obtaining a match between the input image and registered places by referring to a database in which each registered place and its invariant features are stored in association with each other;
a similarity calculation step of calculating, when the match is at or above a predetermined threshold, a similarity that includes registered places in the vicinity of the selected registered place; and
a position recognition step of recognizing that the input image is a registered place when the similarity is at or above a predetermined threshold,
wherein the feature extraction step comprises:
a local feature extraction step of extracting a local feature from each of the input images, the input images consisting of continuously captured images;
a feature matching step of matching the local features extracted in the local feature extraction step between successive input images;
a continuous feature selection step of selecting, as continuous features, the features matched between successive images in the feature matching step; and
an invariant feature calculation step of obtaining invariant features based on the continuous features,
and wherein, in the continuous feature selection step, the number of the consecutive images is made variable according to the number of the matched features.
- 8. The position estimation method according to claim 7, wherein the matching step uses a common dictionary in which each feature is recorded in association with an index, and performs the matching by converting the local features of each input image into indices with reference to the common dictionary.
- 9. The position estimation method according to claim 8, wherein, in the matching step, a matching score is calculated as the product of the number of matches with the features registered in the common dictionary and the number of matches with the features contained in the image being matched.
- 10. The position estimation method according to any one of claims 7 to 9, wherein, in the similarity calculation step, a first estimated value is calculated by weighting the matching scores of the selected registered place and the nearby registered places, and, in the recognition step, a registered place is recognized using the first estimated value as the similarity.
- 11. The position estimation method according to any one of claims 7 to 9, wherein, in the similarity calculation step, a second estimated value is calculated by normalizing the first estimated value, and, in the recognition step, a registered place is recognized using the second estimated value as the similarity.
- 12. The position estimation method according to any one of claims 7 to 9, wherein the local features are SIFT (Scale-Invariant Feature Transform) and/or SURF (Speeded-Up Robust Features) features.
- 13. A program for causing a computer to execute predetermined operations, the operations comprising:
a feature extraction step of extracting invariant features from an input image;
a matching step of obtaining a match between the input image and registered places by referring to a database in which each registered place and its invariant features are stored in association with each other;
a similarity calculation step of calculating, when the match is at or above a predetermined threshold, a similarity that includes registered places in the vicinity of the selected registered place; and
a position recognition step of recognizing that the input image is a registered place when the similarity is at or above a predetermined threshold,
wherein the feature extraction step comprises:
a local feature extraction step of extracting a local feature from each of the input images, the input images consisting of continuously captured images;
a feature matching step of matching the local features extracted in the local feature extraction step between successive input images;
a continuous feature selection step of selecting, as continuous features, the features matched between successive images in the feature matching step; and
an invariant feature calculation step of obtaining invariant features based on the continuous features,
and wherein, in the continuous feature selection step, the number of the consecutive images is made variable according to the number of the matched features.
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2012515708A | 2010-05-19 | 2011-02-10 | Position estimation device, position estimation method, and program |
| US13/698,572 | 2010-05-19 | 2011-02-10 | Position estimation device, position estimation method, and program |

Applications Claiming Priority (2)

| Application Number | Priority Date |
|---|---|
| JP2010115307 | 2010-05-19 |
| JP2010-115307 | 2010-05-19 |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2011145239A1 | 2011-11-24 |

Family ID: 44991362

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2011/000749 | Position estimation device, position estimation method, and program | 2010-05-19 | 2011-02-10 |

Country Status (3)

| Country | Document |
|---|---|
| US | US9098744B2 |
| JP | JPWO2011145239A1 |
| WO | WO2011145239A1 |
Citations (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2002048513A | 2000-05-26 | 2002-02-15 | Honda Motor Co Ltd | Position detection device, position detection method, and position detection program |
| JP2008304268A | 2007-06-06 | 2008-12-18 | Sony Corp | Information processing device, information processing method, and computer program |
| JP2010238008A | 2009-03-31 | 2010-10-21 | Fujitsu Ltd | Video feature extraction device and program |
Also Published As

| Publication number | Publication date |
|---|---|
| JPWO2011145239A1 | 2013-07-22 |
| US20130108172A1 | 2013-05-02 |
| US9098744B2 | 2015-08-04 |