WO2018235219A1 - Self-location estimation method, self-location estimation device, and self-location estimation program - Google Patents

Self-location estimation method, self-location estimation device, and self-location estimation program Download PDF

Info

Publication number
WO2018235219A1
Authority
WO
WIPO (PCT)
Prior art keywords
self
image
feature point
position estimation
unit
Prior art date
Application number
PCT/JP2017/022968
Other languages
French (fr)
Japanese (ja)
Inventor
充 仙洞田
Original Assignee
NEC Corporation (日本電気株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corporation (日本電気株式会社)
Priority to JP2019524792A priority Critical patent/JPWO2018235219A1/en
Priority to PCT/JP2017/022968 priority patent/WO2018235219A1/en
Publication of WO2018235219A1 publication Critical patent/WO2018235219A1/en

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C15/00 Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods

Definitions

  • The present invention relates to a self-position estimation method, a self-position estimation device, and a self-position estimation program, and more particularly to a self-position estimation method, a self-position estimation device, and a self-position estimation program that use video captured by a camera.
  • Measurement of the position and orientation of an imaging device based on image information is used for aligning real space with virtual objects in augmented reality (AR) or mixed reality (MR), for self-position estimation of robots and automobiles, and for three-dimensional modeling of objects and scenes.
  • Non-Patent Document 1 describes a method in which information on feature points in a scene is held as a three-dimensional map, and the position and orientation of the imaging device are estimated based on the correspondence between feature points detected in an image and feature points in the three-dimensional map.
  • The method described in Non-Patent Document 1 uses the position and orientation of the imaging device measured in the previous frame to measure the position and orientation of the imaging device in the current frame.
  • For example, the method described in Non-Patent Document 1 predicts the position and orientation of the imaging device in the current frame based on the position and orientation measured in the previous frame and a motion model.
  • The method described in Non-Patent Document 1 uses the predicted position and orientation of the imaging device to associate feature points in the three-dimensional map with feature points in the image.
  • The method described in Non-Patent Document 1 also uses the predicted position and orientation of the imaging device as the initial value of the calculation processing that is repeatedly executed to obtain the position and orientation of the imaging device in the current frame.
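  • (For illustration only: the motion-model prediction mentioned above is commonly realized as a constant-velocity model. The Python sketch below shows a generic version of such a prediction; it is an assumption made for explanation and is not taken from Non-Patent Document 1.)

```python
import numpy as np

def predict_pose_constant_velocity(R_prev2, t_prev2, R_prev1, t_prev1):
    """Predict the current camera pose by re-applying the relative motion
    observed between the two most recent frames (constant-velocity model).
    R_* are 3x3 world-to-camera rotations, t_* are translation 3-vectors."""
    # Relative motion from frame k-2 to frame k-1.
    R_rel = R_prev1 @ R_prev2.T
    t_rel = t_prev1 - R_rel @ t_prev2
    # Apply the same relative motion once more to predict the pose at frame k.
    R_pred = R_rel @ R_prev1
    t_pred = R_rel @ t_prev1 + t_rel
    return R_pred, t_pred
```

  The predicted pose can then serve as the initial value of the repeatedly executed calculation described above.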
  • If the movement of the imaging device between frames is large, or if the number of feature points detected in an image is small, measurement of the position and orientation of the imaging device by the method described in Non-Patent Document 1 may fail.
  • In order to prevent measurement failure, it is important to use images suitable for self-position estimation as the feature points used in the repeatedly executed calculation processing.
  • In view of the above, Patent Document 1, for example, describes a mobile robot system in which, when the robot encounters an undesirable event (e.g. an accidental collision), a special "danger" tag is inserted into the data set so that the data from the moments before the danger can be identified by the learning algorithm.
  • Patent Document 2 describes a self-position re-estimation method in which a result of applying a posture detector to a key frame is compared with a current view in consideration of a case where self-position estimation fails.
  • Patent Document 3 describes a sensor position estimation method in which, when a set of feature points extracted from a detected scene is recognized in any one of previously detected and stored experiences, the current position of the vehicle is assessed based on the recognized experience.
  • FIG. 6 is a block diagram showing a configuration example of a general self-position estimation system.
  • As shown in FIG. 6, the self-position estimation system 90 includes a camera 200 and a self-position estimation apparatus 900.
  • The self-position estimation apparatus 900 includes a signal processing unit 190. The camera 200 is communicably connected to the self-position estimation apparatus 900; the connection may be wired or wireless. Video data captured by the camera 200 is transmitted to the self-position estimation apparatus 900.
  • FIG. 7 is a block diagram showing another configuration example of a general self-position estimation system.
  • The self-position estimation system 91 shown in FIG. 7 is used when the acquired video data does not need to be processed immediately, such as when only the trajectory of a moving object is to be obtained.
  • The self-position estimation apparatus 900 in FIG. 7 is communicably connected to a recording medium 210 instead of the camera 200.
  • Video data stored in the recording medium 210 is transmitted to the self-position estimation apparatus 900 instead of video data captured by the camera 200.
  • As shown in FIGS. 6 and 7, video data captured by the camera 200 or video data stored in the recording medium 210 is transmitted to the self-position estimation apparatus 900 and processed by the signal processing unit 190.
  • FIG. 8 is a block diagram showing a configuration example of a general signal processing unit.
  • The signal processing unit 190 shown in FIG. 8 includes an image acquisition unit 191, a feature point tracking unit 193, a position/posture estimation unit 194, a map generation unit 195, and a minimization unit 196.
  • The video data transmitted to the self-position estimation apparatus 900 is input to the image acquisition unit 191 of the signal processing unit 190.
  • All of the video data input to the image acquisition unit 191 is passed to the feature point tracking unit 193.
  • The feature point tracking unit 193 acquires the amount of movement of each feature point from the input video data.
  • The position/posture estimation unit 194 estimates the camera position and camera attitude using the acquired movement amounts of the feature points.
  • The map generation unit 195 estimates the three-dimensional positions of the feature points and locally generates a map of the environment.
  • The minimization unit 196 then reconciles the locally generated environment map with the overall picture by minimizing the reprojection error computed over the entire image. Through this matching performed by the minimization unit 196, the self position based on the video data is estimated.
  • Since the map generation unit 195 generates the environment map locally, distortion may occur if the generated map is applied as it is to the overall picture.
  • The minimization unit 196 therefore maintains consistency with the overall picture so that no distortion arises even when the generated environment map is viewed as a whole.
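  • (For illustration only: the pipeline described above, that is, feature point tracking, position/posture estimation, local map generation, and reprojection-error minimization, can be sketched with OpenCV as follows. The two-frame structure and every function choice are assumptions made for explanation, not the actual configuration of the signal processing unit 190.)

```python
import cv2
import numpy as np

def track_and_estimate(prev_gray, curr_gray, K):
    """Two-frame sketch: track feature points, estimate the relative camera
    pose from their movement, and triangulate a local map of 3D points."""
    # Detect feature points in the previous grayscale frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    # Track them into the current frame (movement amount of each feature point).
    pts_curr, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                      pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_curr = pts_curr[status.ravel() == 1]

    # Estimate the relative camera motion from the tracked correspondences.
    E, inliers = cv2.findEssentialMat(good_prev, good_curr, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, inliers = cv2.recoverPose(E, good_prev, good_curr, K, mask=inliers)

    # Triangulate the 3D positions of the feature points (a local environment map).
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P0, P1,
                                  good_prev.reshape(-1, 2).T,
                                  good_curr.reshape(-1, 2).T)
    landmarks = (pts4d[:3] / pts4d[3]).T
    # A full system would next refine all poses and landmarks together by
    # minimizing the reprojection error over the whole image sequence.
    return R, t, landmarks
```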
  • Japanese Patent No. 5629390; Japanese Patent No. 5881743; JP-T-2015-508163; JP-A-2010-152787; JP-A-2016-157197; JP-A-2017-021427
  • The mobile robot system described in Patent Document 1 predicts danger based on acquired images with reference to information measured in the past.
  • The self-position re-estimation method described in Patent Document 2 recovers from estimation failure based on acquired images with reference to information measured in the past.
  • The sensor position estimation method described in Patent Document 3 evaluates the self-position estimate and the like based on acquired images with reference to information measured in the past.
  • Patent Documents 4 to 6 describe techniques for limiting the feature points to be referenced.
  • Patent Document 4 describes an environment map generation program that causes a computer to execute processing of generating a map of the environment by estimating the current position and the positions of measurement points based on measurement values of the measurement points obtained while moving autonomously in a predetermined environment.
  • The environment map generation program described in Patent Document 4 causes the computer to delete, from storage means, the position information corresponding to measurement points determined to be moving measurement points.
  • Patent Document 5 describes a self-position estimation device capable of accurately estimating the self position by generating an initial map of the environment used for estimating the self position.
  • The self-position estimation apparatus described in Patent Document 5 includes a distance calculation unit that extracts feature points of objects from time-series images input from an image input unit and calculates the distance from the image input unit to each feature point, and a moving object removal unit that removes the object corresponding to a feature point from the time-series images when the feature point is determined to be a moving object.
  • Patent Document 6 describes a self-position estimation apparatus that improves self-position estimation accuracy.
  • The self-position estimation apparatus described in Patent Document 6 includes feature point extraction means for extracting real feature points from an image acquired by image acquisition means, and feature point removal means for removing dynamic feature points of dynamic objects from among the real feature points extracted by the feature point extraction means.
  • Techniques for limiting the feature points to be referenced and extracted have thus already been provided. However, for self-position estimation to be executed faster, it is considered effective to reduce the amount of video data itself used for the estimation.
  • The present invention therefore aims to provide a self-position estimation method, a self-position estimation device, and a self-position estimation program that solve the above problem and can estimate the self position faster.
  • A self-position estimation method according to the present invention is a self-position estimation method executed in a self-position estimation apparatus on which an imaging device is mounted, and is characterized by extracting, by a predetermined method, candidates for feature point images, which are images capturing feature points, that is, regions in a moving image acquired from the imaging device that are used for estimating the self position of the self-position estimation apparatus, and estimating the self position using the extracted feature point image candidates.
  • A self-position estimation apparatus according to the present invention is a self-position estimation apparatus on which an imaging device is mounted, and is characterized by comprising an extraction unit that extracts, by a predetermined method, candidates for feature point images, which are images capturing feature points, that is, regions in a moving image acquired from the imaging device that are used for estimating the self position of the self-position estimation apparatus, and an estimation unit that estimates the self position using the extracted feature point image candidates.
  • A self-position estimation program according to the present invention is a self-position estimation program executed on a computer on which an imaging device is mounted, and is characterized by causing the computer to execute an extraction process of extracting, by a predetermined method, candidates for feature point images, which are images capturing feature points, that is, regions in a moving image acquired from the imaging device that are used for estimating the self position of the computer, and an estimation process of estimating the self position using the extracted feature point image candidates.
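  • (For illustration only: the division into an extraction unit and an estimation unit described in the preceding paragraphs can be sketched as two cooperating components, as below. The class and method names are assumptions chosen for explanation and do not appear in the patent.)

```python
from typing import Callable, List, Tuple
import numpy as np

class ExtractionUnit:
    """Extracts feature point image candidates from the acquired moving image
    using a pluggable selection criterion (the 'predetermined method')."""
    def __init__(self, is_candidate: Callable[[np.ndarray], bool]):
        self.is_candidate = is_candidate

    def extract(self, frames: List[np.ndarray]) -> List[np.ndarray]:
        return [frame for frame in frames if self.is_candidate(frame)]

class EstimationUnit:
    """Estimates the self position using only the extracted candidates,
    so frames judged unsuitable never reach the heavier processing."""
    def estimate(self, candidates: List[np.ndarray]) -> Tuple[np.ndarray, np.ndarray]:
        # Placeholder: a real implementation would track feature points,
        # estimate the camera pose, build a local map, and minimize the
        # reprojection error, as described in the embodiment below.
        position, orientation = np.zeros(3), np.eye(3)
        return position, orientation
```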
  • FIG. 1 is a block diagram showing a configuration example of a first embodiment of a self-position estimation system according to the present invention.
  • FIG. 2 is a block diagram showing another configuration example of the first embodiment of the self-position estimation system according to the present invention.
  • The self-position estimation system of the present embodiment provides a video selection method used to select the key frames to be referenced.
  • The self-position estimation system of the present embodiment efficiently extracts, from video data captured by a camera, images in which feature points suitable for reference are captured, and compares the feature points in the extracted images. By extracting images in which feature points suitable for reference are captured, the self-position estimation system of the present embodiment realizes faster self-position estimation.
  • The self-position estimation system 10 of this embodiment includes a self-position estimation apparatus 100 and a camera 200.
  • The configuration of the self-position estimation system 10 of this embodiment is the same as that of the self-position estimation system 90 shown in FIG. 6 except for the self-position estimation apparatus 100.
  • The self-position estimation system 11 of the present embodiment includes a self-position estimation apparatus 100 and a recording medium 210.
  • The configuration of the self-position estimation system 11 of this embodiment is the same as that of the self-position estimation system 91 shown in FIG. 7 except for the self-position estimation apparatus 100.
  • Unlike the self-position estimation apparatus 900 shown in FIGS. 6 and 7, the self-position estimation apparatus 100 shown in FIGS. 1 and 2 includes a signal processing unit 110 instead of the signal processing unit 190.
  • The configuration of the signal processing unit 110 will be described below with reference to FIG. 3.
  • FIG. 3 is a block diagram showing a configuration example of the signal processing unit 110 according to the first embodiment.
  • The signal processing unit 110 shown in FIG. 3 includes an image acquisition unit 111, a filtering unit 112, a feature point tracking unit 113, a position/posture estimation unit 114, a map generation unit 115, a minimization unit 116, and a learning unit 117.
  • The configuration of the signal processing unit 110 is the same as that of the signal processing unit 190 shown in FIG. 8 except for the filtering unit 112 and the learning unit 117.
  • The functions of the image acquisition unit 111, the feature point tracking unit 113, the position/posture estimation unit 114, the map generation unit 115, and the minimization unit 116 are the same as those of the image acquisition unit 191, the feature point tracking unit 193, the position/posture estimation unit 194, the map generation unit 195, and the minimization unit 196, respectively.
  • The filtering unit 112 has a function of extracting, from the video data input from the image acquisition unit 111, images in which areas that may be used as feature points are captured.
  • Hereinafter, an image in which a feature point is captured is referred to as a feature point image. That is, the filtering unit 112 extracts candidates for feature point images.
  • The feature point image candidates extracted by the filtering unit 112 are used for the self-position estimation process.
  • However, in this embodiment, an image candidate extracted by the filtering unit 112 is not necessarily adopted as a feature point image every time.
  • When a difference equal to or greater than a predetermined threshold arises between an extracted image candidate and an existing reference point (feature point), an area suitable as a reference point is registered as a new reference point.
  • In the signal processing unit 110 of this embodiment, each acquired image is thus sorted into images adopted as feature point images and images not adopted as feature point images.
  • The minimization unit 116 of the present embodiment inputs the two kinds of images sorted in this way to the learning unit 117.
  • The learning unit 117 has a function of acquiring, by executing learning processing, the feature amounts of images adopted as feature point images and the feature amounts of images not adopted as feature point images.
  • The learning unit 117 inputs the feature amounts obtained by the learning processing to the filtering unit 112.
  • The filtering unit 112 extracts feature point image candidates using the input feature amounts. That is, in the signal processing unit 110 of the present embodiment, the images to be extracted as feature point image candidates are selected through the learning processing.
  • The learning unit 117 learns the feature amounts of feature point images based on the images used as feature point images in the self-position estimation process. The learning unit 117 also learns the feature amounts of images unsuitable as feature point images based on the images not used as feature point images in the self-position estimation process.
  • The filtering unit 112 may use the learned feature amounts to exclude images unsuitable as feature point images from the reference images in the self-position estimation process.
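  • (For illustration only: one simple way to realize the adopted / not-adopted learning loop described above is a binary classifier over per-image descriptors. The histogram descriptor and the logistic-regression model below are assumptions made for explanation, not the patent's method.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class LearningUnit:
    """Learns, from images adopted and not adopted as feature point images,
    a model scoring how suitable a new image is as a candidate."""
    def __init__(self):
        self.model = LogisticRegression(max_iter=1000)

    @staticmethod
    def feature_amount(image: np.ndarray) -> np.ndarray:
        # Illustrative per-image descriptor: a normalized intensity histogram.
        hist, _ = np.histogram(image, bins=32, range=(0, 255))
        return hist / max(hist.sum(), 1)

    def learn(self, adopted, not_adopted):
        # adopted / not_adopted are lists of images (2D numpy arrays).
        X = np.array([self.feature_amount(im) for im in adopted + not_adopted])
        y = np.array([1] * len(adopted) + [0] * len(not_adopted))
        self.model.fit(X, y)

    def suitability(self, image: np.ndarray) -> float:
        return float(self.model.predict_proba(
            [self.feature_amount(image)])[0, 1])

class FilteringUnit:
    """Keeps only frames whose learned suitability exceeds a threshold."""
    def __init__(self, learning_unit: LearningUnit, threshold: float = 0.5):
        self.learning_unit = learning_unit
        self.threshold = threshold

    def extract_candidates(self, frames):
        return [f for f in frames
                if self.learning_unit.suitability(f) >= self.threshold]
```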
  • Note that a model of the features of images in which areas useful as reference points (feature points) are captured may be input to the filtering unit 112 in advance as a selection criterion.
  • The filtering unit 112 may extract feature point image candidates using the input selection criterion.
  • As described above, the self-position estimation apparatus 100 of this embodiment patterns in advance, by setting selection criteria or the like, the images in which the appropriate feature points required for self-position estimation are captured.
  • When the patterns of images in which appropriate feature points are captured are grasped in advance, the images to be referenced are limited compared with the case where all acquired images are referenced and the types and patterns of extracted feature points become enormous, so the computation time and computation cost are reduced. That is, when the self-position estimation apparatus 100 of this embodiment is used, the computation time required for self-position estimation is shortened, and self-position estimation and environment mapping are performed faster.
  • In addition, because the self-position estimation apparatus 100 of the present embodiment selects images in which appropriate reference points (feature points) are captured, it can eliminate the instability of the reference images. That is, when the self-position estimation apparatus 100 of this embodiment is used, the self-position estimation accuracy can be further improved.
  • FIG. 4 is a flowchart showing the operation of the self-position estimation process by the self-position estimation apparatus 100 according to the first embodiment.
  • First, the video data transmitted to the self-position estimation apparatus 100 is input to the image acquisition unit 111 of the signal processing unit 110 (step S101).
  • The image acquisition unit 111 inputs the input video data to the filtering unit 112.
  • Next, the filtering unit 112 extracts feature point image candidates from the input video data (step S102).
  • The filtering unit 112 inputs the extracted feature point image candidates to the feature point tracking unit 113.
  • Next, the feature point tracking unit 113 acquires the movement amounts of the feature points based on the input feature point image candidates (step S103).
  • The feature point tracking unit 113 inputs the acquired movement amounts of the feature points to the position/posture estimation unit 114.
  • The feature point tracking unit 113 may also acquire the movement amount of a feature point based on other images.
  • Next, the position/posture estimation unit 114 estimates the camera position and camera attitude using the input movement amounts of the feature points (step S104).
  • The position/posture estimation unit 114 inputs the estimated camera position and camera orientation to the map generation unit 115.
  • Next, the map generation unit 115 estimates the three-dimensional positions of the feature points based on the input camera position and camera attitude, and locally generates a map of the environment (step S105).
  • The map generation unit 115 inputs the generated environment map to the minimization unit 116.
  • Next, the minimization unit 116 reconciles the locally generated environment map with the overall picture by minimizing the reprojection error computed over the entire image (step S106). The minimization unit 116 also inputs, to the learning unit 117, the images adopted as feature point images and the images not adopted as feature point images.
  • Next, the learning unit 117 performs the learning process based on the input images to acquire the feature amounts of the images adopted as feature point images and of the images not adopted as feature point images (step S107).
  • The learning unit 117 inputs the acquired feature amounts to the filtering unit 112 (step S108).
  • The self-position estimation apparatus 100 then ends the self-position estimation process.
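  • (For illustration only: steps S101 to S108 above can be read as one cycle of the following processing loop. The unit objects and callables are assumed interfaces, in the spirit of the earlier sketches, introduced only to show the order of the steps.)

```python
def self_position_estimation_cycle(video_frames, filtering_unit, learning_unit,
                                   feature_tracker, pose_estimator,
                                   map_generator, minimizer):
    # S101: video data is input to the image acquisition unit.
    frames = list(video_frames)
    # S102: the filtering unit extracts feature point image candidates.
    candidates = filtering_unit.extract_candidates(frames)
    # S103: movement amounts of feature points are obtained from the candidates.
    movements = feature_tracker(candidates)
    # S104: the camera position and attitude are estimated from the movements.
    pose = pose_estimator(movements)
    # S105: a local map of the environment is generated.
    local_map = map_generator(pose, movements)
    # S106: the local map is reconciled with the overall picture by minimizing
    # the reprojection error; as a by-product, images adopted and not adopted
    # as feature point images are handed to the learning unit.
    adopted, not_adopted = minimizer(local_map, candidates)
    # S107-S108: the learning unit updates the feature amounts and feeds them
    # back to the filtering unit for the next cycle.
    learning_unit.learn(adopted, not_adopted)
    return pose, local_map
```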
  • The self-position estimation apparatus 100 of this embodiment may repeatedly execute the self-position estimation process shown in FIG. 4.
  • As described above, the self-position estimation apparatus 100 selects, from the camera video used for self-position estimation, images in which areas that may be used as feature points are captured, and extracts the selected images.
  • The learning unit 117 of the self-position estimation apparatus 100 acquires, through learning processing based on the video (environment) used for self-position estimation, the feature amounts of images in which areas that may be used as feature points or line features are captured.
  • Using the acquired image feature amounts, the filtering unit 112 can select stable feature points or line features from images in which natural features are captured, and can use the selected feature points or line features as reference points.
  • That is, the self-position estimation apparatus 100 of this embodiment can select appropriate feature points as reference points and extract the selected feature points.
  • When the self-position estimation apparatus 100 of this embodiment is used, it is not necessary to consider all feature points in the image. That is, since only images appropriate as feature point images are focused on, the calculation time and calculation cost required for self-position estimation are reduced.
  • For example, a feature of an image suitable as a reference image is that an invariant reference object, such as a bridge pier of a highway, is captured in it.
  • Conversely, a feature of an image unsuitable as a reference image is that a reference object that may move, such as a car parked on the road, is captured in it.
  • The filtering unit 112 selects and uses video in which invariant reference objects are captured and which is therefore easy to use as reference images. That is, the self-position estimation apparatus 100 according to the present embodiment can reduce the calculation cost of extracting feature points from images. When the self-position estimation apparatus 100 of the present embodiment is used, the calculation time required for self-position estimation is shortened, so self-position estimation and environment mapping are performed faster.
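  • (For illustration only: if a per-pixel semantic labeler were available, the preference for invariant reference objects such as bridge piers over movable objects such as parked cars could be approximated as below. The segment_labels function and the class lists are hypothetical placeholders, not part of the patent.)

```python
STATIC_CLASSES = {"building", "bridge", "pole", "road"}    # assumed label names
MOVABLE_CLASSES = {"car", "person", "bicycle"}             # assumed label names

def static_content_ratio(image, segment_labels):
    """segment_labels(image) is a hypothetical per-pixel labeler returning a
    2D array of class names; the ratio of 'static' pixels can serve as a
    rough suitability score for using the frame as a reference image."""
    labels = segment_labels(image)
    static = sum((labels == c).sum() for c in STATIC_CLASSES)
    movable = sum((labels == c).sum() for c in MOVABLE_CLASSES)
    total = static + movable
    return static / total if total else 0.0
```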
  • The self-position estimation apparatus 100 can also eliminate the instability of the reference images because it selects images in which appropriate reference points are captured. That is, when the self-position estimation apparatus 100 of this embodiment is used, the self-position estimation accuracy can be further improved.
  • The self-position estimation apparatus 100 of the present embodiment can reduce the calculation time required for self-position estimation compared with a general self-position estimation method, and can therefore estimate the self position faster.
  • The self-position estimation apparatus 100 according to the present embodiment can eliminate the instability of the reference images because it selects images in which appropriate reference points are captured. That is, the self-position estimation apparatus 100 can further improve the self-position estimation accuracy.
  • The self-position estimation apparatus 100 of this embodiment, and each unit in it, may be implemented in hardware. For example, the image acquisition unit 111, the filtering unit 112, the feature point tracking unit 113, the position/posture estimation unit 114, the map generation unit 115, the minimization unit 116, and the learning unit 117 may each be realized by an LSI (Large Scale Integration) circuit, or they may be realized by one LSI.
  • FIG. 5 is a block diagram showing an outline of a self-position estimation apparatus according to the present invention.
  • The self-position estimation device 20 is a self-position estimation device on which an imaging device 21 (for example, the camera 200) is mounted.
  • The self-position estimation device 20 comprises an extraction unit 22 (for example, the filtering unit 112) that extracts, by a predetermined method, candidates for a feature point image, which is an image capturing a feature point, that is, a region in a moving image acquired from the imaging device 21 that is used for estimating the self position, and an estimation unit 23 (for example, the feature point tracking unit 113, the position/posture estimation unit 114, the map generation unit 115, and the minimization unit 116) that estimates the self position using the extracted feature point image candidates.
  • With such a configuration, the self-position estimation apparatus can estimate its own position more quickly.
  • The self-position estimation apparatus 20 may include an acquisition unit (for example, the learning unit 117) that executes learning processing to acquire information on images used for extracting feature point image candidates, and the extraction unit 22 may extract feature point image candidates using the acquired image information.
  • With such a configuration, the self-position estimation apparatus can extract images learned through the learning process as feature point image candidates.
  • The acquisition unit may execute the learning process based on images that, among the feature point image candidates, were used as feature point images for estimating the self position.
  • With such a configuration, the self-position estimation apparatus can learn the feature amounts of feature point images.
  • The acquisition unit may execute the learning process based on images that, among the feature point image candidates, were not used as feature point images for estimating the self position, and the extraction unit 22 may extract feature point image candidates after excluding, from the moving image acquired from the imaging device 21, the regions indicated by the information acquired through the learning process.
  • With such a configuration, the self-position estimation apparatus can estimate its own position more quickly.
  • The extraction unit 22 may also extract feature point image candidates using a model of regions that can be used as feature points.
  • With such a configuration, the self-position estimation apparatus can extract feature point image candidates that reflect content designated by the user in advance.
  • The self-position estimation apparatus 20 may include a storage unit (for example, the recording medium 210) that stores the moving image acquired from the imaging device 21, and the extraction unit 22 may extract feature point image candidates from the moving image stored in the storage unit.
  • With such a configuration, the self-position estimation apparatus can estimate a past self position based on a moving image for which a predetermined time has elapsed since it was captured.
  • The present invention is suitably applied to applications in which a robot, typified by an unmanned aerial robot, grasps its surrounding space and immediately detects and avoids danger.
  • The present invention is also suitably applied when a robot autonomously performs tasks in places that are difficult or dangerous for people to enter, or over vast areas that cannot be covered manually.
  • The present invention is industrially effective when applied to autonomous inspection, by robots, of infrastructure and plants that deteriorate with age.
  • When the present invention is applied, the labor, cost, and time required for such autonomous inspections may be reduced.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A self-location estimation device 20 has an image capture device 21 mounted thereon, and comprises: an extraction unit 22 that extracts, using a prescribed method and from video obtained from the image capture device 21, candidates for a characteristic point image that is an image in which a characteristic point is captured, the characteristic point being a region in the video used for estimating the self-location of the self-location estimation device 20; and an estimation unit 23 that estimates the self-location using the extracted candidates for a characteristic point image.

Description

Self-position estimation method, self-position estimation device, and self-position estimation program
 The present invention relates to a self-position estimation method, a self-position estimation device, and a self-position estimation program, and more particularly to a self-position estimation method, a self-position estimation device, and a self-position estimation program that use video captured by a camera.
 Measurement of the position and orientation of an imaging device based on image information is used for aligning real space with virtual objects in augmented reality (AR) or mixed reality (MR), for self-position estimation of robots and automobiles, and for three-dimensional modeling of objects and scenes.
 Non-Patent Document 1 describes a method in which information on feature points in a scene is held as a three-dimensional map, and the position and orientation of the imaging device are estimated based on the correspondence between feature points detected in an image and feature points in the three-dimensional map.
 The method described in Non-Patent Document 1 uses the position and orientation of the imaging device measured in the previous frame to measure the position and orientation of the imaging device in the current frame.
 For example, the method described in Non-Patent Document 1 predicts the position and orientation of the imaging device in the current frame based on the position and orientation of the imaging device measured in the previous frame and a motion model. The method described in Non-Patent Document 1 uses the predicted position and orientation of the imaging device to associate feature points in the three-dimensional map with feature points in the image.
 The method described in Non-Patent Document 1 also uses the predicted position and orientation of the imaging device as the initial value of the calculation processing that is repeatedly executed to obtain the position and orientation of the imaging device in the current frame.
 If the movement of the imaging device between frames is large, or if the number of feature points detected in an image is small, measurement of the position and orientation of the imaging device by the method described in Non-Patent Document 1 may fail.
 In order to prevent measurement failure, it is important to use images suitable for self-position estimation as the feature points used in the repeatedly executed calculation processing. For example, there is a method of obtaining feature points that do not move in the image in the course of self-position estimation, and a method of obtaining appropriate feature points by excluding moving feature points in the image.
 In view of the above, Patent Document 1, for example, describes a mobile robot system in which, when the robot encounters an undesirable event (e.g. an accidental collision), a special "danger" tag is inserted into the data set so that the data from the moments before the danger can be identified by the learning algorithm.
 Patent Document 2 describes a self-position re-estimation method in which, in consideration of the case where self-position estimation fails, the result of applying a posture detector to a key frame is compared with the current view.
 The self-position re-estimation method described in Patent Document 2 is characterized in that a shape estimation process using a machine learning classifier or the like and a semantic image labeling process are applied to the current frame and previous frames.
 Patent Document 3 describes a sensor position estimation method in which, when a set of feature points extracted from a detected scene is recognized in any one of previously detected and stored experiences, the current position of the vehicle is assessed based on the recognized experience.
 Examples of apparatuses to which the above self-position estimation techniques are applied are shown below. FIG. 6 is a block diagram showing a configuration example of a general self-position estimation system. As shown in FIG. 6, the self-position estimation system 90 includes a camera 200 and a self-position estimation apparatus 900.
 As shown in FIG. 6, the self-position estimation apparatus 900 includes a signal processing unit 190. The camera 200 is communicably connected to the self-position estimation apparatus 900. The connection between the camera 200 and the self-position estimation apparatus 900 may be wired or wireless. Video data captured by the camera 200 is transmitted to the self-position estimation apparatus 900.
 FIG. 7 is a block diagram showing another configuration example of a general self-position estimation system. The self-position estimation system 91 shown in FIG. 7 is used when the acquired video data does not need to be processed immediately, such as when only the trajectory of a moving object is to be obtained.
 As shown in FIG. 7, the self-position estimation apparatus 900 is communicably connected to a recording medium 210 instead of the camera 200. In the example shown in FIG. 7, video data stored in the recording medium 210 is transmitted to the self-position estimation apparatus 900 instead of video data captured by the camera 200.
 As shown in FIGS. 6 and 7, video data captured by the camera 200 or video data stored in the recording medium 210 is transmitted to the self-position estimation apparatus 900 and processed by the signal processing unit 190.
 Next, the configuration of the signal processing unit 190 will be described with reference to FIG. 8. FIG. 8 is a block diagram showing a configuration example of a general signal processing unit. The signal processing unit 190 shown in FIG. 8 includes an image acquisition unit 191, a feature point tracking unit 193, a position/posture estimation unit 194, a map generation unit 195, and a minimization unit 196.
 The video data transmitted to the self-position estimation apparatus 900 is input to the image acquisition unit 191 of the signal processing unit 190. All of the video data input to the image acquisition unit 191 is input to the feature point tracking unit 193.
 Next, the feature point tracking unit 193 acquires the movement amounts of the feature points from the input video data. The position/posture estimation unit 194 then estimates the camera position and camera attitude using the acquired movement amounts of the feature points.
 Next, the map generation unit 195 estimates the three-dimensional positions of the feature points and locally generates a map of the environment. The minimization unit 196 then reconciles the locally generated environment map with the overall picture by minimizing the reprojection error computed over the entire image. Through this matching performed by the minimization unit 196, the self position based on the video data is estimated.
 Since the map generation unit 195 generates the environment map locally, distortion may occur if the generated map is applied as it is to the overall picture. The minimization unit 196 therefore maintains consistency with the overall picture so that no distortion arises even when the generated environment map is viewed as a whole.
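 For reference, the reprojection error minimized here is commonly written as the following bundle-adjustment style objective over the camera poses (R_j, t_j) and the three-dimensional feature point positions X_i, where x_ij is the observed image position of feature point i in frame j, K is the camera intrinsic matrix, and \pi is the perspective projection. This standard form is given only as background and is not quoted from the patent.

$$
\min_{\{R_j,\,t_j\},\,\{X_i\}} \sum_{i,j} \bigl\| x_{ij} - \pi\bigl(K\,(R_j X_i + t_j)\bigr) \bigr\|^{2}
$$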
Japanese Patent No. 5629390; Japanese Patent No. 5881743; JP-T-2015-508163; JP-A-2010-152787; JP-A-2016-157197; JP-A-2017-021427
 The mobile robot system described in Patent Document 1 predicts danger based on acquired images with reference to information measured in the past. The self-position re-estimation method described in Patent Document 2 recovers from estimation failure based on acquired images with reference to information measured in the past. The sensor position estimation method described in Patent Document 3 evaluates the self-position estimate and the like based on acquired images with reference to information measured in the past.
 That is, in each of the above self-position estimation techniques, the types and patterns of feature points to be referenced are enormous, and the reference processing takes a long time. Furthermore, in each of the above techniques, the processing of extracting feature points suitable for reference also requires a large cost. Therefore, when these self-position estimation techniques are used to estimate the self position immediately, there is a problem that the reference processing cannot be executed in a short time.
 Patent Documents 4 to 6 describe techniques for limiting the feature points to be referenced. Patent Document 4 describes an environment map generation program that causes a computer to execute processing of generating a map of the environment by estimating the current position and the positions of measurement points based on measurement values of the measurement points obtained while moving autonomously in a predetermined environment. The environment map generation program described in Patent Document 4 causes the computer to delete, from storage means, the position information corresponding to measurement points determined to be moving measurement points.
 Patent Document 5 describes a self-position estimation device capable of accurately estimating the self position by generating an initial map of the environment used for estimating the self position. The self-position estimation device described in Patent Document 5 includes a distance calculation unit that extracts feature points of objects from time-series images input from an image input unit and calculates the distance from the image input unit to each feature point, and a moving object removal unit that removes the object corresponding to a feature point from the time-series images when the feature point is determined to be a moving object.
 Patent Document 6 describes a self-position estimation device that improves self-position estimation accuracy. The self-position estimation device described in Patent Document 6 includes feature point extraction means for extracting real feature points from an image acquired by image acquisition means, and feature point removal means for removing dynamic feature points of dynamic objects from among the real feature points extracted by the feature point extraction means.
 As described above, techniques for limiting the feature points to be referenced and extracted have already been provided. However, for self-position estimation to be executed faster, it is considered effective to reduce the amount of video data itself used for the estimation.
[Object of the invention]
 The present invention therefore aims to provide a self-position estimation method, a self-position estimation device, and a self-position estimation program that solve the above problem and can estimate the self position faster.
 A self-position estimation method according to the present invention is a self-position estimation method executed in a self-position estimation apparatus on which an imaging device is mounted, and is characterized by extracting, by a predetermined method, candidates for feature point images, which are images capturing feature points, that is, regions in a moving image acquired from the imaging device that are used for estimating the self position of the self-position estimation apparatus, and estimating the self position using the extracted feature point image candidates.
 A self-position estimation apparatus according to the present invention is a self-position estimation apparatus on which an imaging device is mounted, and is characterized by comprising an extraction unit that extracts, by a predetermined method, candidates for feature point images, which are images capturing feature points, that is, regions in a moving image acquired from the imaging device that are used for estimating the self position of the self-position estimation apparatus, and an estimation unit that estimates the self position using the extracted feature point image candidates.
 A self-position estimation program according to the present invention is a self-position estimation program executed on a computer on which an imaging device is mounted, and is characterized by causing the computer to execute an extraction process of extracting, by a predetermined method, candidates for feature point images, which are images capturing feature points, that is, regions in a moving image acquired from the imaging device that are used for estimating the self position of the computer, and an estimation process of estimating the self position using the extracted feature point image candidates.
 According to the present invention, the self position can be estimated faster.
FIG. 1 is a block diagram showing a configuration example of a first embodiment of a self-position estimation system according to the present invention.
FIG. 2 is a block diagram showing another configuration example of the first embodiment of the self-position estimation system according to the present invention.
FIG. 3 is a block diagram showing a configuration example of the signal processing unit 110 of the first embodiment.
FIG. 4 is a flowchart showing the operation of the self-position estimation process performed by the self-position estimation apparatus 100 of the first embodiment.
FIG. 5 is a block diagram showing an outline of a self-position estimation apparatus according to the present invention.
FIG. 6 is a block diagram showing a configuration example of a general self-position estimation system.
FIG. 7 is a block diagram showing another configuration example of a general self-position estimation system.
FIG. 8 is a block diagram showing a configuration example of a general signal processing unit.
[First Embodiment]
[Description of structure]
 Hereinafter, an embodiment of the present invention will be described with reference to the drawings. FIG. 1 is a block diagram showing a configuration example of a first embodiment of a self-position estimation system according to the present invention. FIG. 2 is a block diagram showing another configuration example of the first embodiment of the self-position estimation system according to the present invention.
 The self-position estimation system of this embodiment provides a video selection method used to select the key frames to be referenced. The self-position estimation system of this embodiment efficiently extracts, from video data captured by a camera, images in which feature points suitable for reference are captured, and compares the feature points in the extracted images. By extracting images in which feature points suitable for reference are captured, the self-position estimation system of this embodiment realizes faster self-position estimation.
 As shown in FIG. 1, the self-position estimation system 10 of this embodiment includes a self-position estimation apparatus 100 and a camera 200. The configuration of the self-position estimation system 10 of this embodiment is the same as that of the self-position estimation system 90 shown in FIG. 6 except for the self-position estimation apparatus 100.
 As shown in FIG. 2, the self-position estimation system 11 of this embodiment includes a self-position estimation apparatus 100 and a recording medium 210. The configuration of the self-position estimation system 11 of this embodiment is the same as that of the self-position estimation system 91 shown in FIG. 7 except for the self-position estimation apparatus 100.
 Unlike the self-position estimation apparatus 900 shown in FIGS. 6 and 7, the self-position estimation apparatus 100 shown in FIGS. 1 and 2 includes a signal processing unit 110 instead of the signal processing unit 190. The configuration of the signal processing unit 110 will be described below with reference to FIG. 3.
 FIG. 3 is a block diagram showing a configuration example of the signal processing unit 110 of the first embodiment. The signal processing unit 110 shown in FIG. 3 includes an image acquisition unit 111, a filtering unit 112, a feature point tracking unit 113, a position/posture estimation unit 114, a map generation unit 115, a minimization unit 116, and a learning unit 117.
 The configuration of the signal processing unit 110 of this embodiment is the same as that of the signal processing unit 190 shown in FIG. 8 except for the filtering unit 112 and the learning unit 117. The functions of the image acquisition unit 111, the feature point tracking unit 113, the position/posture estimation unit 114, the map generation unit 115, and the minimization unit 116 are the same as those of the image acquisition unit 191, the feature point tracking unit 193, the position/posture estimation unit 194, the map generation unit 195, and the minimization unit 196, respectively.
 The filtering unit 112 has a function of extracting, from the video data input from the image acquisition unit 111, images in which areas that may be used as feature points are captured. Hereinafter, an image in which a feature point is captured is referred to as a feature point image. That is, the filtering unit 112 extracts candidates for feature point images.
 The feature point image candidates extracted by the filtering unit 112 are used for the self-position estimation process. However, in this embodiment, an image candidate extracted by the filtering unit 112 is not necessarily adopted as a feature point image every time. When a difference equal to or greater than a predetermined threshold arises between an extracted image candidate and an existing reference point (feature point), an area suitable as a reference point is registered as a new reference point.
 In the signal processing unit 110 of this embodiment, each acquired image is thus sorted into images adopted as feature point images and images not adopted as feature point images. The minimization unit 116 of this embodiment inputs the two kinds of images sorted in this way to the learning unit 117.
 The learning unit 117 has a function of acquiring, by executing learning processing, the feature amounts of images adopted as feature point images and the feature amounts of images not adopted as feature point images.
 The learning unit 117 inputs the feature amounts obtained by the learning processing to the filtering unit 112. The filtering unit 112 extracts feature point image candidates using the input feature amounts. That is, in the signal processing unit 110 of this embodiment, the images to be extracted as feature point image candidates are selected through the learning processing.
 The learning unit 117 learns the feature amounts of feature point images based on the images used as feature point images in the self-position estimation process. The learning unit 117 also learns the feature amounts of images unsuitable as feature point images based on the images not used as feature point images in the self-position estimation process. The filtering unit 112 may use the learned feature amounts to exclude images unsuitable as feature point images from the reference images in the self-position estimation process.
 Alternatively, a model of the features of images that capture regions useful as reference points (feature points) may be input to the filtering unit 112 in advance as a selection criterion. The filtering unit 112 may extract feature point image candidates using the input selection criterion.
 As described above, the self-position estimation apparatus 100 of this embodiment patterns in advance, for example by setting selection criteria, the images that capture feature points suitable for estimating the self-position.
 A general self-position estimation technique refers to every acquired image, so the variety of extracted feature point types and patterns becomes enormous. Because the reference process then takes a long time to execute, it cannot finish within a short time when the self-position must be estimated at high speed.
 When the patterns of images that capture suitable feature points are known in advance, the images to be referred to are limited, compared with the case where every acquired image is referred to and the variety of extracted feature point types and patterns becomes enormous; computation time and computation cost are therefore reduced. That is, when the self-position estimation apparatus 100 of this embodiment is used, the computation time required for self-position estimation is shortened, so self-position estimation and environment mapping are executed at higher speed.
 In addition, because the self-position estimation apparatus 100 of this embodiment selects images in which suitable reference points (feature points) are captured, instability of the reference images can be eliminated. That is, when the self-position estimation apparatus 100 of this embodiment is used, the accuracy of self-position estimation is further improved.
[Description of operation]
 The operation by which the self-position estimation apparatus 100 of this embodiment estimates its own position is described below with reference to FIG. 4. FIG. 4 is a flowchart showing the self-position estimation process performed by the self-position estimation apparatus 100 of the first embodiment.
 First, the video data transmitted to the self-position estimation apparatus 100 is input to the image acquisition unit 111 of the signal processing unit 110 (step S101). The image acquisition unit 111 inputs the video data to the filtering unit 112.
 Next, the filtering unit 112 extracts feature point image candidates from the input video data (step S102). The filtering unit 112 inputs the extracted feature point image candidates to the feature point tracking unit 113.
 Next, the feature point tracking unit 113 acquires the movement amounts of the feature points on the basis of the input feature point image candidates (step S103). The feature point tracking unit 113 inputs the acquired movement amounts to the position/posture estimation unit 114.
 If the feature point tracking unit 113 judges that the feature points captured in an input feature point image candidate are not suitable, it may acquire the feature point movement amounts on the basis of a different image.
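 Step S103 is not tied to a specific tracker in the disclosure. As a minimal sketch, assuming OpenCV is available, pyramidal Lucas-Kanade optical flow can provide the per-feature movement amounts; the helper name track_feature_points is hypothetical.

```python
import cv2
import numpy as np

def track_feature_points(prev_gray, next_gray):
    # Detect corner-like feature points in the previous frame.
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    # Track them into the next frame with pyramidal Lucas-Kanade optical flow.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, prev_pts, None)
    ok = status.ravel() == 1
    prev_ok = prev_pts[ok].reshape(-1, 2)
    next_ok = next_pts[ok].reshape(-1, 2)
    # Movement amount of each successfully tracked feature point between the two frames.
    movement = next_ok - prev_ok
    return prev_ok, next_ok, movement
```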
 Next, the position/posture estimation unit 114 estimates the camera position and camera posture using the input feature point movement amounts (step S104). The position/posture estimation unit 114 inputs the estimated camera position and camera posture to the map generation unit 115.
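 Step S104 likewise leaves the pose estimator unspecified. A common monocular approach, shown below only as an assumption and not as the patented method, recovers the relative camera rotation and translation from the tracked correspondences via the essential matrix; the camera intrinsic matrix K is assumed to be known from a separate calibration.

```python
import cv2

def estimate_pose(prev_pts, next_pts, K):
    # K is the 3x3 camera intrinsic matrix (assumed to come from prior calibration).
    E, inliers = cv2.findEssentialMat(prev_pts, next_pts, K,
                                      method=cv2.RANSAC, prob=0.999, threshold=1.0)
    # Decompose the essential matrix into a relative rotation R and translation t;
    # for a monocular image pair, t is recovered only up to scale.
    _, R, t, _ = cv2.recoverPose(E, prev_pts, next_pts, K, mask=inliers)
    return R, t
```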
 Next, the map generation unit 115 estimates the three-dimensional positions of the feature points on the basis of the input camera position and camera posture, and locally generates a map of the environment (step S105). The map generation unit 115 inputs the generated environment map to the minimization unit 116.
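 For step S105, one way to obtain the three-dimensional positions of the tracked feature points, again as a hedged sketch rather than the disclosed method, is linear triangulation from the two camera poses estimated above.

```python
import cv2
import numpy as np

def triangulate_points(K, R, t, prev_pts, next_pts):
    # Projection matrices of the two camera poses (the first pose is fixed at the origin).
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    # OpenCV expects 2xN arrays of image coordinates.
    pts_h = cv2.triangulatePoints(P0, P1, prev_pts.T, next_pts.T)
    # Convert from homogeneous to Euclidean 3D coordinates: a local map of feature points.
    return (pts_h[:3] / pts_h[3]).T
```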
 Next, the minimization unit 116 reconciles the locally generated environment map with the overall picture by minimizing the reprojection error computed over the whole image (step S106). The minimization unit 116 also inputs the images adopted as feature point images and the images not adopted as feature point images to the learning unit 117.
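 Full reprojection-error minimization over the whole image would jointly refine all camera poses and map points (bundle adjustment). The sketch below is a deliberately reduced assumption: it refines only a single camera pose against an already triangulated local map with scipy's least-squares solver, which conveys the idea of step S106 without claiming to reproduce it; the helper names are hypothetical.

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, pts3d, pts2d, K):
    # params = [rvec (3), tvec (3)]: one camera pose being refined against the local map.
    rvec, tvec = params[:3], params[3:6]
    proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, None)
    # Residuals are the differences between projected and observed image points.
    return (proj.reshape(-1, 2) - pts2d).ravel()

def refine_pose(rvec0, tvec0, pts3d, pts2d, K):
    x0 = np.hstack([rvec0.ravel(), tvec0.ravel()])
    result = least_squares(reprojection_residuals, x0, args=(pts3d, pts2d, K))
    return result.x[:3], result.x[3:6]
```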
 Next, the learning unit 117 performs a learning process on the input images and thereby acquires the feature quantities of the images adopted as feature point images and of the images not adopted as feature point images (step S107).
 Next, the learning unit 117 inputs the acquired feature quantities to the filtering unit 112 (step S108). After this input, the self-position estimation apparatus 100 ends the self-position estimation process. The self-position estimation apparatus 100 of this embodiment may execute the self-position estimation process shown in FIG. 4 repeatedly.
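 Tying the steps together, a single pass over the flowchart of FIG. 4 might look like the following loop. This is a sketch under the same assumptions as the helpers above, which it reuses together with their imports; the classifier clf is assumed to have been trained beforehand by the hypothetical train_filter, and the retraining of steps S107 and S108 is left as an offline call back into train_filter.

```python
def self_position_estimation_loop(frames, K, clf):
    # One pass through steps S101-S108 using the hypothetical helpers sketched above;
    # error handling and loop closure are omitted.
    poses, local_map = [], []
    candidates = select_candidates(frames, clf)                 # S102: filtering
    for prev, nxt in zip(candidates, candidates[1:]):
        prev_g = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        next_g = cv2.cvtColor(nxt, cv2.COLOR_BGR2GRAY)
        p0, p1, _ = track_feature_points(prev_g, next_g)        # S103: tracking
        R, t = estimate_pose(p0, p1, K)                         # S104: pose estimation
        local_map.append(triangulate_points(K, R, t, p0, p1))   # S105: local map
        poses.append((R, t))                                    # S106 would refine these
    return poses, local_map                                     # S107/S108: retrain clf offline
```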
[Description of effects]
 The self-position estimation apparatus 100 of this embodiment selects, from the video captured by the camera and used for self-position estimation, images that capture regions which may be used as feature points, and extracts the selected images.
 The learning unit 117 of the self-position estimation apparatus 100 acquires, through a learning process based on the video (environment) used for self-position estimation, the feature quantities of images that capture regions which may be used as feature points or line features. Using the acquired feature quantities, the filtering unit 112 can select stable feature points or line features even from video in which natural features are captured, and can use the selected feature points or line features as reference points.
 In contrast to a general self-position estimation apparatus, in which the variety of referenced feature point types and patterns becomes enormous because every acquired image is used, the self-position estimation apparatus 100 of this embodiment can select feature points suitable as reference points from the video content and extract the selected feature points.
 When the self-position estimation apparatus 100 of this embodiment is used, not every feature point in the video has to be examined. That is, because attention is paid only to images suitable as feature point images, the computation time and computation cost of self-position estimation are reduced.
 For example, when the self-position is estimated from structures that look highly similar across multiple images, as in infrastructure inspection or plant inspection, the features of images suitable as reference images and the features of images unsuitable as reference images are clearly separated.
 For example, a characteristic of an image suitable as a reference image is that it captures an invariant reference object, such as a highway bridge pier. Conversely, a characteristic of an image unsuitable as a reference image is that it captures a reference object that may move, such as a car parked on the road.
 The filtering unit 112 of this embodiment selects and uses video information that captures invariant reference objects and is therefore readily usable as reference images. That is, the self-position estimation apparatus 100 of this embodiment can reduce the computation cost of extracting feature points from images. When the self-position estimation apparatus 100 of this embodiment is used, the computation time required for self-position estimation is shortened, so self-position estimation and environment mapping are executed at higher speed.
 In addition, because the self-position estimation apparatus 100 of this embodiment selects images in which suitable reference points are captured, instability of the reference images can be eliminated. That is, when the self-position estimation apparatus 100 of this embodiment is used, the accuracy of self-position estimation is further improved.
 Compared with a general self-position estimation method, the self-position estimation apparatus 100 of this embodiment can shorten the computation time required for self-position estimation and can therefore estimate the self-position faster. Moreover, because it selects images in which suitable reference points are captured, it can eliminate the instability of the reference images. That is, the self-position estimation apparatus 100 can also further improve the accuracy of self-position estimation.
 The self-position estimation apparatus 100 of this embodiment may be realized, for example, by a CPU (Central Processing Unit) that executes processing in accordance with a program stored in a non-transitory storage medium. That is, the image acquisition unit 111, the filtering unit 112, the feature point tracking unit 113, the position/posture estimation unit 114, the map generation unit 115, the minimization unit 116, and the learning unit 117 may each be realized, for example, by a CPU that executes processing under program control.
 Each unit of the self-position estimation apparatus 100 of this embodiment may also be realized by hardware circuits. As an example, the image acquisition unit 111, the filtering unit 112, the feature point tracking unit 113, the position/posture estimation unit 114, the map generation unit 115, the minimization unit 116, and the learning unit 117 are each realized by an LSI (Large Scale Integration) circuit. They may also be realized by a single LSI.
 Next, an outline of the present invention is described. FIG. 5 is a block diagram showing an outline of the self-position estimation apparatus according to the present invention. The self-position estimation apparatus 20 according to the present invention is a self-position estimation apparatus on which an imaging device 21 (for example, the camera 200) is mounted, and comprises an extraction unit 22 (for example, the filtering unit 112) that extracts, by a predetermined method from a moving image acquired from the imaging device 21, candidates for feature point images, that is, images capturing feature points, which are regions in the moving image used for estimating the self-position of the self-position estimation apparatus 20, and an estimation unit 23 (for example, the feature point tracking unit 113, the position/posture estimation unit 114, the map generation unit 115, and the minimization unit 116) that estimates the self-position using the extracted feature point image candidates.
 With such a configuration, the self-position estimation apparatus can estimate its own position faster.
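 The decomposition into an extraction unit 22 and an estimation unit 23 described above can be expressed, purely as an illustrative assumption about interfaces and not as part of the disclosure, with the following minimal Python sketch; the class and method names are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Protocol, Tuple
import numpy as np

class ExtractionUnit(Protocol):       # corresponds to the extraction unit 22
    def extract_candidates(self, frames: List[np.ndarray]) -> List[np.ndarray]: ...

class EstimationUnit(Protocol):       # corresponds to the estimation unit 23
    def estimate(self, candidates: List[np.ndarray]) -> Tuple[np.ndarray, np.ndarray]: ...

@dataclass
class SelfPositionEstimationDevice:   # corresponds to the self-position estimation device 20
    extractor: ExtractionUnit
    estimator: EstimationUnit

    def run(self, frames: List[np.ndarray]) -> Tuple[np.ndarray, np.ndarray]:
        # Extract feature point image candidates, then estimate the self-position from them.
        return self.estimator.estimate(self.extractor.extract_candidates(frames))
```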
 The self-position estimation apparatus 20 may further comprise an acquisition unit (for example, the learning unit 117) that executes a learning process to acquire information on the images used for extracting feature point image candidates, and the extraction unit 22 may extract the feature point image candidates using the acquired image information.
 With such a configuration, the self-position estimation apparatus can extract images learned in the learning process as feature point image candidates.
 The acquisition unit may execute the learning process on the basis of the images that, among the feature point image candidates, were used as feature point images for estimating the self-position.
 With such a configuration, the self-position estimation apparatus can learn the feature quantities of feature point images.
 The acquisition unit may also execute the learning process on the basis of the images that, among the feature point image candidates, were not used as feature point images for estimating the self-position, and the extraction unit 22 may extract the feature point image candidates after excluding the regions indicated by the image information acquired in the learning process from the moving image acquired from the imaging device 21.
 With such a configuration, the self-position estimation apparatus can estimate its own position faster.
 The extraction unit 22 may also extract the feature point image candidates using a model of regions usable as feature points.
 With such a configuration, the self-position estimation apparatus can extract feature point image candidates that reflect content specified in advance by the user.
 The self-position estimation apparatus 20 may further comprise a storage unit (for example, the recording medium 210) that stores the moving image acquired from the imaging device 21, and the extraction unit 22 may extract the feature point image candidates from the moving image stored in the storage unit.
 With such a configuration, the self-position estimation apparatus can estimate a past self-position on the basis of a moving image captured a predetermined time earlier.
 Although the present invention has been described above with reference to the embodiments and examples, the present invention is not limited to the above embodiments and examples. Various changes that can be understood by those skilled in the art can be made to the configuration and details of the present invention within the scope of the present invention.
Industrial Applicability
 The present invention is suitably applied to applications in which a robot, typified by an unmanned flying robot, grasps the surrounding space by itself and immediately detects and avoids danger. The present invention is also suitably applied when a robot autonomously executes tasks in places that are difficult or dangerous for people to enter, and over vast areas that cannot be covered manually.
 In particular, the present invention is industrially effective for autonomous inspection by robots of infrastructure and plants that deteriorate over time. When the present invention is applied, the labor, cost, and time required for autonomous inspection may be reduced.
Reference Signs List
10, 11, 90, 91  Self-position estimation system
20, 100, 900  Self-position estimation device
21  Imaging device
22  Extraction unit
23  Estimation unit
110, 190  Signal processing unit
111, 191  Image acquisition unit
112  Filtering unit
113, 193  Feature point tracking unit
114, 194  Position/posture estimation unit
115, 195  Map generation unit
116, 196  Minimization unit
117  Learning unit
200  Camera
210  Recording medium

Claims (10)

  1.  A self-position estimation method executed in a self-position estimation device on which an imaging device is mounted, the method comprising:
      extracting, by a predetermined method from a moving image acquired from the imaging device, candidates for feature point images, a feature point image being an image in which a feature point, which is a region in the moving image used for estimating the self-position of the self-position estimation device, is captured; and
      estimating the self-position using the extracted feature point image candidates.
  2.  The self-position estimation method according to claim 1, wherein
      a learning process is executed to acquire information on images used for extracting feature point image candidates, and
      the feature point image candidates are extracted using the acquired image information.
  3.  The self-position estimation method according to claim 2, wherein the learning process is executed on the basis of images that, among the feature point image candidates, were used as feature point images for estimating the self-position.
  4.  The self-position estimation method according to claim 2 or 3, wherein
      the learning process is executed on the basis of images other than the images that, among the feature point image candidates, were used as feature point images for estimating the self-position, and
      the feature point image candidates are extracted after the regions indicated by the image information acquired in the learning process are excluded from the moving image acquired from the imaging device.
  5.  The self-position estimation method according to claim 1, wherein the feature point image candidates are extracted using a model of regions usable as feature points.
  6.  The self-position estimation method according to any one of claims 1 to 5, wherein
      the moving image acquired from the imaging device is stored, and
      the feature point image candidates are extracted from the stored moving image.
  7.  A self-position estimation device on which an imaging device is mounted, comprising:
      an extraction unit that extracts, by a predetermined method from a moving image acquired from the imaging device, candidates for feature point images, a feature point image being an image in which a feature point, which is a region in the moving image used for estimating the self-position of the self-position estimation device, is captured; and
      an estimation unit that estimates the self-position using the extracted feature point image candidates.
  8.  The self-position estimation device according to claim 7, further comprising an acquisition unit that executes a learning process to acquire information on images used for extracting feature point image candidates,
      wherein the extraction unit extracts the feature point image candidates using the acquired image information.
  9.  A self-position estimation program executed in a computer on which an imaging device is mounted, the program causing the computer to execute:
      an extraction process of extracting, by a predetermined method from a moving image acquired from the imaging device, candidates for feature point images, a feature point image being an image in which a feature point, which is a region in the moving image used for estimating the self-position of the computer, is captured; and
      an estimation process of estimating the self-position using the extracted feature point image candidates.
  10.  The self-position estimation program according to claim 9, further causing the computer to execute an acquisition process of executing a learning process to acquire information on images used for extracting feature point image candidates,
      wherein, in the extraction process, the feature point image candidates are extracted using the acquired image information.
PCT/JP2017/022968 2017-06-22 2017-06-22 Self-location estimation method, self-location estimation device, and self-location estimation program WO2018235219A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2019524792A JPWO2018235219A1 (en) 2017-06-22 2017-06-22 Self-location estimation method, self-location estimation device, and self-location estimation program
PCT/JP2017/022968 WO2018235219A1 (en) 2017-06-22 2017-06-22 Self-location estimation method, self-location estimation device, and self-location estimation program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/022968 WO2018235219A1 (en) 2017-06-22 2017-06-22 Self-location estimation method, self-location estimation device, and self-location estimation program

Publications (1)

Publication Number Publication Date
WO2018235219A1 true WO2018235219A1 (en) 2018-12-27

Family

ID=64737007

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/022968 WO2018235219A1 (en) 2017-06-22 2017-06-22 Self-location estimation method, self-location estimation device, and self-location estimation program

Country Status (2)

Country Link
JP (1) JPWO2018235219A1 (en)
WO (1) WO2018235219A1 (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010157093A (en) * 2008-12-26 2010-07-15 Toyota Central R&D Labs Inc Motion estimation device and program
JP2012138013A (en) * 2010-12-27 2012-07-19 Canon Inc Tracking device and control method therefor
JP2012238119A (en) * 2011-05-10 2012-12-06 Canon Inc Object recognition device, control method of object recognition device and program
JP2016502218A (en) * 2013-01-04 2016-01-21 クアルコム,インコーポレイテッド Mobile device based text detection and tracking
JP2015001791A (en) * 2013-06-14 2015-01-05 株式会社ジオ技術研究所 Image analysis apparatus

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2019082670A1 (en) * 2017-10-25 2020-11-12 ソニー株式会社 Information processing equipment, information processing methods, programs, and mobiles
JP2021531524A (en) * 2019-06-14 2021-11-18 高麗大学校産学協力団Korea University Research And Business Foundation User pose estimation method and device using 3D virtual space model
JP7138361B2 (en) 2019-06-14 2022-09-16 高麗大学校産学協力団 User Pose Estimation Method and Apparatus Using 3D Virtual Space Model
US11915449B2 (en) 2019-06-14 2024-02-27 Korea University Research And Business Foundation Method and apparatus for estimating user pose using three-dimensional virtual space model
WO2021095463A1 (en) * 2019-11-13 2021-05-20 オムロン株式会社 Self-position estimation model learning method, self-position estimation model learning device, self-position estimation model learning program, self-position estimation method, self-position estimation device, self-position estimation program, and robot
JP2021077287A (en) * 2019-11-13 2021-05-20 オムロン株式会社 Self-position estimation model learning method, self-position estimation model learning device, self-position estimation model learning program, self-position estimation method, self-position estimation device, self-position estimation program, and robot
JP7322670B2 (en) 2019-11-13 2023-08-08 オムロン株式会社 Self-localization model learning method, self-localization model learning device, self-localization model learning program, self-localization method, self-localization device, self-localization program, and robot
WO2022185482A1 (en) * 2021-03-04 2022-09-09 株式会社ソシオネクスト Information processing device, information processing method, and program

Also Published As

Publication number Publication date
JPWO2018235219A1 (en) 2020-03-19


Legal Events

121  EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 17915122; Country of ref document: EP; Kind code of ref document: A1)
ENP  Entry into the national phase (Ref document number: 2019524792; Country of ref document: JP; Kind code of ref document: A)
NENP  Non-entry into the national phase (Ref country code: DE)
122  EP: PCT application non-entry in European phase (Ref document number: 17915122; Country of ref document: EP; Kind code of ref document: A1)