JP3641335B2 - Position detection method using omnidirectional vision sensor - Google Patents

Position detection method using omnidirectional vision sensor

Info

Publication number
JP3641335B2
Authority
JP
Japan
Prior art keywords
image
visual sensor
detection
omnidirectional
omnidirectional visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP32283396A
Other languages
Japanese (ja)
Other versions
JPH10160463A (en)
Inventor
Ryuichi Oka
Takuichi Nishimura
Original Assignee
シャープ株式会社 (Sharp Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by シャープ株式会社 (Sharp Corporation)
Priority to JP32283396A priority Critical patent/JP3641335B2/en
Publication of JPH10160463A publication Critical patent/JPH10160463A/en
Application granted granted Critical
Publication of JP3641335B2 publication Critical patent/JP3641335B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Description

[0001]
BACKGROUND OF THE INVENTION
The present invention relates to a position detection method using an omnidirectional visual sensor for detecting the position of a mobile robot or the like.
[0002]
[Prior art]
Real-time processing is indispensable for the position estimation and guidance of mobile robots in real-world environments such as offices and homes, and robustness against disturbances during travel (slips, shakes, etc.) is required. Under such conditions, errors tend to accumulate in position estimation methods that use only internal sensors. Therefore, a method is used in which environmental information is obtained from an external sensor and the self-position is estimated by collation with an environmental model acquired in advance. External sensors include ultrasonic sensors, laser range finders, visual sensors, and the like; among them, a visual sensor, which can acquire a large amount of information, is promising.
[0003]
[Problems to be solved by the invention]
However, in order to mount such a visual sensor on a mobile robot and perform position detection, an identification plate having an optical feature, for example a barcode or a plate of a specific color, must be installed at a specific position on the ground so that it falls within the field of view. Such a limitation not only increases the cost of the apparatus, but also leads to situations where the position cannot be detected because there is no suitable place to install the identification plate.
[0004]
Therefore, in view of the above points, the object of the present invention is to provide a position detection method using an omnidirectional visual sensor that can perform accurate position detection with a visual sensor even if no identification plate is installed, or with relaxed restrictions on where an identification plate may be installed.
[0005]
[Means for Solving the Problems]
In order to achieve this object, in the invention of claim 1, an omnidirectional visual sensor having a field of view in all directions is mounted in advance on a moving body, the moving body is moved along a specific route, and images of the environment around the moving body are acquired in time series by the omnidirectional visual sensor as a standard pattern.
The acquired images and positions are associated with each other in advance, and while the moving body is moving along an arbitrary route, the images acquired by the omnidirectional visual sensor are compared with the images of the standard pattern in the form of image feature amounts. If a matching image is obtained by this comparison, the position associated with the matching standard-pattern image is used as the detection result.
[0006]
In the invention of claim 2, in the position detection method using an omnidirectional visual sensor according to claim 1, an image feature is extracted from each image acquired by the omnidirectional visual sensor while the moving body moves along the specific route, and that image feature is used as the standard pattern; when the acquired image of the omnidirectional visual sensor is compared with the image of the standard pattern, an image feature is extracted from the acquired image of the omnidirectional visual sensor and compared with the image feature of the standard pattern.
[0007]
The position detection method using an omnidirectional visual sensor according to claim 2, wherein the comparison of the image features is performed by a continuous DP matching method.
[0008]
According to a third aspect of the present invention, in the position detection method using the omnidirectional visual sensor according to the second aspect, the image features are compared by a non-monotonic continuous DP matching method.
[0011]
In the embodiments of the invention, the image comparison of claim 1 is performed in the form of image features (feature vector representation), as described for claim 2, and the position detection result is expressed by the standard pattern number v*(t) and the frame number τ*(t) that match the image feature.
[0012]
The continuous DP matching method according to claim 3 is expressed by the following equations (2) to (14). The difference from the conventional DP method is that it includes processing for rearranging the image features of the standard pattern in reverse order, which is expressed in the description of the optimum path using FIG. 8. In particular, this point will be understood by those skilled in the art from the fact that the last part of the standard pattern is a search target for matching within the search area of FIG. 8.
[0013]
DETAILED DESCRIPTION OF THE INVENTION
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
[0014]
In general, an environment model is described by a path model and a three-dimensional model of the work space and the objects associated with it. In this case, since a large amount of computation is required for matching between the three-dimensional model and the visual information, an approximate position in the path model, that is, a "global position", is obtained before that matching. (Here, a "global position" has a precision of about several meters, and a "local position" has a precision of about several centimeters.)
[0015]
On the other hand, it is considered that the robot can be guided visually by global position estimation alone if the following settings are made.
[0016]
Setting 1: Adoption of an environment model based on an image sequence. As shown in FIG. 1, the environment model is expressed as a topology map by the image sequence itself, subjected only to a simple compression process (no three-dimensional model is used). In this case the positional relationship between the images in the sequence is unnecessary; that is, neither travel information nor internal-sensor information is used. Information such as "this is Mr. A's seat" or "this is in front of the bookshelf" is stored together with the frame number of the image at the time of teaching the model.
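As an illustration only (the names and the use of Python dataclasses are assumptions of this sketch, not part of the patent text), the topology map of Setting 1 could be held as a plain time-ordered sequence of annotated, compressed frames:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MapFrame:
    """One node of the topology map: a compressed image taken while teaching."""
    frame_no: int                # frame number at model-teaching time
    feature: List[float]         # compressed image, e.g. the 3N-dimensional vector described later
    label: Optional[str] = None  # optional annotation such as "in front of the bookshelf"

# The environment model is simply the time-ordered sequence of such frames;
# no metric positions, odometry or internal-sensor data are stored.
EnvironmentMap = List[MapFrame]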
[0017]
Setting 2: Obstacle avoidance is possible with an ultrasonic sensor or the like. Guidance is performed with ultrasonic and infrared sensors using a robot called Nomad200 (see Reference 6).
[0018]
Setting 3: The accuracy of the travel trajectory during guidance may be low. If the visual information indicates that the robot has deviated from the guidance route, the robot can be guided by turning back and selecting another route.
[0019]
Setting 4: Small changes in environmental lighting. The travel environment is indoors, such as an office, where the influence of light from windows is small.
[0020]
Under these settings, this report proposes a method for guiding the robot by global position estimation alone, while aiming at a system configuration that can be extended to high-precision trajectory control and object recognition by local position estimation. When estimating the global position, the following two processes are important.
[0021]
1. Feature extraction: a feature vector is obtained from the input image by a simple compression process.
[0022]
2. Matching with the map: the input image is matched against the map to estimate the position in the map.
[0023]
In the present embodiment, a feature extraction method that does not depend on the camera direction, using an omnidirectional visual sensor, is proposed, and Non-monotonic Continuous Dynamic Programming (Non-monotonic CDP) is proposed as the matching method.
[0024]
The field of view used for photographing the environment can be broadly classified into that of a normal camera (see References 2 and 9) and that of a sensor capturing all directions (a panoramic view). With the former, normal field of view, not only is it difficult to estimate the local position and direction, but it is also difficult to estimate the global position when the direction of the robot differs significantly from the direction at the time of map creation.
[0025]
On the other hand, various omnidirectional visual sensors having an omnidirectional field of view have been studied (see Reference 3). If a 360-degree image of the surroundings is used, the local direction and the global position can be estimated easily. In particular, the omnidirectional vision sensor (HyperOmni Vision) using a hyperboloidal mirror proposed by Yamazawa et al. (see Reference 3 and FIG. 2) not only obtains an omnidirectional image including the lower field of view in real time, but also allows easy image correction. Furthermore, unlike a normal camera, it has the advantage that the local direction and the global position can be estimated easily.
[0026]
In order to estimate the global position, Zheng et al. (see Reference 4) used as the field of view a vertical slit oriented in a direction different from the traveling direction. Because the pixel values in the slit are integrated for each of R, G and B, the information at each position is robust to vertical shaking of the camera; however, the effects of shaking in the horizontal and rotational directions of the camera remain a concern. Maeda et al. (see Reference 1) use an omnidirectional image with a size of 64 pixels in the vertical direction and 256 pixels in the horizontal (circumferential) direction. Each horizontal line of the image is Fourier-transformed, and the position is estimated using the low-frequency intensity components (64 × 32 pixels). This method can estimate the position regardless of the camera direction of the robot, but the amount of information to be stored is large. Bang et al. (see Reference 5) perform robot guidance using an omnidirectional sensor, but do not describe a global position estimation method.
[0027]
On the other hand, the present applicant has proposed a global position estimation method using an environment model based on reduced images (thinned images after smoothing) from a normal camera (see References 9 and 10), and has shown that camera shake can be handled by using a plurality of significantly reduced images (see Reference 11). In this report, therefore, as shown in FIG. 2, a global position estimation method is proposed in which the omnidirectional field of view is divided horizontally into N regions (N < about 10) and only the 3N-dimensional information obtained by integrating the pixel values of each region for each of R, G and B is used as the feature vector. (It is also useful to apply an isotropic filter such as a Laplacian filter before dividing the field of view into N regions.)
This feature vector (1) is robust to camera shake, (2) does not depend on the camera direction of the robot, and (3) requires only a small amount of information to be stored. In particular, property (2) enables guidance by a hill-climbing method or the like, although the trajectory error is large.
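As an illustrative sketch of the feature extraction described above (the function name, the use of NumPy, and the exact masking of the circular field of view are assumptions of this sketch, not the patented implementation), the 3N-dimensional vector can be computed by averaging R, G and B over N concentric regions of the omnidirectional image:

import numpy as np

def extract_feature(image, center, radius, n):
    """Compute a 3N-dimensional feature vector from an omnidirectional image.

    The circular field of view (centre `center` = (cx, cy) in pixels, radius
    `radius`) is divided into `n` concentric ring-shaped regions, and the pixel
    values in each region are averaged separately for R, G and B.
    `image` is an H x W x 3 array.
    """
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - center[0], ys - center[1])
    feature = []
    for i in range(n):
        # i-th concentric region: radius*i/n <= dist < radius*(i+1)/n
        mask = (dist >= radius * i / n) & (dist < radius * (i + 1) / n)
        for ch in range(3):                      # one value per colour channel
            feature.append(float(image[..., ch][mask].mean()))
    return np.asarray(feature)                   # length 3 * n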
[0028]
In order to examine the effectiveness of this method, the HyperOmni Vision (HOBI) proposed by Yamazawa et al. (see Reference 3) was mounted on the Nomad as an omnidirectional visual sensor (FIG. 3), and an experiment was conducted. Data were collected at three points in an office while changing the conditions as follows, and the influence on the feature vector was examined.
[0029]
Change of camera position: the Nomad was moved about 3 m along the passage.
[0030]
Rotation of the camera: the stopped Nomad was rotated.
[0031]
Shake of the camera: the stopped Nomad was shaken by hand to simulate vibration during travel.
[0032]
Environmental change (passing people): as shown in FIG. 4, a person passed near the stopped Nomad.
[0033]
First, the center position and radius of the omnidirectional image were determined visually, the number N of horizontal divisions was set to 1, 3 and 16, the image was divided into N concentric regions, the pixel values were averaged within each region, and a 3N-dimensional feature vector was obtained. Next, the distance d between the feature vector s_0 = (s_0(1), s_0(2), ..., s_0(3N)) at the reference point and the feature vector s = (s(1), s(2), ..., s(3N)) after the condition change was obtained, and its change was examined. The distance d was obtained by the following expression, using the dimensionality 3N of the feature vector.
[0034]
[Expression 1]
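Expression (1) itself is not reproduced in this text; the following sketch therefore only assumes one plausible form of the distance, a per-dimension average of absolute differences normalised by the dimensionality 3N, which keeps d on the order of pixel-value differences. The function name is an assumption of this sketch.

import numpy as np

def feature_distance(s0, s):
    """Distance d between the reference feature vector s0 and a test vector s.

    Assumed form only: mean absolute difference over the 3N dimensions
    (Expression (1) in the patent is not reproduced here).
    """
    s0 = np.asarray(s0, dtype=float)
    s = np.asarray(s, dtype=float)
    return float(np.abs(s - s0).sum() / s0.size)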
[0035]
FIG. 5 plots the effect of changes in camera position. When the position is 1 m away and N ≥ 3, the distance d averages 100 or more, showing that the feature has the ability to discriminate positions. However, the distance d becomes small at several points, so it was found that position estimation using information from only one point is difficult.
[0036]
FIG. 6 plots, as an example, the effect on the distance d of camera rotation and shaking and of a person passing at point 1. Averaged over the three points, the effect of shaking was about 10 for N = 1 and 20 for N = 16, and the effect of a person passing was about 30 for N = 1 and 60 for N = 16; these values are smaller than the effect of a 1 m change in camera position. However, since the amount of experimental data is still small, further evaluation experiments are necessary. The above results indicate that the position can be estimated robustly with respect to rotation and shaking of the camera and to changes in the environment, probably because the omnidirectional visual sensor has a wide field of view.
[0037]
If similar images are included in the map, the position cannot be estimated from a single input image. Even in this case, however, the position can be estimated by matching a plurality of images in the map against a plurality of images acquired while traveling. Here, in order to allow matching even when the robot travels at a speed different from that at the time of map creation, nonlinear matching that expands and contracts the input pattern in time is necessary.
[0038]
Zheng et al. (see Reference 4) estimated the global position using dynamic programming (DP). With this method nonlinear matching is possible, but since the start point and the end point need to be known, the robot must travel the same route as the map. Matsumoto et al. (see Reference 2) use a method in which positions where the distance between images is below a certain threshold are taken as candidates, and the candidate positions are narrowed down as the robot travels.
[0039]
On the other hand, in the matching method using Continuous DP (CDP) (see Reference 8), the optimal position can be estimated based on the principle of optimality, and the start and end points of the route do not need to match the map. Furthermore, the applicant of the present application has proposed a position estimation method based on Reference Interval-Free CDP (RIFCDP) so that the position can be estimated even when the robot travels an arbitrary partial section of the map (see Reference 9).
[0040]
RIFCDP is a method proposed for speech recognition (see Reference 7) and can detect, in a spotting manner and in synchronization with the input frames, a frame sequence of arbitrary partial-section length in the map; the position can therefore be estimated even when the robot travels only a partial section of the map or when the traveling speed changes. In practice, matching is possible even if the input pattern is expanded or contracted by a factor of 1/2 to 2 in the forward (monotonic) direction along the time axis. For position estimation, however, the expansion/contraction of the input pattern ranges, for example, from −2 times to 2 times, because the robot may stop or travel in the reverse direction (assuming a limit of twice the speed at map creation).
Therefore, in order to estimate the position, it is necessary to realize a "non-monotonic CDP" that can cope with stops and with input in the reverse direction. Here, a method for realizing this non-monotonic CDP is proposed and its effectiveness is shown.
[0041]
Non-monotonic continuous DP will now be described. One standard pattern Z is a series of feature vectors z_τ, represented by the following expression.
[Expression 2]
[0043]
Here, the feature vector z_τ has dimensionality N and is expressed as follows.
[Equation 3]
[0045]
A similar series of feature vectors can be obtained from the input images as needed. This feature vector series is denoted u_t (0 ≤ t < ∞), and the local distance d(t, τ) between u_t and z_τ is defined by the following expression.
[0046]
[Expression 4]
[0047]
In addition, the accumulated distance when the standard pattern and the input sequence are optimally matched with the point (t, τ) as the end point is denoted S(t, τ). In Non-monotonic continuous DP, S(t, τ) is updated by the following recurrence formulas.
[0048]
[Equation 5]
[0049]
[Formula 6]
[0050]
Here, α is a normalization coefficient (0 ≤ α ≤ 1), and the following two assumptions are made to simplify the equations.
[0051]
(Assumption 1) The standard pattern can be expressed by a one-dimensional series of feature vectors. (The method can be extended to branching and two-dimensional series.)
(Assumption 2) For the change in speed of the input pattern, the inclination pattern shown in FIG. 7A is assumed. However, by changing the range of m in expression (6), various restrictions can be imposed on the speed change of the input pattern (FIG. 7B).
[0052]
Expanding the recurrence formulas of expressions (5) and (6) gives the following formula.
[0053]
[Expression 7]
[0054]
Here, p (k) is defined as follows.
[0055]
[Equation 8]
[0056]
That is, Non-monotonic continuous DP obtains the matching path having the minimum accumulated distance within the shaded area in FIG. 8, with the point (t, τ) as the end point.
[0057]
In the well-known "continuous DP", the end point is (t, T), and the optimum path to it is assumed to increase monotonically in both t and τ in the (t, τ) plane (this depends on how the slope constraints are chosen). In this sense, continuous DP is monotonic in forming the optimum path. In the non-monotonic CDP, by contrast, the local optimum path at (t, τ) is chosen from the three points (t−1, τ−1), (t−1, τ) and (t−1, τ+1), as shown in FIG. 8, so the optimum path in the (t, τ) plane does not necessarily increase monotonically with respect to τ, as illustrated by the solid line in FIG. 8. In this sense, the method proposed here is called "Non-monotonic continuous DP".
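As an illustrative sketch only (an assumption of this description, not the patented formulation itself), one frame-synchronous update consistent with the recurrence described above can be written as follows; the exact slope constraints of expressions (5) and (6) are not reproduced, and the function name and use of NumPy are this sketch's own choices.

import numpy as np

def nonmonotonic_cdp_step(S_prev, d_t, alpha):
    """One frame-synchronous update of the accumulated distance S(t, tau).

    S_prev : array of S(t-1, tau) for tau = 0 .. T-1 (one standard pattern)
    d_t    : array of local distances d(t, tau) between u_t and z_tau
    alpha  : normalisation coefficient (0 <= alpha <= 1)

    Unlike ordinary (monotonic) continuous DP, the predecessor of (t, tau) may
    be (t-1, tau-1), (t-1, tau) or (t-1, tau+1), so the matching path need not
    be monotonic in tau (the robot may stop or move backwards).  The weights
    alpha and (1 - alpha) keep the sum of weights normalised to 1.
    """
    T = len(S_prev)
    S = np.empty(T)
    for tau in range(T):
        lo, hi = max(tau - 1, 0), min(tau + 1, T - 1)
        best_prev = S_prev[lo:hi + 1].min()
        S[tau] = alpha * best_prev + (1.0 - alpha) * d_t[tau]
    return S

Each input frame t would call this once per standard pattern, starting, for example, from S(0, τ) = d(0, τ); thresholding the minimum of S over τ and over the patterns then gives the spotting output of expression (14) described later.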
[0058]
If the weight on d(k, p(k)) in equation (6) is denoted w(k), the sum of the weights w(k) is
[0059]
[Equation 9]
[0060]
Thus, at any t the accumulated distance is normalized so that the sum of the weights w(k) is 1. This allows comparison within the set {S(t, τ) | 1 ≤ τ ≤ T} of accumulated distances over the set of points {(t, τ) | 1 ≤ τ ≤ T} at each t, and the shortest accumulated distances of different standard patterns can also be compared. The sum of the weights is always normalized because α + (1 − α) = 1 in the recurrence formula of expression (6). (This also holds when the normalization coefficient α changes with time.)
FIG. 9 plots the value of the weighting coefficient w(k) as t and α are changed. The closer k is to the present time, the larger the value of the weight w(k). In particular, in a steady state where t is sufficiently large, equation (7) becomes
[Expression 10]
[0062]
and can thus be simplified. At this time, the half-value width w_{1/2}(α) of the weight coefficient w(k) is expressed as follows:
[Expression 11]
[0064]
When defining
[0065]
[Expression 12]
[0066]
Then, α can be determined from the half-value width w_{1/2}(α). Table 1 shows examples of α and the half-value width w_{1/2}(α).
[0067]
[Table 1]
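Expressions (10) to (12) are not reproduced above; the following LaTeX fragment is therefore only an illustrative reconstruction, under the assumption that in the steady state the weight on a past frame decays geometrically with α (as the recurrence of the form α·S + (1 − α)·d implies), of how the half-value width and α determine each other.

w(k) \approx (1-\alpha)\,\alpha^{\,t-k}, \qquad \sum_{k \le t} w(k) = 1
\alpha^{\,w_{1/2}(\alpha)} = \tfrac{1}{2}
\quad\Longrightarrow\quad
w_{1/2}(\alpha) = -\frac{\ln 2}{\ln \alpha}, \qquad \alpha = 2^{-1/w_{1/2}(\alpha)}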
[0068]
Usually, when the change in the input feature vector is small, it is better to retain more of the past history (that is, to increase w_{1/2}(α)). For this purpose, the normalization coefficient α can be changed with time in accordance with the change of the feature vector. As an example, α(t) may be made variable as follows.
[0069]
[Formula 13]
[0070]
Here, u_t' is the differential value of the input feature vector, and α_1 and α_2 are constants determined in consideration of the length T of the standard pattern.
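Since expression (13) is not reproduced above, the following sketch only assumes that α(t) is an affine function of the magnitude of the change of the input feature vector, clipped to the valid range; the functional form, the function name, and how α_1 and α_2 enter it are assumptions of this sketch.

def alpha_t(change, alpha1, alpha2):
    """Time-varying normalisation coefficient alpha(t) (assumed form only).

    `change` stands for the magnitude of the input feature derivative |u_t'|;
    alpha1 and alpha2 are the constants mentioned in the text.  The sign and
    size of alpha2 decide how strongly a changing input shortens or lengthens
    the history used for matching.
    """
    return min(1.0, max(0.0, alpha1 + alpha2 * change))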
[0071]
Here, it is assumed that there are L standard patterns, that the accumulated distance of each pattern is S_v(t, τ) (1 ≤ v ≤ L), that the threshold is h_v, and that the number of frames of each standard pattern is T_v. The output of Non-monotonic continuous DP is the matched standard pattern number v*(t) and the matched frame number τ*(t) within that standard pattern,
[0072]
[Expression 14]
[0073]
and can be expressed as above. Here, Arg is the function that returns the arguments {v(t), τ(t)}, and null represents the empty category.
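As an illustrative sketch of how the output of expression (14) might be computed each frame (names and types are assumptions of this sketch): among the L standard patterns, the pair (v, τ) with the minimum accumulated distance is reported, or null if no pattern falls below its threshold h_v.

from typing import List, Optional, Sequence, Tuple

def spotting_output(S: List[Sequence[float]], h: List[float]) -> Optional[Tuple[int, int]]:
    """Frame-synchronous output (v*(t), tau*(t)) of Non-monotonic continuous DP.

    S : list of L sequences, S[v][tau] = accumulated distance S_v(t, tau)
    h : list of L thresholds h_v
    Returns the matched (standard pattern number, frame number), or None
    (the empty category "null") when no accumulated distance is below its
    threshold h_v.
    """
    best = None
    for v, S_v in enumerate(S):
        tau = min(range(len(S_v)), key=lambda i: S_v[i])
        if S_v[tau] < h[v] and (best is None or S_v[tau] < best[0]):
            best = (S_v[tau], v, tau)
    return None if best is None else (best[1], best[2])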
[0074]
The experimental data were collected by driving the robot under remote control through a passage about 100 m long in an office with a carpeted floor; people were about 2 m away from the robot. The robot speed was at most 50 cm/sec, and images of size 120 × 160 were acquired at 2 Hz. During input, the robot traveled the same route in both the forward and reverse directions, stopping from time to time. In some places it traveled a route deviating by about ±30 cm from the route used at map creation.
[0075]
FIG. 10 shows an example of the results of position estimation from these data using Non-monotonic CDP. For N = 4, the cases α = 1.0 and α = 0.2 are compared. The bold line in FIG. 10A is the route actually traveled. Compared with the case where the position is estimated from only one input image (α = 1.0), position estimation using a history of about five past images (α = 0.2) can be seen to be robust. It can also be seen that the position can be estimated even when the robot stops or travels backward.
[0076]
We have proposed a feature extraction method for global position estimation that uses the 3N-dimensional information obtained by dividing the omnidirectional image horizontally into N regions (N < 10) and integrating the pixel values of each region for each of R, G and B. We have also proposed a matching method using non-monotonic continuous DP, which places few restrictions on the moving speed and direction of the robot, and demonstrated the effectiveness of the method through simple experiments.
[0077]
Reference 1: Maeda, H. Ishiguro and S. Tsuji, Memory-Based Navigation using Omni-directional View in Unknown Environment, IPSJ CV-92, pp. 73-80, 1995.
2: Y. Matsumoto, M. Inaba and H. Inoue, Navigation of a Mobile Robot for Indoor Environments Based on Scene Image Sequence, MSJ Convention Meeting, Vol. A, pp. 481-484, 1995.
3: K. Yamazawa, Y. Yagi and M. Yachida, Omni-directional Imaging with Hyperboloidal Projection, Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Vol. 2, pp. 1029-1034, 1993.
4: J. Y. Zheng and S. Tsuji, Panoramic Representation for Route Recognition by a Mobile Robot, International Journal of Computer Vision, 9:1, pp. 55-76, 1992.
5: S. W. Bang, W. Yu and M. J. Chung, Sensor-Based Local Homing Using Omni-directional Ranging and Intensity Sensing System for Interior Mobile Robot Navigation, Proceedings of the 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 2, pp. 542-548, 1995.
6: A. Held and Ryuichi Oka, Sonar Based Map Acquisition and Exploration in an Unknown Office Environment, 1995.
7: Yoshiaki Ito, Jiro Kiyama, Ryuichi Oka, "Reference Interval-free Continuous DP (RIFCDP) for Spotting by Arbitrary Intervals of Standard Patterns", IEICE Technical Report, SP95-34, June 1995.
8: Ryuichi Oka, "Continuous Speech Recognition Using Continuous DP", Speech Study Group of the Acoustical Society of Japan, S78-20, pp. 145-152, June 1978.
9: Hiroshi Kojima, Yoshiaki Ito, Ryuichi Oka, "Position Identification by Moving Image of Mobile Robot Using Reference Interval-free Continuous DP", IEICE Technical Report, pp. 139-144, July 1995.
10: Hiroshi Kojima, Yoshiaki Ito, Ryuichi Oka, "Position Identification System Using Time Series Images of Mobile Robot Using Reference Interval-free Continuous DP".
11: T. Nishimura, H. Kojima, A. Held and R. Oka, Effect of Time-spatial Size of Motion Image for Localization by Using the Spotting Method, Proc. ICPR '96, 1996.
[0078]
【The invention's effect】
According to the invention of claim 1, since the image acquired by the omnidirectional visual sensor covers all directions (360°) around the shooting position, the same image is obtained regardless of the orientation of the moving body, for example a mobile robot. Therefore, even if the mobile robot travels along a route different from the route on which the standard pattern was acquired, the position can be detected from the matching image.
[0079]
In the invention of claim 2, since the pattern matching is performed on image features, the amount of data used for the comparison can be reduced and the position detection time can be shortened.
[0080]
In the invention of claim 3, the detection accuracy is improved also for detecting the position of a single point on the specific route on which the standard pattern was taken.
[Brief description of the drawings]
FIG. 1 is an explanatory diagram showing an example of a topology map.
FIG. 2 is an explanatory diagram showing details of feature extraction processing;
FIG. 3 is a perspective view schematically showing the appearance of a mobile robot provided with an omnidirectional visual sensor.
FIG. 4 is a photograph showing an example of a captured omnidirectional image on a display.
FIG. 5 is an explanatory diagram showing the influence of a change in camera position.
FIG. 6 is an explanatory diagram showing the influence of a robot shake and a person.
FIG. 7 is an explanatory diagram showing an example of an inclination pattern of non-monotonic continuous DP.
FIG. 8 is an explanatory diagram showing a path search range of non-monotonic continuous DP.
FIG. 9 is an explanatory diagram showing values of weighting factors.
FIG. 10 is an explanatory diagram showing experimental results.
[Explanation of symbols]
1 Hyperboloid mirror 2 Camera

Claims (3)

  1. An omnidirectional visual sensor having a field of view in all directions is mounted in advance on a moving body,
    the moving body is moved on a specific route, and images of the environment around the moving body are acquired in time series by the omnidirectional visual sensor as a standard pattern,
    the acquired images are associated with positions in advance,
    when the moving body is moving along an arbitrary route, the acquired image of the omnidirectional visual sensor is compared with the images of the standard pattern in the form of feature amounts of the images, and
    a position detection method using an omnidirectional visual sensor, wherein, when a matching image is obtained, the position associated with the matching standard pattern image is used as the detection result.
  2. The position detection method using an omnidirectional visual sensor according to claim 1, wherein an image feature is extracted from each image acquired by the omnidirectional visual sensor when the moving body moves on the specific route and the image feature is used as the standard pattern, and wherein, when the acquired image of the omnidirectional visual sensor is compared with the image of the standard pattern, an image feature is extracted from the acquired image of the omnidirectional visual sensor and compared with the image feature of the standard pattern.
  3.   The position detection method using an omnidirectional visual sensor according to claim 2, wherein the comparison of the image features is performed by a non-monotonic continuous DP matching method.
JP32283396A 1996-12-03 1996-12-03 Position detection method using omnidirectional vision sensor Expired - Fee Related JP3641335B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP32283396A JP3641335B2 (en) 1996-12-03 1996-12-03 Position detection method using omnidirectional vision sensor


Publications (2)

Publication Number Publication Date
JPH10160463A JPH10160463A (en) 1998-06-19
JP3641335B2 true JP3641335B2 (en) 2005-04-20

Family

ID=18148121

Family Applications (1)

Application Number Title Priority Date Filing Date
JP32283396A Expired - Fee Related JP3641335B2 (en) 1996-12-03 1996-12-03 Position detection method using omnidirectional vision sensor

Country Status (1)

Country Link
JP (1) JP3641335B2 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4448024B2 (en) * 2002-05-31 2010-04-07 富士通株式会社 Remote operation robot and robot self-position identification method
KR100966875B1 (en) 2006-09-26 2010-06-29 삼성전자주식회사 Localization method for robot by omni-directional image
KR100941418B1 (en) 2007-03-20 2010-02-11 삼성전자주식회사 A localization method of moving robot
KR100926760B1 (en) 2007-12-17 2009-11-16 삼성전자주식회사 Location recognition and mapping method of mobile robot
KR100988568B1 (en) 2008-04-30 2010-10-18 삼성전자주식회사 Robot and method for building map of the same
JP5505723B2 (en) * 2010-03-31 2014-05-28 アイシン・エィ・ダブリュ株式会社 Image processing system and positioning system
US9794519B2 (en) 2010-12-20 2017-10-17 Nec Corporation Positioning apparatus and positioning method regarding a position of mobile object
KR101678203B1 (en) * 2015-09-08 2016-12-07 (주)에스엠인스트루먼트 Acoustic Camera

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0854927A (en) * 1994-08-10 1996-02-27 Kawasaki Heavy Ind Ltd Landmark deciding method and device
JPH08178654A (en) * 1994-12-27 1996-07-12 Mitsubishi Electric Corp Orientation device
JPH08322833A (en) * 1995-05-31 1996-12-10 Shimadzu Corp X-ray diaphragm device



Legal Events

A977 Report on retrieval: JAPANESE INTERMEDIATE CODE: A971007; effective date: 2004-06-11
A131 Notification of reasons for refusal: JAPANESE INTERMEDIATE CODE: A131; effective date: 2004-06-25
A521 Written amendment: JAPANESE INTERMEDIATE CODE: A523; effective date: 2004-08-23
A131 Notification of reasons for refusal: JAPANESE INTERMEDIATE CODE: A131; effective date: 2004-09-14
A521 Written amendment: JAPANESE INTERMEDIATE CODE: A523; effective date: 2004-11-15
RD04 Notification of resignation of power of attorney: JAPANESE INTERMEDIATE CODE: A7424; effective date: 2004-11-15
TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model): JAPANESE INTERMEDIATE CODE: A01; effective date: 2004-12-24
A61 First payment of annual fees (during grant procedure): JAPANESE INTERMEDIATE CODE: A61; effective date: 2005-01-21
R150 Certificate of patent or registration of utility model: JAPANESE INTERMEDIATE CODE: R150
FPAY Renewal fee payment (event date is renewal date of database): payment until 2008-01-28; year of fee payment: 3
FPAY Renewal fee payment: payment until 2009-01-28; year of fee payment: 4
FPAY Renewal fee payment: payment until 2010-01-28; year of fee payment: 5
FPAY Renewal fee payment: payment until 2011-01-28; year of fee payment: 6
FPAY Renewal fee payment: payment until 2012-01-28; year of fee payment: 7
LAPS Cancellation because of no payment of annual fees