US20110205338A1 - Apparatus for estimating position of mobile robot and method thereof - Google Patents
- Publication number
- US20110205338A1 (application Ser. No. 12/929,414)
- Authority
- US
- United States
- Prior art keywords
- point
- patch
- points
- edge
- cloud data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20164—Salient point detection; Corner detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Definitions
- One or more embodiments relate to an apparatus and method for estimating a position of a mobile robot, and particularly, to an apparatus and method for estimating a position of a mobile robot, in which the mobile robot estimates its own position by use of a distance sensor.
- the range image is formed of a set of three-dimensional (3D) data points and represents the free surface of an object at different points of view.
- ICP has garnered a large amount of interest in the machine vision field since its inception.
- the purpose of ICP is to search for a transformation matrix capable of matching a range data set of a range data coordinate system to a model data set in a mathematical manner.
- Such an ICP scheme offers high accuracy, but requires a large amount of matching time, especially when the amount of data to be processed is large, for example, in 3D plane matching.
- an apparatus and method for estimating a position capable of reducing the amount of data to be computed for position estimation while maintaining the accuracy of the position estimation, thereby reducing the time required for position estimation.
- an apparatus estimating a position of a mobile robot including a range data acquisition unit configured to acquire three-dimensional (3D) point cloud data, a storage unit configured to store a plurality of patches, each stored patch including points around a feature point which is extracted from previously acquired 3D point cloud data, and a position estimating unit configured to estimate the position of the mobile robot by tracking the plurality of stored patches from the acquired 3D point cloud data.
- a range data acquisition unit configured to acquire three-dimensional (3D) point cloud data
- a storage unit configured to store a plurality of patches, each stored patch including points around a feature point which is extracted from previously acquired 3D point cloud data
- a position estimating unit configured to estimate the position of the mobile robot by tracking the plurality of stored patches from the acquired 3D point cloud data.
- the apparatus may further include a patch generating unit, configured to extract at least one feature point from the previously acquired 3D point cloud data, generate a patch including the at least one feature point and points around the extracted feature point and store the generated patch in the storage unit as a stored patch.
- a patch generating unit configured to extract at least one feature point from the previously acquired 3D point cloud data, generate a patch including the at least one feature point and points around the extracted feature point and store the generated patch in the storage unit as a stored patch.
- the patch generating unit may calculate normal vectors with respect to respective points of the previously acquired 3D point cloud data, convert the normal vector to an RGB image by setting 3D spatial coordinates (x, y, z) forming the normal vector to individual RGB values, convert the converted RGB image to a gray image, extract corner points from the gray image by use of a corner extraction algorithm, extract a feature point from the extracted corner points, and generate the patch as including the extracted feature point and points around the extracted feature point.
- the patch generating unit may store the generated patch together with position information of the extracted feature point of the generated patch.
- the patch generating unit may store the points forming the generated patch in the storage unit such that they are divided into edge points forming an edge and normal points not forming an edge.
- the position estimating unit may calculate normal vectors with respect to respective points of the 3D point cloud data, divide the respective points into edge points forming an edge and normal points not forming an edge by use of the normal vectors, and track the stored patch from the 3D point cloud data by use of an edge-based ICP algorithm in which the edge point of the stored patch is matched to the edge point of the 3D point cloud data and the normal point of the stored patch is matched to one of the edge point and the normal point of the 3D point cloud data without discriminating between the edge point and the normal point of the 3D point cloud data.
- the position estimating unit may match the edge point of the stored patch to a closest edge point of the 3D point cloud data, and match the normal point of the stored patch to a closest point of the 3D point cloud data.
- a method of estimating a position of a mobile robot, including acquiring three-dimensional (3D) point cloud data, and estimating the position of the mobile robot by tracking a plurality of stored patches from the acquired 3D point cloud data, the plurality of stored patches each including respective feature points and respective points around each respective feature point extracted from previously acquired 3D point cloud data.
- FIG. 1 illustrates an apparatus for estimating a position of a mobile robot, according to one or more embodiments
- FIG. 2 illustrates a method of estimating a position of a mobile robot, according to one or more embodiments
- FIG. 3 illustrates an estimating of a position by use of a registered patch, according to one or more embodiments
- FIG. 4 illustrates a method of generating a patch, according to one or more embodiments
- FIG. 5 illustrates a method of estimating a position based on a patch tracking, according to one or more embodiments
- FIG. 6A illustrates three-dimensional (3D) point cloud data acquired by a position estimating apparatus, for example, and a previously registered patch, according to one or more embodiments;
- FIG. 6B illustrates patch tracking through a general iterative closest point (ICP).
- FIG. 6C illustrates patch tracking through an edge-based ICP, according to one or more embodiments.
- FIG. 1 illustrates an apparatus for estimating a position of a mobile robot, according to one or more embodiments.
- a position estimating apparatus 100 includes a moving unit 110 , a sensor unit 120 , a range data acquisition unit 130 , a control unit 140 , and a storage unit 150 , for example.
- the following description will be made on the assumption that the position estimating apparatus 100 is a mobile robot, noting that alternative embodiments are equally available.
- the moving unit 110 may include moving machinery such as a plurality of wheels for moving the mobile robot and a driving source for providing a driving force for the moving machinery.
- the sensor unit 120 is mounted on the mobile robot 100 to sense the amount of movement of the mobile robot 100 .
- the sensor unit 120 may include an encoder or a gyrosensor.
- the gyrosensor senses the rotation angle of the mobile robot, and the encoder enables a travelling path of the mobile robot 100 to be recognized.
- the moving distance and direction of the mobile robot 100 achieved by the encoder are integrated to estimate the current position and directional angle of the mobile robot 100 on a two-dimensional (2D) coordinate system.
- the encoder provides precise measurement over a short path, but measurement error accumulates as the path grows longer, since the travelled distance must be integrated.
- the sensor unit 120 may further include an infrared sensor, a laser sensor or an ultrasonic sensor for sensing obstacle related information used to build an obstacle map.
- the range data acquisition unit 130 measures range data of a three-dimensional (3D) environment by processing scan data that is obtained by scanning a 3D environment.
- the range data acquisition unit 130 may include a sensor system using laser structured light for recognizing a 3D environment to sense and measure range data.
- the range data acquisition unit 130 includes a 3D range sensor to acquire 3D range information R[r, θ, φ].
- the range data acquisition unit 130 converts the 3D range information R[r, θ, φ] to a 3D point cloud represented as P[x, y, z], where x is equal to r·cos θ·cos φ, y is equal to r·cos θ·sin φ, and z is equal to r·sin θ, for example.
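The spherical-to-Cartesian conversion above can be sketched as follows. This is a hedged sketch: the Greek symbols are garbled in the text, so the usual convention (θ as elevation, φ as azimuth, both in radians) is assumed.

```python
import math

def range_to_point(r, theta, phi):
    """Convert one 3D range reading R[r, theta, phi] into a Cartesian
    point P[x, y, z]; theta is taken as elevation and phi as azimuth
    (an assumed convention)."""
    x = r * math.cos(theta) * math.cos(phi)
    y = r * math.cos(theta) * math.sin(phi)
    z = r * math.sin(theta)
    return (x, y, z)
```

Applying this conversion to every reading of a scan yields the 3D point cloud that the later patch generation and tracking steps operate on.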
- the control unit 140 is configured to control an overall operation of the mobile robot, for example.
- the control unit 140 includes a patch generating unit 142 , a position estimating unit 144 , and a path generating unit 146 , for example.
- the patch generating unit 142 extracts at least one feature point from 3D point cloud data, generates a patch including points around the extracted feature point, and stores the generated patch in the storage unit 150 .
- a feature point and a feature point descriptor are generated from an image obtained through an image sensor by use of a feature point extracting algorithm, such as the scale-invariant feature transform (SIFT), maximally stable extremal regions (MSER), or the Harris corner detector, and are used for position estimation.
- the patch including points around the feature point is used for position estimation.
- the patch may be provided in various 3D shapes.
- the patch may be formed of points that are included in a regular hexahedron having a feature point of the 3D point cloud data as the center.
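A sketch of patch extraction under this regular-hexahedron (cube) definition; the cube's half-width is a hypothetical parameter, not fixed by the text:

```python
def extract_patch(cloud, center, half_width):
    """Collect the points of a 3D point cloud that fall inside an
    axis-aligned cube centered on a feature point (half_width is an
    illustrative parameter)."""
    cx, cy, cz = center
    return [
        (x, y, z)
        for (x, y, z) in cloud
        if abs(x - cx) <= half_width
        and abs(y - cy) <= half_width
        and abs(z - cz) <= half_width
    ]
```

Only the points of this small neighborhood, rather than the full cloud, are later matched by the tracking step, which is where the reduction in computation comes from.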
- the patch generating unit 142 calculates normal vectors with respect to respective points of 3D point cloud data.
- the patch generating unit 142 converts the normal vector to an RGB image by setting 3D spatial coordinates (x, y, z), forming the normal vector, to individual RGB values.
- the patch generating unit 142 converts the converted RGB image to a gray image, extracts corner points from the gray image by use of a corner extraction algorithm, and extracts a feature point from the extracted corner points.
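The normal-to-image conversion described above can be sketched per point as follows, assuming unit normals with components in [-1, 1] mapped linearly to 8-bit channels, and standard luma weights for the gray conversion; neither mapping is fixed by the text.

```python
def normal_to_rgb(n):
    """Map a unit normal (nx, ny, nz) with components in [-1, 1] to
    8-bit RGB values, one channel per spatial component."""
    return tuple(int(round((c + 1.0) * 127.5)) for c in n)

def rgb_to_gray(rgb):
    """Standard luma conversion (an assumption; the text does not
    specify the gray-conversion weights)."""
    r, g, b = rgb
    return int(round(0.299 * r + 0.587 * g + 0.114 * b))
```

Running a corner detector on the resulting gray image then finds pixels where the surface normals, and hence the gray values, change sharply.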
- the feature point represents a point capable of specifying a predetermined shape, such as an edge or a corner of an object.
- the feature point may be a point positioned in the middle of the generated patch.
- the patch generating unit 142 may store the patch together with position information of the feature point of the patch.
- the patch generating unit 142 may store the points forming the patch in the storage unit 150 such that they are divided into edge points forming an edge and normal points not forming an edge.
- the position estimating unit 144 may estimate the position of the mobile robot 100 by use of a standard value corresponding to the position and directional angle from which it starts.
- the estimating of the position of the mobile robot 100 may represent estimating the position and directional angle of the mobile robot 100 in a 2D plane.
- the patch including a feature point existing on a map may serve as a standard in position of the mobile robot. Accordingly, position information of the mobile robot 100 may include the position and directional angle with respect to a feature point recognized by the mobile robot 100 .
- the position estimating unit 144 estimates and recognizes the position of the mobile robot 100 by use of comprehensive information including odometry information, angular velocity, and acceleration acquired by the moving unit 110 and the sensor unit 120 .
- the position estimating unit 144 may perform position recognition while simultaneously building up a map through simultaneous localization and mapping (SLAM), using the estimated position as an input.
- SLAM represents an algorithm which simultaneously performs localizing of a mobile robot and map building by repeating a process of building up a map of an environment of the mobile robot at a predetermined position and determining the next position of the mobile robot after travelling, based on the built up map.
- the position estimating unit 144 may use a Kalman Filter to extract new range information integrated with encoder information and gyrosensor information.
- a Kalman Filter includes predicting, in which the position is estimated based on a model, and updating, in which the estimated value is corrected through a sensor value.
- the position estimating unit 144 applies a preset model to a previously predicted value, thereby estimating an output for a given input.
- the position estimating unit 144 may predict the current position by use of previous position information and newly acquired information from the sensor unit 120 .
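The predict/update cycle can be illustrated with a minimal one-dimensional Kalman filter. This is illustrative only; the filter in the text fuses multidimensional encoder and gyrosensor data.

```python
def kalman_1d(x, p, u, z, q, r):
    """One predict/update cycle of a 1D Kalman filter: predict the
    state from a motion input u, then correct it with a sensor
    reading z. q and r are process and measurement noise variances."""
    # Predict: apply the motion model and grow the uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Update: blend the prediction with the measurement.
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

The update step always shrinks the uncertainty p, which is why correcting the odometry prediction with patch-tracking measurements counters the encoder's accumulating error.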
- the position estimating unit 144 may keep track of a plurality of patches from 3D point cloud data, which is newly obtained based on the predicted position, and may estimate a more accurate position by use of the tracked information.
- Previously stored patches each have a relative coordinate system and contain information used for conversion between a relative coordinate system and an absolute coordinate system.
- the position of the stored patch is converted to a relative position based on a coordinate system of the robot by use of the conversion information and the predicted position information of the robot obtained during the above predicting process, and the difference between the relative position of the stored patch and the position of the tracked patch is calculated, thereby estimating the position of the robot.
- the position estimating unit 144 may remove an erroneously estimated result among the tracked patches.
- the position estimating unit 144 may use random sample consensus (RANSAC) or joint compatibility branch and bound (JCBB).
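A minimal RANSAC sketch of the outlier rejection mentioned above, estimating a 2D translation from (stored, tracked) point pairs; the iteration count, tolerance, and translation-only motion model are illustrative assumptions, not taken from the text.

```python
import random

def ransac_translation(pairs, iters=100, tol=0.1, seed=0):
    """Keep the largest set of (stored, tracked) point pairs that agree
    on a single 2D translation; disagreeing pairs are treated as
    erroneous tracking results and dropped."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        a, b = rng.choice(pairs)               # one pair hypothesizes a translation
        dx, dy = b[0] - a[0], b[1] - a[1]
        inliers = [
            (p, q) for (p, q) in pairs
            if abs((q[0] - p[0]) - dx) < tol and abs((q[1] - p[1]) - dy) < tol
        ]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

JCBB would instead test the joint compatibility of whole sets of matches, but the goal is the same: a mistracked patch must not corrupt the pose estimate.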
- the position estimating unit 144 may be provided in various structures capable of performing position recognition and map building.
- the position estimating unit 144 may track a patch as follows.
- the patch may include points around a feature point which is extracted from previously acquired 3D point cloud data.
- the position estimating unit 144 may calculate normal vectors with respect to respective points of the 3D point cloud data, and use the normal vectors to divide the respective points of the 3D point cloud data into edge points forming an edge and normal points not forming an edge.
- the position estimating unit 144 tracks the patch from the 3D point cloud data by use of an edge-based ICP algorithm in which the edge point of the patch is matched to the edge point of the 3D point cloud data and the normal point of the patch is matched to one of the edge point and the normal point of the 3D point cloud data without discriminating between the edge point and the normal point of the 3D point cloud data.
- the position estimating unit 144 matches the edge point of the patch to the closest edge point of the 3D point cloud data, and matches the normal point of the patch to the closest point of the 3D point cloud data.
- the path generating unit 146 generates a path by use of position information of the mobile robot 100 that is recognized by the position estimating unit 144 .
- the storage unit 150 may store operating systems, applications, and data that are needed for the operation of the position estimating apparatus 100 .
- the storage unit 150 may include a patch storage unit 152 to store a plurality of patches each including points around a feature point, which is extracted from previously acquired 3D point cloud data.
- FIG. 2 illustrates a method of estimating a position of a mobile robot, according to one or more embodiments.
- the position estimating apparatus 100 acquires 3D point cloud data ( 210 ).
- the position estimating apparatus 100 estimates the position of the mobile robot 100 by tracking a plurality of patches from acquired 3D point cloud data.
- the plurality of patches each include points around a feature point which is extracted from previously acquired 3D point cloud data ( 220 ).
- the position estimating apparatus 100 may register additional patches by extracting a feature point from the acquired 3D point data and generating a patch including points around the extracted feature point.
- FIG. 3 illustrates an estimating of a position by use of a registered patch, according to one or more embodiments.
- a frame N 1 310 represents previously acquired 3D point cloud data
- a frame N 2 320 represents newly acquired current 3D point cloud data.
- the position estimating apparatus 100 extracts a feature point from a region 311 of the frame N 1 310 , and stores a patch 333 including points around the extracted feature point in the patch storage unit 152 .
- the patch storage unit 152 may store a plurality of patches 331 , 332 , and 333 including the patch 333 through registration.
- when the position estimating apparatus 100 acquires the frame N 2 320 , the registered patches 331 , 332 , and 333 are tracked from the frame N 2 320 . For example, if the patch 333 is tracked from a region 321 of the frame N 2 320 , a relative position of the patch 333 on the frame N 2 320 is identified. Accordingly, the position estimating apparatus 100 estimates its absolute position by use of the relative position of the patch 333 identified on the frame N 2 320 .
- FIG. 4 illustrates a method of generating a patch, according to one or more embodiments. Though the patch generating method will be described with reference to FIG. 4 in conjunction with FIG. 1 , embodiments are not intended to be limited to the same.
- the patch generating unit 142 calculates normal vectors with respect to respective points of previously acquired 3D point cloud data ( 410 ).
- the patch generating unit 142 converts each normal vector to an RGB image by setting 3D spatial coordinates (x, y, z), forming the normal vector, to individual RGB values ( 420 ). Accordingly, R (Red), G (Green) and B (Blue) of the generated RGB image each represent respective directions of the normal vector on each point.
- the patch generating unit 142 converts the converted RGB image to a gray image ( 430 ).
- the patch generating unit 142 extracts corner points from the gray image by use of a corner extraction algorithm ( 440 ). Corner points may be extracted from the gray image through various methods such as the generally known Harris corner detection. Due to noise, a predetermined point may be erroneously extracted as an actual corner point. Accordingly, the patch generating unit 142 determines whether the extracted corner point is an eligible feature point, and extracts a corner point determined as a feature point ( 450 ). For example, the patch generating unit 142 may determine that the extracted corner point is not an eligible feature point if points around the extracted corner point have a gradient of normal vector below a predetermined level, or determine that the extracted corner point is an eligible feature point if points around the extracted corner point have a gradient of normal vector meeting a predetermined level.
- Points around the extracted feature point are determined as a patch and stored in the storage unit 150 .
- the patch may be stored together with position information about the extracted feature point.
- the patch is stored such that the points forming the patch are divided into edge points forming an edge and normal points not forming an edge. This reduces the calculation time required for performing an edge based ICP that is used to track a patch.
- FIG. 5 illustrates a method of estimating the position based on patch tracking, according to one or more embodiments. Though the position estimating method will be described with reference to FIG. 5 in conjunction with FIG. 1 , embodiments are not intended to be limited to the same.
- the position estimating unit 144 calculates normal vectors at respective points of 3D point cloud data ( 510 ).
- the position estimating unit 144 divides the respective points of the 3D point cloud data into edge points forming an edge and normal points not forming an edge by use of the normal vectors ( 520 ). For example, points having a change in normal vector exceeding a predetermined level at a predetermined place are determined as edge points.
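A sketch of this classification step, labeling a point an edge point when its normal deviates from a neighboring normal by more than a threshold angle; the 30-degree value is an assumption, since the text only says "exceeding a predetermined level".

```python
import math

def is_edge_point(n, neighbor_normals, angle_thresh_deg=30.0):
    """Return True when the unit normal n differs from any neighboring
    unit normal by more than the threshold angle (in degrees)."""
    def angle(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        dot = max(-1.0, min(1.0, dot))   # clamp against rounding error
        return math.degrees(math.acos(dot))
    return any(angle(n, m) > angle_thresh_deg for m in neighbor_normals)
```

Points on a flat surface share nearly parallel normals and become normal points, while points where two surfaces meet become edge points.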
- the position estimating unit 144 keeps tracking a patch from 3D point cloud data by use of an edge-based ICP.
- the edge point of the patch is matched to the edge point of the 3D point cloud data and the normal point of the patch is matched to one of the edge point and the normal point of the 3D point cloud data without discriminating between the edge point and the normal point of the 3D point cloud data.
- the position estimating unit 144 divides points forming the tracked patches into edge points and normal points and performs matching between the tracked patch and the 3D point cloud data.
- Points forming the patch may be divided into edge points and normal points similar to the method of dividing the points of the 3D point cloud data into edge points and normal points.
- a process of distinguishing the points forming the patch between edge points and normal points may be omitted in the position estimating, thereby reducing the time required for the position estimating.
- FIG. 6A illustrates an example of 3D point cloud data acquired by a position estimating apparatus and a previously registered patch, according to one or more embodiments.
- reference numeral 610 denotes 3D point cloud data schematically represented in a 2D form.
- Reference numeral 620 denotes a previously registered patch that is schematically represented in a 2D form.
- Points 611 , 612 , and 613 of the 3D point cloud data 610 represent edge points, and the remaining points represent normal points.
- a point 621 of the patch 620 represents an edge point and the remaining points represent normal points.
- FIG. 6B illustrates a patch tracking by use of a general ICP.
- in a general ICP, matching between acquired 3D point cloud data and previously registered 3D point cloud data is performed without discriminating between a normal point and an edge point. Accordingly, as shown in step 1 of FIG. 6B , if a general ICP is applied to matching between 3D point cloud data and a patch, the matching is performed based on the closest point between the 3D point cloud data and the patch. This leads to the erroneous matching shown in step 2 of FIG. 6B . Such erroneous matching may be caused by partial occlusion of the 3D point cloud data due to the direction of a camera.
- FIG. 6C illustrates a patch tracking by use of an edge-based ICP, according to one or more embodiments.
- normal vectors are calculated at respective points of acquired 3D point cloud data and respective points forming a patch.
- the points forming the acquired 3D point cloud data and the points forming the patch are divided into edge points forming an edge and normal points not forming an edge by use of the normal vector.
- the position estimating unit 144 matches the edge point of the patch to the edge point of the 3D point cloud data and matches the normal point of the patch to one of the edge point and the normal point of the 3D point cloud data without discriminating between the edge point and the normal point of the 3D point cloud data.
- in step 1 , edge points are first matched to their closest counterparts, and in step 2 , normal points are then matched to their closest points.
- the edge points of the patch are matched to the edge points of the 3D point cloud data.
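One correspondence step of this edge-based matching can be sketched as follows: patch edge points may pair only with cloud edge points, while patch normal points pair with the closest cloud point of either kind. The brute-force nearest-neighbor search is for illustration; a real ICP loop would alternate this step with a pose update.

```python
def match_points(patch_edges, patch_normals, cloud_edges, cloud_points):
    """Build (patch point, cloud point) correspondences: edge points are
    restricted to cloud edge points; normal points take the closest
    cloud point regardless of its kind."""
    def closest(p, candidates):
        return min(candidates, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))
    pairs = [(p, closest(p, cloud_edges)) for p in patch_edges]
    pairs += [(p, closest(p, cloud_points)) for p in patch_normals]
    return pairs
```

Restricting edge points to edge candidates is what prevents the slide into the erroneous flat-surface matches of FIG. 6B.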
- the edge-based ICP algorithm provides a more accurate matching result than a general ICP algorithm. Accordingly, the amount of data calculation is reduced while maintaining the accuracy of the position estimating, thereby reducing the time required for the position estimating.
- embodiments can also be implemented through computer readable code/instructions in/on a non-transitory medium, e.g., a computer readable medium, to control at least one processing device, such as a processor or computer, to implement any above described embodiment.
- the medium can correspond to any defined, measurable, and tangible structure permitting the storing and/or transmission of the computer readable code.
- the media may also include, e.g., in combination with the computer readable code, data files, data structures, and the like.
- One or more embodiments of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD ROM disks and DVDs; magneto-optical media such as optical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like.
- Computer readable code may include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter, for example.
- the media may also be a distributed network, so that the computer readable code is stored and executed in a distributed fashion.
- the processing element could include a processor or a computer processor, and processing elements may be distributed and/or included in a single device.
- the computer-readable media may also be embodied in at least one application specific integrated circuit (ASIC) or Field Programmable Gate Array (FPGA), which executes (processes like a processor) program instructions.
Abstract
An apparatus and method for estimating the position of a mobile robot, capable of reducing the time required to estimate the position, are provided. The mobile robot position estimating apparatus includes a range data acquisition unit configured to acquire three-dimensional (3D) point cloud data, a storage unit configured to store a plurality of patches, each including points around a feature point which is extracted from previously acquired 3D point cloud data, and a position estimating unit configured to estimate the position of the mobile robot by tracking the plurality of patches from the acquired 3D point cloud data.
Description
- This application claims the benefit under 35 U.S.C. §119(a) of Korean Patent Application No. 10-2010-0016812, filed on Feb. 24, 2010, the disclosure of which is incorporated by reference in its entirety for all purposes.
- 1. Field
- One or more embodiments relate to an apparatus and method for estimating a position of a mobile robot, and particularly, to an apparatus and method for estimating a position of a mobile robot, in which the mobile robot estimates its own position by use of a distance sensor.
- 2. Description of the Related Art
- With the development of new technologies in optics and electronics, more cost-effective and accurate laser scanning systems have been implemented. According to a laser scanning system, depth information of an object may be directly obtained, thereby simplifying range image analysis and providing a wide range of applications. The range image is formed of a set of three-dimensional (3D) data points and represents the free surface of an object at different points of view.
- In recent years, the registration of a range image has been a widely known problem in machine vision. There have been numerous suggested approaches to solve this range image registration problem, including scatter matrices, geometric histograms, iterative closest point (ICP), graph matching, external points, range-based searching, and interactive methods. Such registration schemes are applied to various fields such as object recognition, motion estimation, and scene understanding.
- As a representative example of the registration scheme, ICP has garnered a large amount of interest in the machine vision field since its inception. The purpose of ICP is to search for a transformation matrix capable of matching a range data set of a range data coordinate system to a model data set in a mathematical manner. Such an ICP scheme offers high accuracy, but requires a large amount of matching time, especially when the amount of data to be processed is large, for example, in 3D plane matching.
- According to one or more embodiments, there is provided an apparatus and method for estimating a position, capable of reducing the amount of data to be computed for position estimation while maintaining the accuracy of the position estimation, thereby reducing the time required for position estimation.
- According to one or more embodiments, there is provided an apparatus estimating a position of a mobile robot, the apparatus including a range data acquisition unit configured to acquire three-dimensional (3D) point cloud data, a storage unit configured to store a plurality of patches, each stored patch including points around a feature point which is extracted from previously acquired 3D point cloud data, and a position estimating unit configured to estimate the position of the mobile robot by tracking the plurality of stored patches from the acquired 3D point cloud data.
- The apparatus may further include a patch generating unit, configured to extract at least one feature point from the previously acquired 3D point cloud data, generate a patch including the at least one feature point and points around the extracted feature point and store the generated patch in the storage unit as a stored patch.
- The patch generating unit may calculate normal vectors with respect to respective points of the previously acquired 3D point cloud data, convert the normal vector to an RGB image by setting 3D spatial coordinates (x, y, z) forming the normal vector to individual RGB values, convert the converted RGB image to a gray image, extract corner points from the gray image by use of a corner extraction algorithm, extract a feature point from the extracted corner points, and generate the patch as including the extracted feature point and points around the extracted feature point.
- The patch generating unit may store the generated patch together with position information of the extracted feature point of the generated patch.
- The patch generating unit may store points forming the generated patch in the storage unit such that the points forming the patch are stored as divided points of edge points forming an edge and normal points not forming an edge.
- The position estimating unit may calculate normal vectors with respect to respective points of the 3D point cloud data, divide the respective points into edge points forming an edge and normal points not forming an edge by use of the normal vectors, and track the stored patch from the 3D point cloud data by use of an edge-based ICP algorithm in which the edge point of the stored patch is matched to the edge point of the 3D point cloud data and the normal point of the stored patch is matched to one of the edge point and the normal point of the 3D point cloud data without discriminating between the edge point and the normal point of the 3D point cloud data.
- The position estimating unit may match the edge point of the stored patch to a closest edge point of the 3D point cloud data, and match the normal point of the stored patch to a closest point of the 3D point cloud data.
- According to one or more embodiments, there is provided a method of estimating a position of a mobile robot, the method including acquiring three-dimensional (3D) point cloud data, and estimating the position of the mobile robot by tracking a plurality of stored patches from the acquired 3D point cloud data, the plurality of stored patches each including respective feature points and respective points around each respective feature point extracted from previously acquired 3D point cloud data.
- Additional aspects and/or advantages will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of one or more embodiments of the present invention.
- These and/or other aspects and advantages will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
- FIG. 1 illustrates an apparatus for estimating a position of a mobile robot, according to one or more embodiments;
- FIG. 2 illustrates a method of estimating a position of a mobile robot, according to one or more embodiments;
- FIG. 3 illustrates an estimating of a position by use of a registered patch, according to one or more embodiments;
- FIG. 4 illustrates a method of generating a patch, according to one or more embodiments;
- FIG. 5 illustrates a method of estimating a position based on a patch tracking, according to one or more embodiments;
- FIG. 6A illustrates three-dimensional (3D) point cloud data acquired by a position estimating apparatus, for example, and a previously registered patch, according to one or more embodiments;
- FIG. 6B illustrates patch tracking through a general iterative closest point (ICP); and
- FIG. 6C illustrates patch tracking through an edge-based ICP, according to one or more embodiments.
- Reference will now be made in detail to one or more embodiments, illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, embodiments of the present invention may be embodied in many different forms and should not be construed as being limited to embodiments set forth herein. Accordingly, embodiments are merely described below, by referring to the figures, to explain aspects of the present invention.
- FIG. 1 illustrates an apparatus for estimating a position of a mobile robot, according to one or more embodiments.
- A position estimating apparatus 100 includes a moving unit 110, a sensor unit 120, a range data acquisition unit 130, a control unit 140, and a storage unit 150, for example. Hereinafter, the following description will be made on the assumption that the position estimating apparatus 100 is a mobile robot, noting that alternative embodiments are equally available.
- The moving unit 110 may include moving machinery such as a plurality of wheels for moving the mobile robot and a driving source for providing a driving force for the moving machinery.
- The sensor unit 120 is mounted on the mobile robot 100 to sense the amount of movement of the mobile robot 100. To this end, the sensor unit 120 may include an encoder or a gyrosensor. The gyrosensor senses the rotation angle of the mobile robot, and the encoder enables a travelling path of the mobile robot 100 to be recognized. In detail, the moving distance and direction of the mobile robot 100 obtained by the encoder are integrated to estimate the current position and directional angle of the mobile robot 100 on a two-dimensional (2D) coordinate system. Conventionally, the encoder provides precise measurement over a short path, but measurement errors accumulate as the path over which the integration is performed lengthens. Meanwhile, the sensor unit 120 may further include an infrared sensor, a laser sensor or an ultrasonic sensor for sensing obstacle related information used to build an obstacle map.
- The range data acquisition unit 130 measures range data of a three-dimensional (3D) environment by processing scan data that is obtained by scanning the 3D environment. The range data acquisition unit 130 may include a sensor system using laser structured light for recognizing a 3D environment to sense and measure range data.
- In an embodiment, the range data acquisition unit 130 includes a 3D range sensor to acquire 3D range information R[r, θ, ψ]. The range data acquisition unit 130 converts the 3D range information R[r, θ, ψ] to a 3D point cloud represented as P[x, y, z], where x is equal to r*cosψ*cosθ, y is equal to r*cosψ*sinθ, and z is equal to r*sinψ, for example.
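As a hedged illustration, the spherical-to-Cartesian conversion above can be written directly from the stated formulas; the function name below is invented for this sketch and is not part of the disclosure.

```python
import math

def range_to_point(r, theta, psi):
    """Convert a 3D range reading R[r, theta, psi] to a Cartesian point
    P[x, y, z] using the formulas given in the text:
    x = r*cos(psi)*cos(theta), y = r*cos(psi)*sin(theta), z = r*sin(psi)."""
    x = r * math.cos(psi) * math.cos(theta)
    y = r * math.cos(psi) * math.sin(theta)
    z = r * math.sin(psi)
    return (x, y, z)

# A reading straight ahead (theta = psi = 0) lies on the x-axis.
x, y, z = range_to_point(2.0, 0.0, 0.0)
```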
- The control unit 140 is configured to control an overall operation of the mobile robot, for example. The control unit 140 includes a patch generating unit 142, a position estimating unit 144, and a path generating unit 146, for example.
- The patch generating unit 142 extracts at least one feature point from 3D point cloud data, generates a patch including points around the extracted feature point, and stores the generated patch in the storage unit 150. In general, a feature point and a feature point descriptor are generated from an image obtained through an image sensor, by use of a feature point extracting algorithm such as a scale-invariant feature transform (SIFT), a maximally stable extremal region (MSER), or a Harris corner detector, and are used for position estimation. As an example, the patch including points around the feature point is used for position estimation. The patch may be provided in various 3D shapes. For example, the patch may be formed of points that are included in a regular hexahedron having a feature point of 3D cloud data as the center.
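The hexahedral patch mentioned above can be sketched as a selection of points inside an axis-aligned cube centered on the feature point; the cube half-width and the NumPy representation are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def extract_patch(cloud, feature_point, half_size=0.1):
    """Return the points of `cloud` (an N x 3 array) lying inside a regular
    hexahedron (here an axis-aligned cube) centered on `feature_point`."""
    diff = np.abs(cloud - np.asarray(feature_point, dtype=float))
    mask = np.all(diff <= half_size, axis=1)   # inside the cube on all axes
    return cloud[mask]

cloud = np.array([[0.00, 0.00, 0.00],
                  [0.05, 0.02, -0.03],
                  [0.50, 0.50, 0.50]])
patch = extract_patch(cloud, [0.0, 0.0, 0.0], half_size=0.1)
```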
- The patch generating unit 142 calculates normal vectors with respect to respective points of 3D point cloud data. The patch generating unit 142 converts the normal vector to an RGB image by setting 3D spatial coordinates (x, y, z), forming the normal vector, to individual RGB values. After that, the patch generating unit 142 converts the converted RGB image to a gray image, extracts corner points from the gray image by use of a corner extraction algorithm, and extracts a feature point from the extracted corner points. The feature point represents a point capable of specifying a predetermined shape, such as an edge or a corner of an object.
- Since the patch generating unit 142 generates a patch using points existing around a feature point in a 3D space, the feature point may be a point positioned in the middle of the generated patch.
- The patch generating unit 142 may store the patch together with position information of the feature point of the patch. In addition, the patch generating unit 142 may store the points forming the patch in the storage unit 150 to be divided into edge points forming an edge and normal points not forming an edge.
- The position estimating unit 144 may estimate the position of the mobile robot 100 by use of a standard value corresponding to the starting position and directional angle of the mobile robot 100. The estimating of the position of the mobile robot 100 may represent estimating the position and directional angle of the mobile robot 100 in a 2D plane. A patch including a feature point existing on a map may serve as a positional standard for the mobile robot. Accordingly, position information of the mobile robot 100 may include the position and directional angle with respect to a feature point recognized by the mobile robot 100.
- The position estimating unit 144 estimates and recognizes the position of the mobile robot 100 by use of comprehensive information including odometry information, angular velocity, and acceleration acquired by the moving unit 110 and the sensor unit 120. In addition, the position estimating unit 144 may perform position recognition at the same time as building up a map through simultaneous localization and mapping (SLAM), by using the estimated position as an input. SLAM represents an algorithm which simultaneously performs localization of a mobile robot and map building by repeating a process of building up a map of the environment of the mobile robot at a predetermined position and determining the next position of the mobile robot after travelling, based on the built up map.
- As an example, the position estimating unit 144 may use a Kalman Filter to extract new range information integrated with encoder information and gyrosensor information. A Kalman Filter includes predicting, in which the position is estimated based on a model, and updating, in which the estimated value is corrected through a sensor value.
- In the predicting, the position estimating unit 144 applies a preset model to a previously predicted value, thereby estimating an output for a given input.
- In the predicting process, the position estimating unit 144 may predict the current position by use of previous position information and newly acquired information from the sensor unit 120. In the updating, the position estimating unit 144 may keep track of a plurality of patches from 3D point cloud data, which is newly obtained based on the predicted position, and may estimate a more accurate position by use of the tracked information. Previously stored patches each have a relative coordinate system and contain information used for conversion between a relative coordinate system and an absolute coordinate system. Accordingly, the position of the stored patch is converted to a relative position based on a coordinate system of the robot by use of the conversion information and the position information of the robot predicted during the above predicting process, and the difference between the relative position of the stored patch and the position of the tracked patch is calculated, thereby estimating the position of the robot.
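The predict/update cycle described above can be illustrated with a minimal scalar Kalman filter; the motion model, sensor model, and noise variances below are invented for illustration and are not taken from the disclosure.

```python
def kalman_step(x, p, u, z, q=0.01, r=0.1):
    """One predict/update cycle of a scalar Kalman filter.
    x: previous position estimate, p: its variance,
    u: odometry increment (used in predicting),
    z: sensor measurement (used in updating).
    Process noise q and sensor noise r are illustrative values."""
    # Predict: apply the motion model to the previous estimate.
    x_pred = x + u
    p_pred = p + q
    # Update: correct the predicted value with the sensor value.
    k = p_pred / (p_pred + r)            # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Start uncertain at x = 0, move 1.0 by odometry, then observe z = 1.2.
x, p = kalman_step(0.0, 1.0, u=1.0, z=1.2)
```

The corrected estimate lands between the odometry prediction and the sensor reading, and the variance shrinks, which is the behavior the text describes for the updating step.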
- In addition, the position estimating unit 144 may remove an erroneously estimated result among the tracked patches. To this end, the position estimating unit 144 may use random sample consensus (RANSAC) or a joint compatibility branch and bound (JCBB). Further, the position estimating unit 144 may be provided in various structures capable of performing position recognition and map building.
- The position estimating unit 144 may track a patch as follows. The patch may include points around a feature point which is extracted from previously acquired 3D point cloud data. First, the position estimating unit 144 may calculate normal vectors with respect to respective points of the 3D point cloud data, and use the normal vectors to divide the respective points of the 3D point cloud data into edge points forming an edge and normal points not forming an edge. The position estimating unit 144 tracks the patch from the 3D point cloud data by use of an edge-based ICP algorithm in which the edge point of the patch is matched to the edge point of the 3D point cloud data and the normal point of the patch is matched to one of the edge point and the normal point of the 3D point cloud data without discriminating between the edge point and the normal point of the 3D point cloud data. The position estimating unit 144 matches the edge point of the patch to the closest edge point of the 3D point cloud data, and matches the normal point of the patch to the closest point of the 3D point cloud data.
- The path generating unit 146 generates a path by use of position information of the mobile robot 100 that is recognized by the position estimating unit 144.
- In an embodiment, the storage unit 150 may store operating systems, applications, and data that are needed for the operation of the position estimating apparatus 100. In addition, the storage unit 150 may include a patch storage unit 152 to store a plurality of patches each including points around a feature point, which is extracted from previously acquired 3D point cloud data.
- FIG. 2 illustrates a method of estimating a position of a mobile robot, according to one or more embodiments.
- The position estimating apparatus 100 acquires 3D point cloud data (210).
- The position estimating apparatus 100 estimates the position of the mobile robot 100 by tracking a plurality of patches from acquired 3D point cloud data. The plurality of patches each include points around a feature point which is extracted from previously acquired 3D point cloud data (220).
- If it is determined that an additional patch is needed, for example, if the number of registered patches is below a preset number or the number of tracked patches is below a preset number (230), the position estimating apparatus 100 may register additional patches by extracting a feature point from the acquired 3D point cloud data and generating a patch including points around the extracted feature point.
- FIG. 3 illustrates an estimating of a position by use of a registered patch, according to one or more embodiments.
- In FIG. 3, a frame N1 310 represents previously acquired 3D point cloud data, and a frame N2 320 represents newly acquired current 3D point cloud data.
- The position estimating apparatus 100 extracts a feature point from a region 311 of the frame N1 310, and stores a patch 333 including points around the extracted feature point in the patch storage unit 152. The patch storage unit 152 may store a plurality of patches, including the patch 333, through registration.
- If the position estimating apparatus 100 acquires the frame N2 320, the registered patches are tracked from the frame N2 320. For example, if the patch 333 is tracked from a region 321 of the frame N2 320, a relative position of the patch 333 on the frame N2 320 is identified. Accordingly, the position estimating apparatus 100 estimates an absolute position of the position estimating apparatus 100 by use of the relative position of the patch 333 identified on the frame N2 320.
- FIG. 4 illustrates a method of generating a patch, according to one or more embodiments. Though the patch generating method will be described with reference to FIG. 4 in conjunction with FIG. 1, embodiments are not intended to be limited to the same.
- The patch generating unit 142 calculates normal vectors with respect to respective points of previously acquired 3D point cloud data (410).
- The patch generating unit 142 converts each normal vector to an RGB image by setting 3D spatial coordinates (x, y, z), forming the normal vector, to individual RGB values (420). Accordingly, R (Red), G (Green) and B (Blue) of the generated RGB image each represent respective directions of the normal vector on each point.
- The patch generating unit 142 converts the converted RGB image to a gray image (430).
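Steps 420 and 430 can be sketched as follows, assuming unit normals whose components in [-1, 1] are linearly scaled to [0, 255] and a standard luminance formula for the gray conversion; both scalings are assumptions, as the disclosure does not fix them.

```python
import numpy as np

def normals_to_gray(normals):
    """Map unit normal vectors (an H x W x 3 array with components in
    [-1, 1]) to an RGB image by treating (x, y, z) as (R, G, B), then
    convert that RGB image to a gray image with luminance weights."""
    rgb = (normals + 1.0) * 0.5 * 255.0            # [-1, 1] -> [0, 255]
    gray = (0.299 * rgb[..., 0]
            + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2])
    return rgb, gray

# A flat surface: every normal points along +z, so the gray image is uniform.
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0
rgb, gray = normals_to_gray(normals)
```

On such a uniform gray image a corner extractor finds nothing; corners appear only where the normal directions, and hence the gray values, change, which is what step 440 relies on.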
- The patch generating unit 142 extracts corner points from the gray image by use of a corner extraction algorithm (440). Corner points may be extracted from the gray image through various methods such as the generally known Harris corner detection. Due to noise, a predetermined point may be erroneously extracted as an actual corner point. Accordingly, the patch generating unit 142 determines whether the extracted corner point is an eligible feature point, and extracts a corner point determined as a feature point (450). For example, the patch generating unit 142 may determine that the extracted corner point is not an eligible feature point if points around the extracted corner point have a gradient of normal vector below a predetermined level, or determine that the extracted corner point is an eligible feature point if points around the extracted corner point have a gradient of normal vector meeting a predetermined level.
- Points around the extracted feature point are determined as a patch and stored in the storage unit 150. In this case, the patch may be stored together with position information about the extracted feature point. In addition, the patch is stored such that the points forming the patch are divided into edge points forming an edge and normal points not forming an edge. This reduces the calculation time required for performing an edge based ICP that is used to track a patch.
- FIG. 5 illustrates a method of estimating the position based on patch tracking, according to one or more embodiments. Though the position estimating method will be described with reference to FIG. 5 in conjunction with FIG. 1, embodiments are not intended to be limited to the same.
- The position estimating unit 144 calculates normal vectors at respective points of 3D point cloud data (510).
- The position estimating unit 144 divides the respective points of the 3D point cloud data into edge points forming an edge or normal points not forming an edge by use of the normal vector (520). For example, points having a change in normal vector exceeding a predetermined level in a predetermined place are determined as edge points.
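The division in step 520 can be sketched as follows; the chain-of-neighbors structure and the 30-degree threshold are illustrative assumptions, since the disclosure only requires that points with a large change in normal vector be marked as edge points.

```python
import numpy as np

def classify_points(normals, angle_thresh_deg=30.0):
    """Label each point as an edge point (True) when the angle between its
    normal and its neighbor's normal exceeds the threshold; all other
    points are normal points (False).  A 1D neighbor chain is used here
    purely for illustration."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    edge = np.zeros(len(n), dtype=bool)
    for i in range(len(n) - 1):
        if np.dot(n[i], n[i + 1]) < cos_thresh:   # large change in normal
            edge[i] = edge[i + 1] = True
    return edge

# Normals flip from +z to +x between indices 1 and 2: those points form an edge.
normals = np.array([[0, 0, 1], [0, 0, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
labels = classify_points(normals)
```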
- The position estimating unit 144 keeps tracking a patch from the 3D point cloud data by use of an edge-based ICP. According to the edge-based ICP algorithm, the edge point of the patch is matched to the edge point of the 3D point cloud data and the normal point of the patch is matched to one of the edge point and the normal point of the 3D point cloud data without discriminating between the edge point and the normal point of the 3D point cloud data.
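A hedged sketch of the correspondence rule of the edge-based ICP just described: edge points of the patch search only among edge points of the cloud, while normal points search among all cloud points. The brute-force nearest-neighbor search is an illustrative simplification.

```python
import numpy as np

def match_correspondences(patch_pts, patch_is_edge, cloud_pts, cloud_is_edge):
    """For each patch point, return the index of its match in the cloud:
    an edge point matches the closest *edge* point of the cloud, while a
    normal point matches the closest cloud point of either kind."""
    edge_idx = np.flatnonzero(cloud_is_edge)
    all_idx = np.arange(len(cloud_pts))
    matches = []
    for p, is_edge in zip(patch_pts, patch_is_edge):
        cand = edge_idx if is_edge else all_idx
        d = np.linalg.norm(cloud_pts[cand] - p, axis=1)
        matches.append(int(cand[np.argmin(d)]))
    return matches

cloud = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
cloud_edge = np.array([False, False, True])     # only the last cloud point is an edge
patch = np.array([[0.1, 0.0, 0.0], [0.9, 0.0, 0.0]])
patch_edge = np.array([True, False])
m = match_correspondences(patch, patch_edge, cloud, cloud_edge)
```

Note that the patch edge point at x = 0.1 skips the nearer normal points and matches the distant edge point, exactly the constraint that prevents the erroneous matching illustrated for the general ICP.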
- Accordingly, the position estimating unit 144 divides points forming the tracked patches into edge points and normal points and performs matching between the tracked patch and the 3D cloud data. Points forming the patch may be divided into edge points and normal points in a manner similar to the method of dividing the points of the 3D point cloud data into edge points and normal points. In addition, as described with reference to FIG. 4, if the patch is stored such that the points forming the corresponding patch are divided into edge points and normal points, a process of distinguishing the points forming the patch between edge points and normal points may be omitted in the position estimating, thereby reducing the time required for the position estimating.
- FIG. 6A illustrates an example of 3D point cloud data acquired by a position estimating apparatus and a previously registered patch, according to one or more embodiments.
- In FIG. 6A, reference numeral 610 denotes 3D point cloud data schematically represented in a 2D form. Reference numeral 620 denotes a previously registered patch that is schematically represented in a 2D form. Some points of the point cloud data 610 represent edge points, and the remaining points represent normal points. A point 621 of the patch 620 represents an edge point and the remaining points represent normal points.
- FIG. 6B illustrates a patch tracking by use of a general ICP.
- According to a general ICP, matching between acquired 3D point cloud data and previously registered 3D point cloud data is performed without discriminating between a normal point and an edge point. Accordingly, as shown in step 1 of FIG. 6B, if a general ICP is applied to matching between 3D point cloud data and a patch, the matching is performed based on the closest point between the 3D point cloud data and the patch. This leads to the erroneous matching shown in step 2 of FIG. 6B. Such erroneous matching may be caused by partial occlusion of the 3D point cloud data due to the direction of a camera.
- FIG. 6C illustrates a patch tracking by use of an edge-based ICP, according to one or more embodiments.
- According to the edge-based ICP, normal vectors are calculated at respective points of acquired 3D point cloud data and respective points forming a patch. The points forming the acquired 3D point cloud data and the points forming the patch are divided into edge points forming an edge and normal points not forming an edge by use of the normal vectors. The position estimating unit 144 matches the edge point of the patch to the edge point of the 3D point cloud data and matches the normal point of the patch to one of the edge point and the normal point of the 3D point cloud data without discriminating between the edge point and the normal point of the 3D point cloud data.
- Accordingly, as shown in step 1 of FIG. 6C, edge points are first matched to be closest, and in step 2, normal points are then matched to be closest. As a result, the edge points of the patch are matched to the edge points of the 3D point cloud data. As described above, the edge-based ICP algorithm provides a more accurate matching result than a general ICP algorithm. Accordingly, the amount of data calculation is reduced while maintaining the accuracy of the position estimating, thereby reducing the time required for the position estimating.
- In addition to the above described embodiments, embodiments can also be implemented through computer readable code/instructions in/on a non-transitory medium, e.g., a computer readable medium, to control at least one processing device, such as a processor or computer, to implement any above described embodiment. The medium can correspond to any defined, measurable, and tangible structure permitting the storing and/or transmission of the computer readable code.
- The media may also include, e.g., in combination with the computer readable code, data files, data structures, and the like. One or more embodiments of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD ROM disks and DVDs; magneto-optical media such as optical disks; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like. Computer readable code may include both machine code, such as produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter, for example. The media may also be a distributed network, so that the computer readable code is stored and executed in a distributed fashion. Still further, as only an example, the processing element could include a processor or a computer processor, and processing elements may be distributed and/or included in a single device.
- The computer-readable media may also be embodied in at least one application specific integrated circuit (ASIC) or Field Programmable Gate Array (FPGA), which executes (processes like a processor) program instructions.
- While aspects of the present invention have been particularly shown and described with reference to differing embodiments thereof, it should be understood that these embodiments should be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each embodiment should typically be considered as available for other similar features or aspects in the remaining embodiments. Suitable results may equally be achieved if the described techniques are performed in a different order and/or if components in a described system, architecture, device, or circuit are combined in a different manner and/or replaced or supplemented by other components or their equivalents.
- Thus, although a few embodiments have been shown and described, with additional embodiments being equally available, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
Claims (14)
1. An apparatus estimating a position of a mobile robot, the apparatus comprising:
a range data acquisition unit configured to acquire three-dimensional (3D) point cloud data;
a storage unit configured to store a plurality of patches, each stored patch including points around a feature point which is extracted from previously acquired 3D point cloud data; and
a position estimating unit configured to estimate the position of the mobile robot by tracking the plurality of stored patches from the acquired 3D point cloud data.
2. The apparatus of claim 1 , further comprising
a patch generating unit configured to extract at least one feature point from the previously acquired 3D point cloud data, generate a patch including the at least one feature point and points around the extracted at least one feature point and store the generated patch in the storage unit as a stored patch.
3. The apparatus of claim 2 , wherein the patch generating unit calculates normal vectors with respect to respective points of the previously acquired 3D point cloud data, converts the normal vector to an RGB image by setting 3D spatial coordinates (x, y, z) forming the normal vector to individual RGB values, converts the converted RGB image to a gray image, extracts corner points from the gray image by use of a corner extraction algorithm, extracts a feature point from the extracted corner points and generates the patch as including the extracted feature point and points around the extracted feature point.
4. The apparatus of claim 3 , wherein the patch generating unit stores the generated patch together with position information of the extracted feature point of the generated patch.
5. The apparatus of claim 2 , wherein the patch generating unit stores points forming the generated patch in the storage unit such that the points forming the patch are stored as divided points of edge points forming an edge and normal points not forming an edge.
6. The apparatus of claim 5 , wherein the position estimating unit calculates normal vectors with respect to respective points of the 3D point cloud data, divides the respective points into edge points forming an edge and normal points not forming an edge by use of the normal vectors, and tracks the stored patch from the 3D point cloud data by use of an edge-based ICP algorithm in which the edge point of the stored patch is matched to the edge point of the 3D point cloud data and the normal point of the stored patch is matched to one of the edge point and the normal point of the 3D point cloud data without discriminating between the edge point and the normal point of the 3D point cloud data.
7. The apparatus of claim 6 , wherein the position estimating unit matches the edge point of the stored patch to a closest edge point of the 3D point cloud data, and matches the normal point of the stored patch to a closest point of the 3D point cloud data.
8. A method of estimating a position of a mobile robot, the method comprising:
acquiring three-dimensional (3D) point cloud data; and
estimating the position of the mobile robot by tracking a plurality of stored patches from the acquired 3D point cloud data, the plurality of stored patches each including respective feature points and respective points around each respective feature point extracted from previously acquired 3D point cloud data.
9. The method of claim 8 , further comprising generating a plurality of 3D point cloud patches, including:
extracting at least one feature point from the previously acquired 3D point cloud data;
generating a patch including points around the extracted at least one feature point; and
storing the generated patch as a stored patch.
10. The method of claim 9 , wherein the generating of the plurality of 3D cloud patches comprises:
calculating normal vectors with respect to respective points of the previously acquired 3D point cloud data;
converting the normal vectors to an RGB image by setting 3D spatial coordinates (x, y, z) forming the normal vectors to individual RGB values;
converting the converted RGB image to a gray image, and extracting corner points from the gray image by use of a corner extraction algorithm; and
extracting a feature point from the extracted corner points.
11. The method of claim 9 , wherein, in the storing of the generated patch, the stored patch is stored together with position information of the feature point of the stored patch.
12. The method of claim 9 , wherein, in the storing of the generated patch, points forming the stored patch are stored as divided points of edge points forming an edge and normal points not forming an edge.
13. The method of claim 12 , wherein the estimating of the position comprises:
calculating normal vectors with respect to respective points of the 3D point cloud data;
dividing the respective points into edge points forming an edge and normal points not forming an edge by use of the normal vectors; and
tracking the stored patch from the 3D point cloud data by use of an edge-based ICP algorithm in which the edge point of the stored patch is matched to the edge point of the 3D point cloud data and the normal point of the stored patch is matched to one of the edge point and the normal point of the 3D point cloud data without discriminating between the edge point and the normal point of the 3D point cloud data.
14. The method of claim 13 , wherein the tracking of the stored patch comprises:
matching the edge point of the stored patch to a closest edge point of the 3D point cloud data; and
matching the normal point of the stored patch to a closest point of the 3D point cloud data.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020100016812A KR20110097140A (en) | 2010-02-24 | 2010-02-24 | Apparatus for estimating location of moving robot and method thereof |
KR10-2010-0016812 | 2010-02-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
US20110205338A1 true US20110205338A1 (en) | 2011-08-25 |
Family
ID=44476170
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/929,414 Abandoned US20110205338A1 (en) | 2010-02-24 | 2011-01-21 | Apparatus for estimating position of mobile robot and method thereof |
Country Status (2)
Country | Link |
---|---|
US (1) | US20110205338A1 (en) |
KR (1) | KR20110097140A (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101901586B1 (en) | 2011-12-23 | 2018-10-01 | 삼성전자주식회사 | Apparatus for estimating the robot pose and method thereof |
KR101379732B1 (en) * | 2012-04-09 | 2014-04-03 | 전자부품연구원 | Apparatus and method for estimating gondola robot's position |
KR101325926B1 (en) * | 2012-05-22 | 2013-11-07 | 동국대학교 산학협력단 | 3d data processing apparatus and method for real-time 3d data transmission and reception |
US9420265B2 (en) | 2012-06-29 | 2016-08-16 | Mitsubishi Electric Research Laboratories, Inc. | Tracking poses of 3D camera using points and planes |
KR101490055B1 (en) * | 2013-10-30 | 2015-02-06 | 한국과학기술원 | Method for localization of mobile robot and mapping, and apparatuses operating the same |
KR101404655B1 (en) * | 2014-04-18 | 2014-06-09 | 국방과학연구소 | Power line extraction using eigenvalues ratio of 3d raw data of laser radar |
KR101878827B1 (en) * | 2016-11-30 | 2018-07-17 | 주식회사 유진로봇 | Obstacle Sensing Apparatus and Method for Multi-Channels Based Mobile Robot, Mobile Robot including the same |
KR101961171B1 (en) * | 2017-10-13 | 2019-03-22 | 한국과학기술연구원 | Self position detecting system of indoor moving robot and method for detecting self position using the same |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6072903A (en) * | 1997-01-07 | 2000-06-06 | Kabushiki Kaisha Toshiba | Image processing apparatus and image processing method |
US20070118248A1 (en) * | 2005-11-23 | 2007-05-24 | Samsung Electronics Co., Ltd. | Method and apparatus for reckoning position of moving robot |
US20090210092A1 (en) * | 2008-02-15 | 2009-08-20 | Korea Institute Of Science And Technology | Method for self-localization of robot based on object recognition and environment information around recognized object |
US20090262974A1 (en) * | 2008-04-18 | 2009-10-22 | Erik Lithopoulos | System and method for obtaining georeferenced mapping data |
US20100086050A1 (en) * | 2004-05-04 | 2010-04-08 | University Technologies International Inc. | Mesh based frame processing and applications |
US7831094B2 (en) * | 2004-04-27 | 2010-11-09 | Honda Motor Co., Ltd. | Simultaneous localization and mapping using multiple view feature descriptors |
US20100324769A1 (en) * | 2007-02-13 | 2010-12-23 | Yutaka Takaoka | Environment map generating method and mobile robot (as amended) |
US20110282622A1 (en) * | 2010-02-05 | 2011-11-17 | Peter Canter | Systems and methods for processing mapping and modeling data |
US20120206596A1 (en) * | 2006-12-01 | 2012-08-16 | Sri International | Unified framework for precise vision-aided navigation |
2010
- 2010-02-24: KR application KR1020100016812A, published as KR20110097140A (not active: application discontinued)
2011
- 2011-01-21: US application US 12/929,414, published as US20110205338A1 (not active: abandoned)
Cited By (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9205564B2 (en) | 2010-01-08 | 2015-12-08 | Koninklijke Philips N.V. | Uncalibrated visual servoing using real-time velocity optimization |
US8934003B2 (en) * | 2010-01-08 | 2015-01-13 | Koninklijke Philips N.V. | Uncalibrated visual servoing using real-time velocity optimization |
US20120307027A1 (en) * | 2010-01-08 | 2012-12-06 | Koninklijke Philips Electronics N.V. | Uncalibrated visual servoing using real-time velocity optimization |
US20130301932A1 (en) * | 2010-07-06 | 2013-11-14 | Ltu Technologies | Method and apparatus for obtaining a symmetry invariant descriptor from a visual patch of an image |
US9251432B2 (en) * | 2010-07-06 | 2016-02-02 | Jastec Co. | Method and apparatus for obtaining a symmetry invariant descriptor from a visual patch of an image |
US9582707B2 (en) | 2011-05-17 | 2017-02-28 | Qualcomm Incorporated | Head pose estimation using RGBD camera |
US9464894B2 (en) | 2011-09-30 | 2016-10-11 | Bae Systems Plc | Localising a vehicle along a route |
US9170334B2 (en) | 2011-09-30 | 2015-10-27 | The Chancellor Masters And Scholars Of The University Of Oxford | Localising transportable apparatus |
US10070101B2 (en) | 2011-09-30 | 2018-09-04 | The Chancellor Masters And Scholars Of The University Of Oxford | Localising transportable apparatus |
CN104395932A (en) * | 2012-06-29 | 2015-03-04 | 三菱电机株式会社 | Method for registering data |
US9183631B2 (en) * | 2012-06-29 | 2015-11-10 | Mitsubishi Electric Research Laboratories, Inc. | Method for registering points and planes of 3D data in multiple coordinate systems |
JP2015515655A (en) * | 2012-06-29 | 2015-05-28 | 三菱電機株式会社 | How to align data |
TWI569229B (en) * | 2012-06-29 | 2017-02-01 | 三菱電機股份有限公司 | Method for registering data |
US9816809B2 (en) | 2012-07-04 | 2017-11-14 | Creaform Inc. | 3-D scanning and positioning system |
US10928183B2 (en) | 2012-07-18 | 2021-02-23 | Creaform Inc. | 3-D scanning and positioning interface |
US10401142B2 (en) | 2012-07-18 | 2019-09-03 | Creaform Inc. | 3-D scanning and positioning interface |
JP2015524560A (en) * | 2012-07-18 | 2015-08-24 | Creaform Inc. | 3D scanning and positioning interface |
US9305364B2 (en) * | 2013-02-19 | 2016-04-05 | Caterpillar Inc. | Motion estimation systems and methods |
US20140233790A1 (en) * | 2013-02-19 | 2014-08-21 | Caterpillar Inc. | Motion estimation systems and methods |
US9983592B2 (en) | 2013-04-23 | 2018-05-29 | Samsung Electronics Co., Ltd. | Moving robot, user terminal apparatus and control method thereof |
KR20150145950A (en) * | 2014-06-20 | 2015-12-31 | 삼성전자주식회사 | Method and apparatus for extracting feature regions in point cloud |
US20150371110A1 (en) * | 2014-06-20 | 2015-12-24 | Samsung Electronics Co., Ltd. | Method and apparatus for extracting feature regions from point cloud |
CN105205441A (en) * | 2014-06-20 | 2015-12-30 | 三星电子株式会社 | Method and apparatus for extracting feature regions from point cloud |
KR102238693B1 (en) | 2014-06-20 | 2021-04-09 | 삼성전자주식회사 | Method and apparatus for extracting feature regions in point cloud |
US9984308B2 (en) * | 2014-06-20 | 2018-05-29 | Samsung Electronics Co., Ltd. | Method and apparatus for extracting feature regions from point cloud |
US9727978B2 (en) * | 2015-05-06 | 2017-08-08 | Korea University Research And Business Foundation | Method for extracting outer space feature information from spatial geometric data |
US20160328862A1 (en) * | 2015-05-06 | 2016-11-10 | Korea University Research And Business Foundation | Method for extracting outer space feature information from spatial geometric data |
US9710714B2 (en) * | 2015-08-03 | 2017-07-18 | Nokia Technologies Oy | Fusion of RGB images and LiDAR data for lane classification |
US20170039436A1 (en) * | 2015-08-03 | 2017-02-09 | Nokia Technologies Oy | Fusion of RGB Images and Lidar Data for Lane Classification |
CN107066926A (en) * | 2015-12-24 | 2017-08-18 | 达索系统公司 | Positioned using the 3D objects of descriptor |
US10062217B2 (en) * | 2015-12-24 | 2018-08-28 | Dassault Systemes | 3D object localization with descriptor |
US20170186245A1 (en) * | 2015-12-24 | 2017-06-29 | Dassault Systemes | 3d object localization with descriptor |
CN105761245A (en) * | 2016-01-29 | 2016-07-13 | 速感科技(北京)有限公司 | Automatic tracking method and device based on visual feature points |
US20180046885A1 (en) * | 2016-08-09 | 2018-02-15 | Cognex Corporation | Selection of balanced-probe sites for 3-d alignment algorithms |
US10417533B2 (en) * | 2016-08-09 | 2019-09-17 | Cognex Corporation | Selection of balanced-probe sites for 3-D alignment algorithms |
CN110603571A (en) * | 2017-04-26 | 2019-12-20 | Abb瑞士股份有限公司 | Robot system and method for operating a robot |
WO2019040997A1 (en) * | 2017-09-04 | 2019-03-07 | Commonwealth Scientific And Industrial Research Organisation | Method and system for use in performing localisation |
US11402509B2 (en) | 2017-09-04 | 2022-08-02 | Commonwealth Scientific And Industrial Research Organisation | Method and system for use in performing localisation |
CN110278719A (en) * | 2018-01-17 | 2019-09-24 | 华为技术有限公司 | Coding method, coding/decoding method and device |
WO2019140973A1 (en) * | 2018-01-17 | 2019-07-25 | 华为技术有限公司 | Encoding method, decoding method, and device |
CN110049323A (en) * | 2018-01-17 | 2019-07-23 | 华为技术有限公司 | Coding method, coding/decoding method and device |
US11388446B2 (en) * | 2018-01-17 | 2022-07-12 | Huawei Technologies Co., Ltd. | Encoding method, decoding method, and apparatus |
US10964053B2 (en) * | 2018-07-02 | 2021-03-30 | Microsoft Technology Licensing, Llc | Device pose estimation using 3D line clouds |
WO2020057338A1 (en) * | 2018-09-19 | 2020-03-26 | 华为技术有限公司 | Point cloud coding method and encoder |
US11875538B2 (en) | 2018-09-19 | 2024-01-16 | Huawei Technologies Co., Ltd. | Point cloud encoding method and encoder |
US20200097012A1 (en) * | 2018-09-20 | 2020-03-26 | Samsung Electronics Co., Ltd. | Cleaning robot and method for performing task thereof |
WO2020063294A1 (en) * | 2018-09-30 | 2020-04-02 | 华为技术有限公司 | Point cloud encoding and decoding method and encoder/decoder |
US11638997B2 (en) * | 2018-11-27 | 2023-05-02 | Cloudminds (Beijing) Technologies Co., Ltd. | Positioning and navigation method for a robot, and computing device thereof |
US20220191511A1 (en) * | 2019-03-14 | 2022-06-16 | Nippon Telegraph And Telephone Corporation | Data compression apparatus, data compression method, and program |
US11818361B2 (en) * | 2019-03-14 | 2023-11-14 | Nippon Telegraph And Telephone Corporation | Data compression apparatus, data compression method, and program |
WO2020187191A1 (en) * | 2019-03-19 | 2020-09-24 | 华为技术有限公司 | Point cloud encoding and decoding method and codec |
TWI675000B (en) * | 2019-03-22 | 2019-10-21 | 所羅門股份有限公司 | Object delivery method and system |
US20210078597A1 (en) * | 2019-05-31 | 2021-03-18 | Beijing Sensetime Technology Development Co., Ltd. | Method and apparatus for determining an orientation of a target object, method and apparatus for controlling intelligent driving control, and device |
CN116985141A (en) * | 2023-09-22 | 2023-11-03 | 深圳市协和传动器材有限公司 | Industrial robot intelligent control method and system based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
KR20110097140A (en) | 2011-08-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20110205338A1 (en) | Apparatus for estimating position of mobile robot and method thereof | |
CN108406731B (en) | Positioning device, method and robot based on depth vision | |
KR101776622B1 (en) | Apparatus for recognizing location mobile robot using edge based refinement and method thereof | |
CN110807350B (en) | System and method for scan-matching oriented visual SLAM | |
KR101708659B1 (en) | Apparatus for recognizing location mobile robot using search based correlative matching and method thereof | |
KR101725060B1 (en) | Apparatus for recognizing location mobile robot using key point based on gradient and method thereof | |
KR102054455B1 (en) | Apparatus and method for calibrating between heterogeneous sensors | |
KR101776621B1 (en) | Apparatus for recognizing location mobile robot using edge based refinement and method thereof | |
US8380384B2 (en) | Apparatus and method for localizing mobile robot | |
JP2017526082A (en) | Non-transitory computer-readable medium encoded with computer program code for causing a motion estimation method, a moving body, and a processor to execute the motion estimation method | |
US9679384B2 (en) | Method of detecting and describing features from an intensity image | |
KR101776620B1 (en) | Apparatus for recognizing location mobile robot using search based correlative matching and method thereof | |
KR101633620B1 (en) | Feature registration apparatus for image based localization and method the same | |
US11525923B2 (en) | Real-time three-dimensional map building method and device using three-dimensional lidar | |
Rodríguez Flórez et al. | Multi-modal object detection and localization for high integrity driving assistance | |
US8639021B2 (en) | Apparatus and method with composite sensor calibration | |
CN108038139B (en) | Map construction method and device, robot positioning method and device, computer equipment and storage medium | |
JP6782903B2 (en) | Self-motion estimation system, control method and program of self-motion estimation system | |
US10607350B2 (en) | Method of detecting and describing features from an intensity image | |
Fiala et al. | Visual odometry using 3-dimensional video input | |
Sehgal et al. | Real-time scale invariant 3D range point cloud registration | |
Huang et al. | Mobile robot localization using ceiling landmarks and images captured from an rgb-d camera | |
KR20210090384A (en) | Method and Apparatus for Detecting 3D Object Using Camera and Lidar Sensor | |
US20220291009A1 (en) | Information processing apparatus, information processing method, and storage medium | |
Cupec et al. | Global localization based on 3d planar surface segments |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; Assignors: CHOI, KI-WAN, PARK, JI-YOUNG, LEE, HYOUNG-KI; Reel/Frame: 025718/0858; Effective date: 2011-01-18 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |