WO2011158886A1 - Information processing apparatus and processing method thereof - Google Patents
- Publication number
- WO2011158886A1 (PCT/JP2011/063755)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target object
- geometric features
- normals
- occlusion
- geometric
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/42—Simultaneous measurement of distance and other co-ordinates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
Definitions
- The present invention relates to an information processing apparatus and a processing method thereof.
- Assembling a product by a robot requires measuring the position, orientation, and three-dimensional shape of a target component.
- Distances between correspondences are minimized to align measurement point groups and estimate the position and orientation of the target object.
- A correspondence error readily occurs when searching for correspondences between measurement points and a shape model, or between measurement point groups. Even if the distance between erroneous corresponding points is minimized, no correct geometric relationship can be obtained, resulting in an alignment failure or unstable calculation.
- M-estimation is often used to apply a weight based on a statistical value pertaining to the distance between corresponding points, as described in "Robust ICP Registration Algorithm Extended by M-estimation".
- The weight is thus set large for a distance between correspondences close to the average and small for one far from the average, thereby reducing the influence on alignment.
- This method is very effective for reducing the influence of noise such as outliers.
- However, occlusion generates a correspondence error that this method cannot discriminate from a correct correspondence.
- An aspect of the present invention is to eliminate the above-mentioned problems with the conventional technology.
- the present invention in its first aspect provides an information processing apparatus
- first acquisition means configured to acquire a plurality of geometric features and normals at the respective geometric features, from a target object arranged at a first position
- second acquisition means configured to acquire a plurality of normals corresponding to the respective geometric features of the target object, from a shape model for the target object that is arranged at a second position different from the first position
- calculation means configured to calculate direction differences between the normals acquired by the first acquisition means and the normals acquired by the second acquisition means, for respective pairs of corresponding geometric features of the target object at the first position and the shape model at the second position; and
- determination means configured to determine whether or not occlusion occurs at a geometric feature of the plurality of geometric features by comparing the direction differences calculated by the calculation means with each other.
- the present invention in its second aspect provides an information processing apparatus
- first acquisition means configured to acquire a plurality of geometric features and normals at the respective geometric features, from a first target object when the first target object exists at a first position
- second acquisition means configured to acquire a plurality of normals corresponding to the respective geometric features of the first target object, from a second target object which exists at a second position different from the first position and is identical in shape to the first target object
- calculation means configured to calculate direction differences between the normals acquired by the first acquisition means and the normals acquired by the second acquisition means, for respective pairs of corresponding geometric features of the first target object at the first position and the second target object at the second position; and determination means configured to determine whether or not occlusion occurs at a geometric feature by comparing the direction differences calculated by the calculation means with each other.
- the present invention in its third aspect provides a processing method in an information processing apparatus, comprising: a first acquisition step of acquiring a plurality of geometric features and normals at the respective geometric features, from a target object arranged at a first position; a second acquisition step of acquiring a plurality of normals corresponding to the respective geometric features of the target object, from a shape model for the target object that is arranged at a second position different from the first position; a calculation step of calculating direction differences between the normals acquired in the first acquisition step and the normals acquired in the second acquisition step, for respective pairs of corresponding geometric features of the target object at the first position and the shape model at the second position; and a determination step of determining whether or not occlusion occurs at a geometric feature by comparing the direction differences calculated in the calculation step with each other.
- the present invention in its fourth aspect provides a processing method in an information processing apparatus, comprising: a first acquisition step of acquiring a plurality of geometric features and normals at the respective geometric features, from a first target object when the first target object exists at a first position; a second acquisition step of acquiring a plurality of normals corresponding to the respective geometric features of the first target object, from a second target object which exists at a second position different from the first position and is identical in shape to the first target object; a calculation step of calculating direction differences between the normals acquired in the first acquisition step and the normals acquired in the second acquisition step, for respective pairs of corresponding geometric features of the first target object at the first position and the second target object at the second position; and a determination step of determining whether or not occlusion occurs at a geometric feature by comparing the direction differences calculated in the calculation step with each other.
- The present invention can improve the accuracy of correspondence between measurement points by decreasing a correspondence error between measurement data and a shape model or between measurement point groups.
- FIGs. 1A and 1B are views for explaining the principle of occlusion determination based on the normal difference;
- FIG. 2 is a block diagram showing the arrangement of an information processing apparatus 1;
- FIG. 3 is a flowchart showing an occlusion determination processing sequence
- FIGs. 4A and 4B are views for explaining the principle of occlusion determination based on the position difference
- FIG. 5 is a flowchart showing a processing sequence of estimating the position and orientation of a target object
- FIG. 6 is a block diagram showing the arrangement of an information processing apparatus.
- FIG. 7 is a flowchart showing a processing sequence of aligning measurement data of a target object.
- the first embodiment will explain a case in which occlusion information is acquired when estimating the position and orientation of a target object by fitting a three-dimensional shape model for the target object to three-dimensional point groups obtained by measuring the target object.
- Fig. 1A shows a target object O1 whose position and orientation are measured at the first position (first acquisition), and a target object shape model M1 whose position and orientation at the second position are set in advance (second acquisition).
- A normal difference d1 of a pair K1 of corresponding points and a normal difference d2 of a pair K2 of corresponding points indicate the orientation difference between the shape model M1 and the target object O1 and thus have the same value, as shown in Fig. 1A.
- A normal difference d1′ of a pair K1′ of corresponding points and a normal difference d2′ of a pair K2′ of erroneous corresponding points owing to occlusion have different values, as shown in Fig. 1B.
- A geometric feature q2′ of the shape model and measurement data p2′ of an occluding object erroneously correspond to each other owing to occlusion.
- The orientation of the plane of the measurement data p2′ differs from that of the target object O1′ which should originally correspond to the geometric feature q2′.
- The normal differences d1′ and d2′ therefore have different values.
- The highest-frequency value of the normal differences over all pairs of corresponding points is calculated. If the correspondence error is partial, the highest-frequency difference value (mode) approximates the normal difference arising from the orientation difference between the shape model and the target object.
- Fig. 2 shows the arrangement of the information processing apparatus 1 in the first embodiment. As shown in Fig. 2, the information processing apparatus 1 includes a geometric feature measurement unit 110, a geometric feature acquisition unit 120, an occlusion information acquisition unit 130, and a position/orientation calculation unit 140. The building units of the information processing apparatus 1 will be explained.
- the geometric feature measurement unit 110 measures the positions of three-dimensional points and normal directions at these positions for a plurality of geometric features of a target object.
- For example, a sensor irradiates a target with a laser beam, slit light, or pattern light by an active method, captures the reflected light with a camera, and measures the distance by triangulation to obtain three-dimensional points.
- The distance sensor is not limited to this method, and may adopt the time-of-flight method using the flight time of light, or a passive method using a stereo camera or the like.
- the normal direction of measurement data is calculated using the positions of neighboring geometric features.
- the normal direction can be calculated by performing principal component analysis for the position of a geometric feature of interest and those of neighboring geometric features, and defining the third principal component as the normal direction.
- the normal direction may be calculated by performing plane fitting to the position of a geometric feature of interest and those of neighboring geometric features.
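- As an illustrative sketch (not part of the patent text), the PCA-based normal estimation described above could be written as follows; the function name, the brute-force neighbor search, and the neighborhood size k are assumptions:

```python
# A minimal sketch of the PCA-based normal estimation above; the function name,
# brute-force neighbor search, and neighborhood size k are illustrative assumptions.
import numpy as np

def estimate_normal(points, center_idx, k=10):
    """Estimate the plane normal at points[center_idx] from its k nearest neighbors."""
    center = points[center_idx]
    dists = np.linalg.norm(points - center, axis=1)
    neighbors = points[np.argsort(dists)[:k]]     # k nearest neighbors (brute force)

    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered                   # 3x3 covariance of the neighborhood
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order

    # The third principal component (least variance) is taken as the normal direction.
    normal = eigvecs[:, 0]
    return normal / np.linalg.norm(normal)
```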
- the representation is not limited to the normal vector and may be two vectors perpendicular to the normal as long as they represent the orientation of a plane.
- the geometric feature of a target object is not limited to a three-dimensional point, and suffices to have a position and plane orientation as attributes of the geometric feature.
- the geometric feature may be feature point groups obtained from a moving image by Structure-from-Motion, or a plane obtained by plane fitting to measurement point groups. Further, the geometric feature of a target object that has been saved in a storage device may be acquired.
- the geometric feature acquisition unit 120 acquires, from a shape model for a target object, plane positions and normal directions as geometric features corresponding to a plurality of geometric features acquired by the geometric feature measurement unit 110, and outputs a plurality of pairs of corresponding geometric features.
- the embodiment uses the plane as the geometric feature of the shape model.
- the geometric feature of the shape model is not limited to the plane.
- the geometric feature may be a three-dimensional point having the normal direction, or a geometric feature having information about the position and plane orientation.
- Nearest neighbor search is used to search for a geometric feature of the shape model that corresponds to a geometric feature measured by the geometric feature measurement unit 110.
- The shape model is arranged at the approximate position and orientation of the target object. The distances in the three-dimensional space between geometric features measured by the geometric feature measurement unit 110 and geometric features of the shape model are calculated, and geometric features having the shortest distance between them are made to correspond to each other.
- the correspondence may be made in a reverse order, and a geometric feature measured by the geometric feature measurement unit 110 that is closest to a geometric feature of the shape model may be searched for.
- the correspondence method is not limited to nearest neighbor search.
- Alternatively, a two-dimensional plane viewed from the sensor may be created using the focal length and the angle of view, which are parameters of the camera model of the sensor used in the geometric feature measurement unit 110, and geometric features may be made to correspond to each other on a projection image. Also, geometric features acquired by the geometric feature measurement unit 110 and geometric features of a shape model arranged at the approximate position and orientation of the target object may be projected onto a two-dimensional plane, and geometric features closest to each other on the projection plane may be made to correspond to each other.
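- A minimal sketch of the nearest neighbor correspondence search follows; the use of a k-d tree (scipy's cKDTree) and all names are illustrative assumptions, not the patent's prescribed implementation:

```python
# A minimal sketch of the nearest-neighbor correspondence search; the k-d tree
# and all names are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def find_correspondences(measured_pts, model_pts):
    """Pair each measured point with the closest model-feature position in 3-D space."""
    tree = cKDTree(model_pts)               # model arranged at the approximate pose
    dists, idx = tree.query(measured_pts)   # one nearest model feature per measured point
    return idx, dists                       # model_pts[idx[i]] corresponds to measured_pts[i]
```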
- the occlusion information acquisition unit 130 determines whether a correspondence error has occurred due to occlusion, and acquires the information.
- An occlusion information acquisition method will be explained with reference to an occlusion determination flowchart shown in Fig. 3.
- the CPU of the information processing apparatus 1 executes each process shown in Fig. 3.
- The occlusion information acquisition unit 130 calculates the mode of the normal differences between corresponding points using all pairs of corresponding points, and determines that no occlusion has occurred for corresponding points having a normal difference that falls within the range of the mode (normal reference value) to predetermined threshold 1.
- the normal difference can be obtained by, for example, the following method.
- Let np and nq be the normal vector of a geometric feature of the shape model and the normal vector of a measurement point, respectively. The rotation between them is represented by a rotation axis given by the cross product of the two normals and a rotation angle given by the angle between them.
- The vector represented by the rotation axis and rotation angle is converted into an Eulerian angle representation.
- The mode is calculated independently for each component, and Eulerian angles having the modes as components are converted again into a rotation angle representation about the rotation axis.
- the calculation methods of the normal difference and its mode are not limited to them, and the normal vector difference and its mode may be employed.
- the normal reference value is not limited to the mode of all the normal differences between corresponding points, and suffices to be a value equivalent to the relative orientations of measurement data and the shape model.
- the mode of normal differences may be calculated not from all correspondence pairs but from correspondence pairs extracted at random. The average may be calculated using only normal differences between corresponding points that fall within a predetermined range, and used as the normal reference value. It is also possible to create a histogram of normal differences and use a peak as the normal reference value.
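- The computation of the normal reference value might be sketched as follows; the histogram binning used to find the mode and all names are assumptions:

```python
# A minimal sketch of the normal reference value computation described above;
# the histogram binning used for the mode and all names are assumptions.
import numpy as np
from scipy.spatial.transform import Rotation

def normal_difference(n_p, n_q):
    """Rotation vector (axis * angle) taking the measurement normal n_q to the model normal n_p."""
    axis = np.cross(n_q, n_p)
    s = np.linalg.norm(axis)
    c = np.clip(np.dot(n_q, n_p), -1.0, 1.0)
    if s < 1e-12:                          # (anti)parallel normals: treated as zero rotation here
        return np.zeros(3)
    return axis / s * np.arctan2(s, c)

def normal_reference_value(model_normals, data_normals, bins=36):
    """Mode of the normal differences, taken independently per Eulerian-angle component."""
    rotvecs = np.array([normal_difference(p, q)
                        for p, q in zip(model_normals, data_normals)])
    eulers = Rotation.from_rotvec(rotvecs).as_euler('xyz')
    mode = np.empty(3)
    for i in range(3):                     # histogram peak of each component
        hist, edges = np.histogram(eulers[:, i], bins=bins)
        j = np.argmax(hist)
        mode[i] = 0.5 * (edges[j] + edges[j + 1])
    # Convert back to a rotation-angle representation about a rotation axis.
    return Rotation.from_euler('xyz', mode).as_rotvec()
```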
- Step S0030 will be explained.
- The orientation difference is corrected using the normal reference value so that occlusion can be separated from the small position and orientation differences of the shape model, acquiring occlusion information.
- Figs. 4A and 4B are views for explaining occlusion determination based on the position difference.
- The normal reference value is regarded as a relative orientation ds, and the difference between a position (q1′, q2′), obtained by rotating the position (q1, q2) of the geometric feature of either the shape model or the measurement data, and the corresponding position is evaluated.
- the position of the geometric feature of the shape model is corrected using a normal difference that is converted into a rotation angle representation about the rotation axis.
- Either the position of a measurement point or that of the geometric feature of the shape model is rotated using the rotation axis v and rotation angle a of the mode (normal reference value) of the normal difference.
- the distance or depth value of the rotated position in the three-dimensional space is compared with that of the other position, and the difference is defined as the position difference.
- the calculation method is not limited to the above one as long as the orientation difference between the shape model and measurement data can be canceled using the normal reference value.
- For example, the normal reference value may be converted into a rotation matrix and used to rotate the position.
- the mode of position differences is set as reference value 2, and it is determined that no occlusion has occurred for corresponding points, the position difference between which falls within the range of reference value 2 to predetermined threshold 2.
- reference value 2 of the position difference is not limited to the mode of position differences, and may be the average of position differences or a peak of the histogram of position differences.
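- A sketch of the position-difference test in step S0030 follows; reading the "range of reference value 2 to threshold 2" as staying within threshold 2 of reference value 2 is an interpretation, and all names are assumptions:

```python
# A minimal sketch of the position-difference test in step S0030; reading the
# "range of reference value 2 to threshold 2" as |diff - ref2| <= threshold2 is
# an interpretation, and all names are assumptions.
import numpy as np
from scipy.spatial.transform import Rotation

def occlusion_free_mask(model_pts, data_pts, ref_rotvec, threshold2, bins=50):
    rot = Rotation.from_rotvec(ref_rotvec)        # normal reference value as a rotation
    diffs = np.linalg.norm(data_pts - rot.apply(model_pts), axis=1)

    hist, edges = np.histogram(diffs, bins=bins)  # reference value 2: mode of the differences
    j = np.argmax(hist)
    ref2 = 0.5 * (edges[j] + edges[j + 1])

    return np.abs(diffs - ref2) <= threshold2     # True where no occlusion is assumed
```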
- Occlusion information is not limited to the presence/absence of occlusion. As occlusion information, the likelihood of occlusion may be output as successive values corresponding to differences between the normal differences between corresponding points and the normal reference value.
- The index of the likelihood may be calculated in accordance with equations such as equations (4) to (6), where:
- i is a suffix for uniquely identifying a normal or position difference;
- ri is the index of likelihood;
- gi is the vector of the normal or position difference;
- ci is the reference value of the normal or position difference;
- si is the standard deviation of the normal or position difference;
- f is the weight function.
- the weight function f is arbitrary, such as the Tukey function or the Huber function shown in equation (6), as long as it gives a small weight to data having a large error x and a large weight to data having a small error x, where t is a constant.
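- Equations (4) to (6) are not reproduced in this text; the sketch below assumes the plausible form ri = f(||gi − ci|| / si) built from the symbols defined above, with the Huber weight standing in for equation (6):

```python
# The sketch assumes the plausible form r_i = f(||g_i - c_i|| / s_i) built from
# the symbols defined above, with the Huber weight standing in for equation (6).
import numpy as np

def huber_weight(x, t=1.0):
    """Weight 1 for small errors |x| <= t, decaying as t/|x| for large errors."""
    ax = np.abs(x)
    return np.where(ax <= t, 1.0, t / np.maximum(ax, 1e-12))

def occlusion_likelihood(g, c, s, t=1.0):
    """Index r_i of the likelihood that each pair is occlusion-free.

    g: (N, 3) normal or position difference vectors, c: reference value, s: std dev.
    """
    x = np.linalg.norm(g - c, axis=1) / s   # normalized deviation from the reference value
    return huber_weight(x, t)
```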
- the product of occlusion information based on the normal difference in step S0020 and occlusion information based on the position difference in step S0050 serves as an output from the occlusion information acquisition unit 130.
- the occlusion information may be either occlusion information based on the normal difference or occlusion information based on the position difference, or a combination of them such as the sum of them.
- The position/orientation calculation unit 140 calculates the position and orientation of the target object using pairs of geometric features for which the occlusion information acquisition unit 130 has determined that no occlusion has occurred. The position and orientation are estimated by minimizing an evaluation function based on the distances between paired geometric features.
- the method is arbitrary as long as the position and orientation of a target object are estimated using an evaluation function based on the differences between geometric features of the shape model and geometric features measured by the geometric feature measurement unit 110.
- When the occlusion information acquisition unit 130 calculates a numerical value indicating the likelihood of occlusion for each pair of geometric features, the evaluation function multiplied by the numerical value as a weight may be minimized.
- FIG. 5 is a flowchart showing a processing sequence of estimating the position and orientation of a target object in the first embodiment.
- In step S1010, the geometric feature measurement unit 110 measures measurement data of a geometric feature of a target object.
- In step S1020, the geometric feature acquisition unit 120 acquires a geometric feature of a shape model that corresponds to the geometric feature measured in step S1010, and outputs a pair of corresponding geometric features.
- In step S1030, the occlusion information acquisition unit 130 acquires occlusion information of the pair of the geometric features of the measurement data and shape model that has been acquired in step S1020.
- the presence/absence of occlusion is determined as occlusion information for each pair of corresponding geometric features.
- a numerical value indicating the likelihood of occlusion may be calculated.
- In step S1040, the position/orientation calculation unit 140 updates the position and orientation of the target object. The position and orientation are calculated by repetitively correcting the approximate values of the position and orientation of the target object by iterative operation until it is determined in step S1050 that the position and orientation converge.
- the calculation method may be an optimization method such as the Levenberg-Marquardt method or steepest descent method. Another nonlinear optimization calculation method such as the conjugate gradient method is also possible .
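- One weighted Gauss-Newton update for the pose might look as follows, using the point-to-plane distance; the small-rotation parameterization and all names are illustrative assumptions rather than the patent's prescribed implementation:

```python
# A minimal sketch of one weighted Gauss-Newton update for the pose using the
# point-to-plane distance; the small-rotation parameterization and all names are
# illustrative assumptions, with w the occlusion weights (1 or a likelihood index).
import numpy as np

def gauss_newton_step(model_pts, model_normals, data_pts, w):
    """Return the update [omega, t] minimizing
    sum_i w_i * (n_i . (p_i + omega x p_i + t - q_i))^2 around the current pose."""
    r = np.einsum('ij,ij->i', model_normals, model_pts - data_pts)       # residuals
    J = np.hstack([np.cross(model_pts, model_normals), model_normals])   # (N, 6) Jacobian
    H = J.T @ (w[:, None] * J)        # 6x6 weighted normal-equation matrix
    g = J.T @ (w * r)
    return np.linalg.solve(H, -g)     # [omega (3,), t (3,)]
```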
- In step S1050, the position/orientation calculation unit 140 executes convergence determination. If the position and orientation converge, the process ends; if NO, the position and orientation of the target object are set as the approximate position and orientation, and the process returns to step S1020.
- the position and orientation are determined to converge when the difference between the sums of squares of error vectors before and after updating the position and orientation is almost zero.
- the determination condition is not limited to this.
- the position and orientation are determined to converge when the update amounts of the position and orientation are almost zero.
- Fig. 6 shows the arrangement of the information processing apparatus 2 in the second embodiment. As shown in Fig. 6, the information processing apparatus 2 includes a geometric feature measurement unit 210, a geometric feature acquisition unit 220, an occlusion information acquisition unit 230, and an alignment unit 240. The building units of the information processing apparatus 2 will be explained.
- the geometric feature measurement unit 210 measures the positions and normal directions (plane orientations) of geometric features of a target object.
- the position and normal direction of a three-dimensional point are measured, but the geometric feature is arbitrary as long as it has the position and plane orientation as attributes.
- the position and normal direction of the geometric feature of a target object that have been saved in a storage device may be acquired.
- Geometric features measured by the geometric feature measurement unit 210 will be referred to as measurement data 1.
- The geometric feature acquisition unit 220 measures the positions and normal directions (plane orientations) of geometric features of the target object, and outputs pairs of geometric features corresponding to measurement data 1. The position and normal direction of a three-dimensional point are measured, but the geometric feature is arbitrary as long as it has a position and plane orientation as attributes.
- Alternatively, the position and normal direction of the geometric feature of the target object which have been saved in a storage device may be acquired. Note that geometric features different from measurement data 1 are acquired.
- Geometric features acquired by the geometric feature acquisition unit 220 will be referred to as measurement data 2. After acquiring geometric features, the geometric feature acquisition unit 220 searches for correspondences between measurement data 1 and measurement data 2.
- The occlusion information acquisition unit 230 acquires occlusion information between corresponding geometric features. The difference from the occlusion information acquisition unit 130 in the first embodiment is that occlusion information is acquired between two sets of measurement data rather than between measurement data and a shape model.
- The alignment unit 240 aligns measurement data 1 and measurement data 2 using pairs of geometric features for which the occlusion information acquisition unit 230 has determined that no occlusion has occurred.
- The alignment is done by minimizing an evaluation function based on the distances in the three-dimensional space between paired geometric features. The method is arbitrary as long as the position and orientation of a target object are estimated using an evaluation function based on the differences between paired geometric features.
- When the occlusion information acquisition unit 230 calculates a numerical value indicating the likelihood of occlusion for each pair of geometric features, the evaluation function multiplied by the numerical value as a weight may be minimized.
- Fig. 7 is a flowchart showing an alignment processing sequence for a plurality of three-dimensional points in the second embodiment.
- The CPU of the information processing apparatus 2 executes each process shown in Fig. 7. In this processing, one of measurement data 1 and measurement data 2 is used as reference data; the relative position and orientation of the other measurement data with respect to the reference data are calculated by repetitively correcting them by iterative operation, and the measurement data are aligned with each other.
- Three-dimensional points obtained by integrating the aligned measurement data into one coordinate system serve as points representing the three-dimensional shape of the target object.
- In steps S2000 and S2010, the geometric feature measurement unit 210 and the geometric feature acquisition unit 220 measure and acquire measurement data of geometric features of the target object, respectively.
- The measurement data include an approximate position and orientation at the measurement viewpoint. For example, a value measured using a GPS and an inertial sensor is given as the approximate position and orientation at the measurement viewpoint.
- the approximate position and orientation suffice to provide the approximate relative positions and orientations of measurement data, and may be measured using any sensor as long as they can be obtained.
- An approximate position and orientation may be given manually, or calculated by manually giving the correspondence between measurement data.
- In step S2020, the geometric feature acquisition unit 220 makes the geometric features of the measurement data that have been measured in steps S2000 and S2010 correspond to each other.
- the second embodiment adopts nearest neighbor search to make a geometric feature of one measurement data correspond to the nearest geometric feature of the other measurement data based on the approximate position and orientation.
- the correspondence search method is not limited to this, and may be a method of making geometric features correspond to each other by projecting them onto an image according to a conventional technique.
- In step S2030, the occlusion information acquisition unit 230 acquires occlusion information of the pair of the geometric features of the measurement data that have been made to correspond to each other in step S2020.
- a numerical value indicating the likelihood of occlusion may be calculated.
- In step S2040, the alignment unit 240 updates the measured position and orientation obtained when the measurement data were measured, by a nonlinear optimization method using pairs of geometric features. In this processing, the distance in the three-dimensional space between paired corresponding measurement points is minimized by the Gauss-Newton method.
- The measured position and orientation of each measurement data are repetitively corrected by iterative operation until it is determined in step S2050 that the measured position and orientation converge.
- the measured position/orientation calculation method is not limited to this.
- The calculation method may be a conventional optimization method such as the Levenberg-Marquardt method or steepest descent method, or another nonlinear optimization calculation method such as the conjugate gradient method.
- In step S2050, the alignment unit 240 executes convergence determination. If the measured position and orientation converge, the process ends; if NO, the measured position and orientation are updated, and the process returns to step S2020.
- the measured position and orientation are determined to converge when the difference between the sums of squares of error vectors before and after updating the measured position and orientation is almost zero.
- the determination condition is not limited to this. For example, the measured position and orientation are determined to converge when the update amounts of the measured position and orientation are almost zero.
- Finally, the measurement data are integrated into one coordinate system based on the estimated measured positions and orientations, and the integrated data are output as a set of geometric features representing a three-dimensional shape.
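- The final integration can be sketched as follows, assuming the estimated pose of each measurement is available as a rotation matrix R and translation t; names are illustrative:

```python
# A minimal sketch of the final integration, assuming the estimated pose of each
# measurement is available as a rotation matrix R and translation t.
import numpy as np

def integrate(measurements, poses):
    """measurements: list of (N_i, 3) point arrays; poses: list of (R, t) per measurement."""
    merged = [pts @ R.T + t for pts, (R, t) in zip(measurements, poses)]
    return np.vstack(merged)          # one point set representing the 3-D shape
```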
- In the second embodiment, two three-dimensional point groups are aligned with each other.
- the number of measurement data groups is not limited to two, and may be three or more.
- When there are three or more measurement data groups, the second embodiment is applied to two arbitrary measurement data groups at a time.
- The first and second embodiments can be applied to measurement of the three-dimensional shape of an object, object recognition, estimation of the self-position of a robot, and the like.
- Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s). The program is provided to the computer, for example, via a network or from a recording medium of various types serving as the memory device (for example, a computer-readable medium).
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Networks & Wireless Communication (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Theoretical Computer Science (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/701,281 US8971576B2 (en) | 2010-06-18 | 2011-06-09 | Information processing apparatus and processing method thereof |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2010-139947 | 2010-06-18 | ||
| JP2010139947A JP5615055B2 (ja) | 2010-06-18 | Information processing apparatus and processing method thereof |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2011158886A1 true WO2011158886A1 (en) | 2011-12-22 |
Family
ID=44627682
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2011/063755 Ceased WO2011158886A1 (en) | 2010-06-18 | 2011-06-09 | Information processing apparatus and processing method thereof |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US8971576B2 (en) |
| JP (1) | JP5615055B2 (ja) |
| WO (1) | WO2011158886A1 (en) |
Families Citing this family (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5671281B2 (ja) | 2010-08-20 | 2015-02-18 | Canon Inc. | Position and orientation measurement apparatus, control method of position and orientation measurement apparatus, and program |
| JP6004809B2 (ja) * | 2012-03-13 | 2016-10-12 | Canon Inc. | Position and orientation estimation apparatus, information processing apparatus, and information processing method |
| JP6092530B2 (ja) | 2012-06-18 | 2017-03-08 | Canon Inc. | Image processing apparatus and image processing method |
| US9256788B2 (en) * | 2012-11-02 | 2016-02-09 | Qualcomm Incorporated | Method for initializing and solving the local geometry or surface normals of surfels using images in a parallelizable architecture |
| JP6325896B2 (ja) * | 2014-03-28 | 2018-05-16 | Keyence Corporation | Optical coordinate measuring device |
| JP6869023B2 (ja) * | 2015-12-30 | 2021-05-12 | Dassault Systèmes | 3D-to-2D re-imaging for search |
| US20180268614A1 (en) * | 2017-03-16 | 2018-09-20 | General Electric Company | Systems and methods for aligning pmi object on a model |
| JP7257752B2 (ja) * | 2018-07-31 | 2023-04-14 | Shimizu Corporation | Position detection system |
| JP7111297B2 (ja) * | 2018-11-26 | 2022-08-02 | Toyota Central R&D Labs., Inc. | Misalignment correction apparatus and program |
| KR20220039059A (ko) * | 2020-09-21 | 2022-03-29 | LG Electronics Inc. | Dishwasher and method of acquiring a three-dimensional image by the dishwasher |
| CN112083415B (zh) * | 2020-10-12 | 2021-10-29 | Jilin University | Millimeter-wave radar model target visibility determination method based on 3D information |
Family Cites Families (29)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH09178463A (ja) * | 1995-12-26 | 1997-07-11 | Nikon Corp | Multidimensional coordinate measuring machine |
| US6858826B2 (en) * | 1996-10-25 | 2005-02-22 | Waveworx Inc. | Method and apparatus for scanning three-dimensional objects |
| US6406292B1 (en) * | 1999-05-13 | 2002-06-18 | Align Technology, Inc. | System for determining final position of teeth |
| US7068825B2 (en) * | 1999-03-08 | 2006-06-27 | Orametrix, Inc. | Scanning system and calibration method for capturing precise three-dimensional information of objects |
| JP2001118082A (ja) * | 1999-10-15 | 2001-04-27 | Toshiba Corp | Drawing arithmetic processing apparatus |
| US6767208B2 (en) * | 2002-01-10 | 2004-07-27 | Align Technology, Inc. | System and method for positioning teeth |
| US7787692B2 (en) * | 2003-09-25 | 2010-08-31 | Fujifilm Corporation | Image processing apparatus, image processing method, shape diagnostic apparatus, shape diagnostic method and program |
| US7230620B2 (en) * | 2004-08-05 | 2007-06-12 | Mitsubishi Electric Research Laboratories, Inc. | Rendering deformable and animated surface reflectance fields |
| US20090123892A1 (en) * | 2004-09-24 | 2009-05-14 | Cat Corporation | Human Body Information Extraction Device, Human Body Imaging Information Reference Plane Conversion Method, and Cross Section Information Detection Device |
| GB0520829D0 (en) * | 2005-10-13 | 2005-11-23 | Univ Cambridge Tech | Image processing methods and apparatus |
| US7844356B2 (en) * | 2006-07-19 | 2010-11-30 | Align Technology, Inc. | System and method for automatic construction of orthodontic reference objects |
| US7813592B2 (en) * | 2006-08-09 | 2010-10-12 | Siemens Medical Solutions Usa, Inc. | System and method for non-rigid multi-modal registration on the GPU |
| JP4757142B2 (ja) * | 2006-08-10 | 2011-08-24 | Canon Inc. | Imaging environment calibration method and information processing apparatus |
| GB0615956D0 (en) * | 2006-08-11 | 2006-09-20 | Univ Heriot Watt | Optical imaging of physical objects |
| GB0707454D0 (en) * | 2007-04-18 | 2007-05-23 | Materialise Dental Nv | Computer-assisted creation of a custom tooth set-up using facial analysis |
| JP5120926B2 (ja) * | 2007-07-27 | 2013-01-16 | Techno Dream 21 Co., Ltd. | Image processing apparatus, image processing method, and program |
| GB2458927B (en) * | 2008-04-02 | 2012-11-14 | Eykona Technologies Ltd | 3D Imaging system |
| JP4435867B2 (ja) * | 2008-06-02 | 2010-03-24 | Panasonic Corporation | Image processing apparatus, method, computer program, and viewpoint-converted image generation apparatus for generating normal information |
| JP5317169B2 (ja) * | 2008-06-13 | 2013-10-16 | Hiroshi Kawasaki | Image processing apparatus, image processing method, and program |
| JP2010033298A (ja) * | 2008-07-28 | 2010-02-12 | Namco Bandai Games Inc | Program, information storage medium, and image generation system |
| TW201017578A (en) * | 2008-10-29 | 2010-05-01 | Chunghwa Picture Tubes Ltd | Method for rebuilding 3D surface model |
| US20100259746A1 (en) * | 2009-04-10 | 2010-10-14 | Omron Corporation | Profilometer |
| US8717578B2 (en) * | 2009-04-10 | 2014-05-06 | Omron Corporation | Profilometer, measuring apparatus, and observing apparatus |
| JP5430456B2 (ja) * | 2010-03-16 | 2014-02-26 | Canon Inc. | Geometric feature extraction apparatus, geometric feature extraction method, program, three-dimensional measurement apparatus, and object recognition apparatus |
| JP5170154B2 (ja) * | 2010-04-26 | 2013-03-27 | Omron Corporation | Shape measurement apparatus and calibration method |
| JP5343042B2 (ja) * | 2010-06-25 | 2013-11-13 | Topcon Corporation | Point cloud data processing apparatus and point cloud data processing program |
| EP2426612B1 (en) * | 2010-08-27 | 2019-03-13 | Dassault Systèmes | Watermarking of a 3D modeled object |
| US8818773B2 (en) * | 2010-10-25 | 2014-08-26 | Vistaprint Schweiz Gmbh | Embroidery image rendering using parametric texture mapping |
| US9056017B2 (en) * | 2012-03-08 | 2015-06-16 | Brett Kotlus | 3D design and fabrication system for implants |
- 2010
  - 2010-06-18 JP JP2010139947A patent/JP5615055B2/ja active Active
- 2011
  - 2011-06-09 WO PCT/JP2011/063755 patent/WO2011158886A1/en not_active Ceased
  - 2011-06-09 US US13/701,281 patent/US8971576B2/en active Active
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1574818A2 (en) * | 2004-03-09 | 2005-09-14 | General Electric Company | Non-contact measurement method and apparatus |
| WO2007129047A1 (en) * | 2006-05-04 | 2007-11-15 | Isis Innovation Limited | Scanner system and method for scanning |
| WO2008033329A2 (en) * | 2006-09-15 | 2008-03-20 | Sciammarella Cesar A | System and method for analyzing displacements and contouring of surfaces |
| JP2010139947A | 2008-12-15 | 2010-06-24 | Pioneer Electronic Corp | Image signal processing method and image signal processing apparatus |
Non-Patent Citations (2)
| Title |
|---|
| "Robust ICP Registration Algorithm Extended by M-estimation", KONDO, MIYAMOTO, KANEKO, IGARASHI, IEICE TECHNICAL REPORT, PATTERN RECOGNITION AND MEDIA UNDERSTANDING (PRMU, vol. 100, no. 507, 2001, pages 21 - 26 |
| P.J. BESL, N.D. MCKAY: "A method for registration of 3-D shapes", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, 1992, pages 239-256, XP001013705, DOI: 10.1109/34.121791 |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2012003638A (ja) | 2012-01-05 |
| US20130094706A1 (en) | 2013-04-18 |
| US8971576B2 (en) | 2015-03-03 |
| JP5615055B2 (ja) | 2014-10-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US8971576B2 (en) | Information processing apparatus and processing method thereof | |
| JP5618569B2 (ja) | Position and orientation estimation apparatus and method thereof | |
| US20210190497A1 (en) | Simultaneous location and mapping (slam) using dual event cameras | |
| JP5832341B2 (ja) | Video processing apparatus, video processing method, and video processing program | |
| JP5671281B2 (ja) | Position and orientation measurement apparatus, control method of position and orientation measurement apparatus, and program | |
| CN102472609B (zh) | Position and orientation calibration method and apparatus | |
| US20130230235A1 (en) | Information processing apparatus and information processing method | |
| JP6736257B2 (ja) | Information processing apparatus, information processing method, and program | |
| WO2011105615A1 (en) | Position and orientation measurement apparatus, position and orientation measurement method, and program | |
| KR20100104581A (ko) | Method and apparatus for estimating a position of a mobile robot | |
| JP6677522B2 (ja) | Information processing apparatus, control method of information processing apparatus, and program | |
| CN113012224B (zh) | Positioning initialization method and related apparatus, device, and storage medium | |
| US11571125B2 (en) | Line-of-sight measurement device | |
| KR102289688B1 (ko) | Method for estimating three-dimensional marker coordinates in an optical position tracking system | |
| EP3155369B1 (en) | System and method for measuring a displacement of a mobile platform | |
| JP5976089B2 (ja) | Position and orientation measurement apparatus, position and orientation measurement method, and program | |
| JP2008309595A (ja) | Object recognition apparatus and program used therefor | |
| JP2018041431A (ja) | Point cloud matching method considering correspondences, point cloud matching apparatus considering correspondences, and program | |
| CN112344966B (zh) | Positioning failure detection method and apparatus, storage medium, and electronic device | |
| CN119104050A (zh) | Carrier position determination method and apparatus, computer device, and storage medium | |
| JP3221384B2 (ja) | Three-dimensional coordinate measuring apparatus | |
| CN113504385A (zh) | Multi-camera speed measurement method and speed measurement apparatus | |
| JP5609667B2 (ja) | Motion estimation apparatus and program | |
| EP4597429A1 (en) | Information processing device, information processing method, and storage medium | |
| KR20250084833A (ko) | Walking robot and position estimation method of the walking robot |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11728430 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 13701281 Country of ref document: US |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 11728430 Country of ref document: EP Kind code of ref document: A1 |