WO2007015059A1 - Method and system for three-dimensional data capture - Google Patents

Method and system for three-dimensional data capture

Info

Publication number
WO2007015059A1
Authority
WO
WIPO (PCT)
Prior art keywords
shapes
array
projected
scene
training
Prior art date
Application number
PCT/GB2006/002715
Other languages
English (en)
Inventor
James Paterson
Andrew Fitzgibbon
Original Assignee
Isis Innovation Limited
Priority date
Filing date
Publication date
Application filed by Isis Innovation Limited filed Critical Isis Innovation Limited
Publication of WO2007015059A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/204 - Image signal generators using stereoscopic image cameras
    • H04N13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/10 - Image acquisition
    • G06V10/12 - Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 - Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/145 - Illumination specially adapted for pattern recognition, e.g. using gratings

Definitions

  • the present invention relates to a method and system for obtaining three-dimensional data relating to a physical scene.
  • Three-dimensional capture techniques are used to obtain three-dimensional data relating to a physical scene or object based on two-dimensional images of the scene or object. These techniques are becoming increasingly important in the field of computer graphics for applications such as virtual reality and the film industry. Three-dimensional capture techniques can be conveniently categorised into techniques designed for the reconstruction of static scenes, and those designed to capture moving objects (dynamic scenes). Furthermore, some systems provide near instantaneous (real-time) reconstruction for continual feedback, whilst others rely on offline processing of a captured image sequence.
  • multiple images of the scene must be captured under different illuminations.
  • the requirement for multiple images means that corresponding image points must be identified in each of the multiple images (i.e. the stereo correspondence problem).
  • the need for multiple images with different illuminations imposes the limitation that the scene must remain static or move very slowly during capture.
  • such techniques are usually only suitable for the reconstruction of static scenes.
  • the use of high resolution capture devices, such as current digital stills cameras, is precluded because of the long delay between exposures.
  • a method of obtaining three-dimensional data relating to a physical scene comprising (a) projecting a predetermined two-dimensional finite array of shapes onto the scene, the projected array having uniqueness properties in at least one dimension thereof; (b) capturing an image of the array projected onto the scene; (c) deriving correspondences between the shapes in the captured image to the finite array of projected shapes, based upon the uniqueness properties; and (d) obtaining three-dimensional data points from the correspondence between the projected array and the captured image array.
  • the method therefore enables three-dimensional data relating to the scene to be captured from just one two-dimensional image.
  • the method may be used to capture dynamic scenes since there is no need for camera synchronisation and the method is not limited by slow shutter speeds. Furthermore, the method requires only a single camera and projector so the system is relatively cheap as well as being easy to set up.
  • the step of projecting an array of shapes onto the scene means that data may be acquired from coloured objects.
  • a further benefit of the current invention is that high resolution data is acquired around the edges of each shape.
  • the projected array has uniqueness properties along mutually non-parallel lines. More advantageously, the mutually non-parallel lines are epipolar lines.
  • the present invention is able to powerfully disambiguate the correspondence problem (i.e. the array need only have uniqueness properties in one direction).
  • the method further comprises obtaining calibration data.
  • the calibration data comprises a fundamental matrix.
  • the step of obtaining calibration data further comprises resolving a projective ambiguity.
  • the step (c) further comprises detecting edges of the shapes from the captured image array. More advantageously, the step of detecting edges of the shapes comprises determining edgels corresponding to intensity gradients within the captured image. Still more advantageously, the step (c) further comprises representing the edges of the imaged shapes as shape vectors. More advantageously again, the step (c) further comprises classifying the shapes using the shape vectors.
  • the method further comprises projecting training arrays of shapes onto a training scene; capturing training images of the training arrays projected onto the training scene; and comparing the training images with the training arrays to obtain training data.
  • the step (c) further comprises classifying the shapes using the training data.
  • the step (c) further comprises rectifying the projected array and the captured image.
  • the step (c) further comprises grouping the imaged shapes according to respective lines of shapes in the projected image, the lines of projected shapes being oriented along lines having uniqueness properties. More advantageously, the method further comprises ordering the groups of shapes. Still more advantageously, the method further comprises aligning the ordered groups of imaged shapes with respective lines of projected shapes using the uniqueness properties. In a preferred embodiment, dynamic programming is used to align the ordered groups of imaged shapes with respective lines of projected shapes.
  • the three-dimensional data points are obtained by triangulation.
  • the method further comprises regularising the three-dimensional data points using a weak smoothness constraint.
  • the shapes are selected from the group consisting of circles, triangles, squares, diamonds and stars.
  • a system for obtaining three-dimensional data relating to a physical scene comprising a projector for projecting a predetermined two-dimensional finite array of shapes onto the scene, the projected array having uniqueness properties in at least one dimension thereof; a camera for capturing an image of the array projected onto the scene; and processing means for obtaining the three-dimensional data from the image.
  • Figure 1 is a camera-projector system according to one embodiment of the present invention
  • Figure 2 shows a flow chart describing a method of obtaining three-dimensional data relating to a physical scene in accordance with the present invention
  • Figure 3 shows a flow chart describing one of the steps in the method illustrated in Figure 2, in further detail;
  • Figure 4a shows an example of a captured image of an array of shapes projected by the projector of Figure 1
  • Figure 4b shows the image of Figure 4a, as captured by the camera of Figure 1, the image having been processed to show detected shape edges;
  • Figure 5 shows three examples of shape vectors obtained from captured images of shapes; and
  • Figure 6 shows a flow chart describing a further one of the steps in the method illustrated in Figure 2, in more detail.
  • Figure 1 shows a camera-projector system 10 for obtaining three-dimensional data relating to a three-dimensional physical scene 12, in accordance with an embodiment of the present invention.
  • a projector 14 projects a known array of shapes 16 onto the scene 12.
  • a camera 18 captures an image 20 of the array of shapes 16 projected onto the scene 12. Given the correspondence between the shapes in the projected array 16 and the shapes in the captured image 20, it is possible to obtain three-dimensional data relating to the scene 12.
  • the projector 14 and the camera 18 are directed towards the scene 12 at an acute angle relative to one another. In this embodiment, the projector 14 and the camera 18 are located in the same vertical plane. In an alternative embodiment, the camera 18 is displaced horizontally with respect to the projector 14.
  • Figure 2 shows a flow chart of the steps in the method of obtaining the three-dimensional data relating to the scene 12.
  • the camera-projector system 10 is calibrated to obtain calibration data concerning the positions and internal parameters of the projector 14 and camera 18.
  • the known array of shapes 16 is projected onto the scene 12 with the projector 14.
  • an image 20 of the projected array of shapes 16 is captured with the camera 18.
  • the shapes in the captured image 20 are classified into shape types (e.g. circle, triangle, etc.).
  • one-to-one shape correspondences are identified between shapes in the captured image 20 and the shapes in the projected array 16.
  • the one-to-one shape correspondences are used to obtain three-dimensional data relating to the physical scene 12.
  • the system is calibrated to determine calibration data comprising positions, orientations and internal parameters (e.g. focal length) of the projector 14 and the camera 18 as 3×4 projection matrices P_P and P_C respectively.
  • the calibration step 30 need only be performed once for a given set up of the camera-projector system 10.
  • the calibration data may be determined by computing a fundamental matrix F of the system and by then resolving a projective ambiguity which remains after computation of the fundamental matrix F.
  • the fundamental matrix F is computed by projecting a sequence of images in which only a single pixel is illuminated. This enables the camera 18 to capture a corresponding sequence of images of single scene points.
  • the coordinates of the captured image points are determined by image processing.
  • a linear constraint is thereby provided on the 3×3 fundamental matrix F.
  • Each captured image provides another such correspondence p ↔ c between a projector point p and a camera image point c, so that F may be determined from several such correspondences, as sketched below.
  • P_P and P_C are then determined from F up to a projective ambiguity, which comprises a choice of coordinates in projective space.
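By way of illustration, the sketch below shows how such pairwise constraints could be assembled numerically: a normalised eight-point estimate of F from projector-camera point correspondences, followed by the standard textbook decomposition of F into a projectively ambiguous camera pair (P_P = [I | 0], P_C = [[e']× F | e']). This is a hedged example of the well-known construction, not code from the patent; the function names and array layouts are assumptions.

```python
import numpy as np

def estimate_fundamental(p_proj, p_cam):
    """Normalised eight-point estimate of F from N >= 8 projector <-> camera
    correspondences, each given as an (N, 2) array of pixel coordinates."""
    def normalise(pts):
        # Translate the centroid to the origin, scale mean distance to sqrt(2).
        c = pts.mean(axis=0)
        s = np.sqrt(2.0) / np.linalg.norm(pts - c, axis=1).mean()
        T = np.array([[s, 0.0, -s * c[0]],
                      [0.0, s, -s * c[1]],
                      [0.0, 0.0, 1.0]])
        return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T

    x1, T1 = normalise(p_proj)
    x2, T2 = normalise(p_cam)
    # Each correspondence gives one linear constraint x2^T F x1 = 0.
    A = np.stack([np.outer(b, a).ravel() for a, b in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)              # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1                     # undo the normalisation

def cameras_from_F(F):
    """Canonical camera pair consistent with F, defined only up to the
    projective ambiguity discussed above."""
    P_proj = np.hstack([np.eye(3), np.zeros((3, 1))])
    U, _, _ = np.linalg.svd(F)
    e2 = U[:, -1]                            # left epipole: F^T e2 = 0
    e2x = np.array([[0.0, -e2[2], e2[1]],
                    [e2[2], 0.0, -e2[0]],
                    [-e2[1], e2[0], 0.0]])
    P_cam = np.hstack([e2x @ F, e2.reshape(3, 1)])
    return P_proj, P_cam
```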
  • the projective ambiguity may be understood by considering the projector 14 and the camera 18 together being moved relative to the scene 12 such that their projective relationship remains the same while their positions and orientations are altered in real space.
  • the projective ambiguity is resolved by locating particular scene points in real space.
  • the projective ambiguity may be resolved by imaging a scene consisting of a calibration object having identifiable features across three axes, e.g. a set of planar targets with corners at known positions.
  • a two-dimensional homography H is first derived mapping from the imaged plane of the target to the projector view.
  • Using F, the three-dimensional locations of, for example, the imaged corners of the planes are resolved up to the unknown projective ambiguity.
  • a three-dimensional homography H_SPACE is then computed between the projectively ambiguous space and the known coordinate frame of the calibration object, which provides a general mapping from ambiguous space to the real space of the planar targets.
  • a known array of shapes 16 is projected onto the scene 12 at 32, and an image 20 of the array 16 is captured by the camera 18 at 34.
  • the known array of shapes 16 comprises a predetermined two-dimensional pattern of a finite array of shapes, the array having uniqueness properties along the epipolar lines of the camera projector system.
  • the epipolar lines are approximately vertical so that the array of shapes 16 has column uniqueness properties.
  • the camera and projector are located in the same horizontal plane and the epipolar lines of the system are approximately horizontal so that the array of shapes 16 has row uniqueness properties. Further alternative arrangements are also possible.
  • the shapes in the finite array are selected from a finite number of shape types.
  • the projected shapes are sufficiently distinct that they can be distinguished under moderate to severe distortion.
  • the shapes are simple enough that blurring during image capture does not disguise small details of the shapes.
  • the shape types are circles, diamonds and triangles.
  • squares and stars may additionally be used. It will be appreciated that other shape types are also possible. For example, other geometric shapes may be used, or the finite array of shapes may be made up from letters and/or numbers.
  • the shapes in the image 20 are classified into shape types at 36.
  • the step 36 of classifying the shapes in the image 20 into shape types is shown in more detail in Figure 3.
  • the edges of the imaged shapes are detected.
  • the edges of the imaged shapes are represented as shape vectors .
  • the shape vectors are classified into shape types using training data obtained at 56. It will be appreciated that the steps shown in Figure 3 provide one possible method of classifying the shapes into shape types. In alternative embodiments, morphological operators or patch comparison may be used to detect and vectorise the imaged shapes. Nonetheless, the steps shown in Figure 3 are described in more detail below.
  • the image processing to detect edges of the imaged shapes at 50 is invariant to variations caused by surface reflectance changes and allows accurate localisation of the shape boundaries.
  • One particular edge detection method is described below; however, it will be appreciated that other known edge detection methods could be used in alternative embodiments.
  • a local implementation of the Canny operator is applied to the captured image 20 to determine edgels corresponding to intensity gradients within the captured image 20.
  • the edgels comprise sub-pixel accurate positions, directions and magnitudes.
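The following sketch indicates one way such edgels could be extracted with standard tools. It is an assumption layered on the description above: OpenCV's Canny operator localises edges only to the pixel grid, so the sub-pixel refinement mentioned here is noted but omitted, and the two thresholds are arbitrary.

```python
import cv2
import numpy as np

def detect_edgels(image_bgr):
    """Illustrative edgel extraction: Canny localises edge pixels, Sobel
    gradients supply a per-edgel direction and magnitude. Sub-pixel
    localisation (e.g. parabolic fitting along the gradient) is omitted."""
    grey = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(grey, 50, 150)                 # thresholds are assumptions
    gx = cv2.Sobel(grey, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(grey, cv2.CV_64F, 0, 1, ksize=3)
    ys, xs = np.nonzero(edges)
    magnitude = np.hypot(gx[ys, xs], gy[ys, xs])
    direction = np.arctan2(gy[ys, xs], gx[ys, xs])
    # One row per edgel: x, y, gradient direction (radians), magnitude.
    return np.column_stack([xs, ys, direction, magnitude])
```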
  • an array of solid white shapes on a black background is projected to give maximum contrast between the shapes and the background.
  • an array of solid black shapes on a white background may be used. The projection of a black and white array of shapes ensures that the method may be used to obtain three-dimensional data relating to a coloured scene.
  • the output is an unconnected, non-ordered list of edgels.
  • the next step is to link the edgels into groups corresponding to individual shapes in the captured image 20. Linking edgels into shape edges is a common process which will not be described here, but about 95% of shapes are cleanly detected on average.
  • Figure 4a is an image 60 of a projected array of shapes
  • Figure 4b is a corresponding processed image 62 showing detected shape edges.
  • the physical scene is a human face.
  • Many image shapes 64 are cleanly detected, other image shapes 66 merge over depth discontinuities, and an image shape 68 near an eyebrow is not detected due to the noisy three-dimensional surface of the eyebrow.
  • Figure 5 illustrates the step 52 for three example imaged shapes 70, 72 and 74.
  • the groups of edgels corresponding to the individual shapes 70, 72 and 74 are fitted to ellipses which are then transformed into transformed ellipses 76, 78 and 80 respectively by mapping to the unit circle 82.
  • the transformed two-dimensional locations of the points on the transformed ellipses 76, 78 and 80 are then converted to polar coordinates (r, θ).
  • the right hand column of Figure 5 shows graphs 84, 86 and 88 of r against θ for the three transformed ellipses 76, 78 and 80 respectively.
  • the graphs of r are then sampled at regular intervals in θ to obtain D-dimensional shape vectors, as sketched below.
  • Values of D from 10 to 36 provide reasonable results, but it will be appreciated that other values of D are also possible.
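A compact sketch of this descriptor is given below. Rather than an explicit conic fit, it whitens the edgel cloud so that the best-fit ellipse maps approximately onto the unit circle, then samples the radius at D regular angular intervals; this substitution, the binning strategy, and D = 16 are all assumptions made for brevity.

```python
import numpy as np

def shape_vector(edgel_xy, D=16):
    """Hedged sketch of the D-dimensional shape vector: whiten the 2D
    edgel cloud (mapping its best-fit ellipse towards the unit circle),
    convert to polar coordinates (r, theta), and sample r at D regular
    intervals in theta."""
    pts = edgel_xy - edgel_xy.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(pts.T))
    # Points on a unit circle have covariance 0.5*I, hence the factor 2.
    W = evecs @ np.diag(1.0 / np.sqrt(2.0 * evals + 1e-12)) @ evecs.T
    circ = pts @ W.T
    r = np.linalg.norm(circ, axis=1)
    theta = np.arctan2(circ[:, 1], circ[:, 0])
    bins = ((theta + np.pi) / (2.0 * np.pi) * D).astype(int) % D
    vec = np.ones(D)                  # empty bins default to the unit circle
    for k in range(D):
        sel = bins == k
        if sel.any():
            vec[k] = r[sel].mean()
    return vec
```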
  • the shape vectors are classified into shape types using training data obtained at 56.
  • the training data are obtained by using the projector 14 to project training arrays of shapes onto a training scene.
  • the training arrays may comprise projected arrays of shapes of a single shape type.
  • three training arrays are used: a first training array comprising only circles, a second training array comprising only triangles, and a third training array comprising only diamonds.
  • the camera 18 is used to capture training images of the training arrays projected onto the training scene.
  • Shape vectors are calculated for the imaged training shapes in the same way as described above.
  • Image shape vectors are classified into shape types using a nearest-neighbour classifier based on the shape vectors for the training images.
  • shapes are labelled with their shape type if a consensus among five nearest neighbours can be reached, or as "unknown" otherwise. Typically, about 95% of imaged shapes are correctly identified in this way.
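As a sketch, the consensus rule could look like the snippet below, reading "consensus" as unanimity among the five nearest training vectors; a majority vote is an equally plausible reading, so treat the rule, the Euclidean metric, and the names as assumptions.

```python
import numpy as np

def classify_shape(vec, train_vecs, train_labels, k=5):
    """Label a shape vector by k-nearest-neighbour consensus against the
    training shape vectors, returning "unknown" if no consensus exists."""
    dists = np.linalg.norm(train_vecs - vec, axis=1)
    nearest = [train_labels[i] for i in np.argsort(dists)[:k]]
    return nearest[0] if all(l == nearest[0] for l in nearest) else "unknown"
```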
  • one-to-one correspondences are identified between shapes in the captured image 20 and shapes in the projected array 16 at 38.
  • the step 38 of identifying one-to-one shape correspondences between the captured image 20 and the projected array 16 is shown in more detail in Figure 6.
  • the projected array 16 and the captured image 20 are rectified.
  • imaged shapes are grouped according to respective columns of the projected image.
  • the groups of shapes are ordered into lists.
  • the ordered lists of shapes are aligned with the known columns of the array of projected shapes 16.
  • one-to-one shape correspondences are identified between the projected array 16 and the captured image 20.
  • the calibrated fundamental matrix F is used to rectify the projected array 16 and the captured image 20 so that the columns of the projected array 16 fall along epipolar lines of the camera-projector system 10.
  • Epipolar lines corresponding to the horizontally central points of the columns in the projected array 16 are then identified.
  • Shapes are then assigned to columns at 92 by scanning along the central column epipolar lines and finding shapes which intersect the scan lines. Shapes are then ordered along the scan lines by sorting on the vertical position of the shapes' centroids. Shapes are therefore able to be ordered along scan lines at 94 using only the column uniqueness properties of the array 16, as sketched below.
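In rectified coordinates the column scan lines are near-vertical, so steps 92 and 94 reduce to simple geometry. The sketch below is one hedged realisation: each imaged shape is gated to the nearest column scan line by its centroid's x coordinate, and each column is then sorted by centroid y; the gating threshold is an assumption.

```python
import numpy as np

def group_and_order(centroids, column_x, max_dist=8.0):
    """Assign each imaged shape (centroids: an (N, 2) array of (x, y))
    to the nearest column scan line in column_x, then order each column
    top-to-bottom. max_dist is an assumed gating threshold in pixels."""
    column_x = np.asarray(column_x, dtype=float)
    columns = {j: [] for j in range(len(column_x))}
    for idx, (x, y) in enumerate(centroids):
        j = int(np.argmin(np.abs(column_x - x)))
        if abs(column_x[j] - x) <= max_dist:
            columns[j].append((y, idx))
    # Sort each column by vertical position; keep only the shape indices.
    return {j: [idx for _, idx in sorted(col)] for j, col in columns.items()}
```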
  • the method powerfully disambiguates the correspondence problem because the array of shapes 16 only requires uniqueness properties in one dimension thereof. Furthermore, the requirement for uniqueness properties in only one dimension considerably simplifies the construction of the projected array of shapes 16. A small percentage of imaged shapes may have been misclassified, and some imaged shapes may have been classified as "unknown". In addition, occlusion and/or the limitations of the shape classification step 36 may lead to missing shapes in the image 20. The extraction of shapes along scan lines is therefore usually imperfect. Accordingly, at 96, the ordered lists of imaged shapes are aligned with known lists of shapes corresponding to the columns of the array of projected shapes 16.
  • the alignment optimisation problem is well known.
  • the alignment is optimised using a dynamic programming technique.
  • This dynamic programming technique does not form a part of the present invention and will therefore be described only briefly here.
  • the DP problem is visualised in this application via a 2D graph structure, with the list of observed shapes on one axis and the known projected shapes on the other. A correspondence between an observed and a known shape type provides a point on the graph, and typically the point is assigned a score indicating the confidence of the match; the DP task is thus to find the highest scoring path from the lower-left to the upper-right of the graph.
  • the dynamic programming technique is enhanced using the uniqueness properties of the projected array 16.
  • the uniqueness properties mean that aligned chains of adjacent shapes become less likely to occur randomly as the length of the chain increases. Therefore, the dynamic programming algorithm is written so as to be biased towards longer aligned chains of shapes. In this embodiment, the algorithm is biased towards aligned chains of length greater than two.
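A toy version of such a chain-biased alignment is sketched below in the style of classical sequence alignment. The scoring constants, the handling of "unknown" labels (never scored as a match), and the omission of the traceback are all simplifications; only the bonus for chains longer than two reflects the bias described above.

```python
def align_score(observed, column):
    """Toy dynamic-programming alignment of an ordered list of observed
    shape types against a known projected column. A bonus rewards
    extending an aligned chain beyond length two; the traceback that
    recovers the one-to-one assignments is omitted for brevity."""
    MATCH, MISMATCH, GAP, CHAIN_BONUS = 2, -2, -1, 1     # assumed scores
    n, m = len(observed), len(column)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    run = [[0] * (m + 1) for _ in range(n + 1)]          # current chain length
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if observed[i - 1] == column[j - 1] and observed[i - 1] != "unknown":
                diag = score[i - 1][j - 1] + MATCH \
                    + (CHAIN_BONUS if run[i - 1][j - 1] >= 2 else 0)
                chain = run[i - 1][j - 1] + 1
            else:
                diag = score[i - 1][j - 1] + MISMATCH
                chain = 0
            best = max(diag, score[i - 1][j] + GAP, score[i][j - 1] + GAP)
            score[i][j] = best
            run[i][j] = chain if best == diag else 0
    return score[n][m]
```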
  • the output of the dynamic programming algorithm is an optimised set of one-to-one correspondences between shapes in the projected array 16 and shapes in the captured image 20.
  • the set of one-to-one shape correspondences is used to obtain three-dimensional data relating to the physical scene 12. This is done by using the one-to-one shape correspondences to obtain one-to-one point correspondences between points in the projected array 16 and points in the captured image 20. Due to the rectification step 90, the epipolar lines intersect the projected shapes at only two points per shape. Similarly, the epipolar lines intersect the imaged shapes at only two points per shape.
  • the epipolar lines which intersect the projected and imaged shapes are densely sampled to obtain two point correspondences per shape per epipolar line.
  • the point correspondences are then triangulated using the projection matrices P_P and P_C to obtain a dense three-dimensional representation of the shape boundaries up to the projective ambiguity, as sketched below.
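The triangulation itself can be written as the standard linear (DLT) construction; the sketch below assumes P_P and P_C are available as 3×4 NumPy arrays and triangulates one point correspondence at a time.

```python
import numpy as np

def triangulate(P_proj, P_cam, x_proj, x_cam):
    """Linear (DLT) triangulation of one point correspondence given the
    3x4 projection matrices. The result remains subject to the projective
    ambiguity until the homography H_SPACE is applied (see below)."""
    A = np.stack([x_proj[0] * P_proj[2] - P_proj[0],
                  x_proj[1] * P_proj[2] - P_proj[1],
                  x_cam[0] * P_cam[2] - P_cam[0],
                  x_cam[1] * P_cam[2] - P_cam[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]          # homogeneous 3D point, scaled so w = 1
```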
  • the three-dimensional representation may be transformed from ambiguous space coordinates to real space coordinates by multiplying by the three-dimensional homography H_SPACE.
  • the method enables high resolution three-dimensional data to be obtained from a single two-dimensional image.
  • the surface of the scene visible to the camera-projector system is known within the constraints of imaging noise. Some noise may be removed by regularising the three-dimensional data using a weak smoothness constraint. Alternatively, the smoothness constraint need not be applied.
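One simple way to realise a weak smoothness constraint, offered purely as an assumption rather than the patent's stated method, is a few damped averaging iterations over a depth map:

```python
import numpy as np

def regularise_depth(depth, weight=0.1, iterations=20):
    """Damped Jacobi-style smoothing: each sample is pulled slightly
    towards the mean of its four neighbours, leaving the data largely
    intact. np.roll wraps at the borders; a real implementation would
    handle edges and missing samples explicitly."""
    z = np.asarray(depth, dtype=float).copy()
    for _ in range(iterations):
        nb = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
              np.roll(z, 1, 1) + np.roll(z, -1, 1)) / 4.0
        z = (1.0 - weight) * z + weight * nb
    return z
```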
  • shape-by-shape triangulation is easily performed by connecting three-dimensional data only within shapes, giving a partial surface suitable for rendering on PC graphics hardware.
  • An alternative is to triangulate the complete point set, in which case a complete surface manifold is obtained. In either case, the data can be considered as a height map or as a range/depth image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A method of obtaining three-dimensional data relating to a physical scene is described, comprising (a) projecting a predetermined two-dimensional finite array of shapes onto the scene, the projected array having uniqueness properties in at least one dimension thereof; (b) capturing an image of the array projected onto the scene; (c) deriving correspondences between the shapes in the captured image and the finite array of projected shapes, based upon the uniqueness properties; and (d) obtaining three-dimensional data points from the correspondence between the projected array and the captured image array. A system for obtaining the three-dimensional data relating to a physical scene is also described.
PCT/GB2006/002715 2005-08-02 2006-07-20 Method and system for three-dimensional data capture WO2007015059A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0515915.7 2005-08-02
GBGB0515915.7A GB0515915D0 (en) 2005-08-02 2005-08-02 Method and system for three-dimensional data capture

Publications (1)

Publication Number Publication Date
WO2007015059A1 true WO2007015059A1 (fr) 2007-02-08

Family

ID=34983971

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2006/002715 WO2007015059A1 (fr) 2005-08-02 2006-07-20 Method and system for three-dimensional data capture

Country Status (2)

Country Link
GB (1) GB0515915D0 (fr)
WO (1) WO2007015059A1 (fr)

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013076583A3 (fr) 2011-11-25 2013-12-27 Universite De Strasbourg Active vision method for a stereo imaging system and corresponding system
EP2779027A1 (fr) 2013-03-13 2014-09-17 Intermec IP Corp. Systems and methods for enhancing dimensioning, for example volume dimensioning
US8988590B2 (en) 2011-03-28 2015-03-24 Intermec Ip Corp. Two-dimensional imager with solid-state auto-focus
US9007368B2 (en) 2012-05-07 2015-04-14 Intermec Ip Corp. Dimensioning system calibration systems and methods
US9239950B2 (en) 2013-07-01 2016-01-19 Hand Held Products, Inc. Dimensioning system
US9464885B2 (en) 2013-08-30 2016-10-11 Hand Held Products, Inc. System and method for package dimensioning
US9557166B2 (en) 2014-10-21 2017-01-31 Hand Held Products, Inc. Dimensioning system with multipath interference mitigation
US9752864B2 (en) 2014-10-21 2017-09-05 Hand Held Products, Inc. Handheld dimensioning system with feedback
US9762793B2 (en) 2014-10-21 2017-09-12 Hand Held Products, Inc. System and method for dimensioning
US9779546B2 (en) 2012-05-04 2017-10-03 Intermec Ip Corp. Volume dimensioning systems and methods
US9779276B2 (en) 2014-10-10 2017-10-03 Hand Held Products, Inc. Depth sensor based auto-focus system for an indicia scanner
US9786101B2 (en) 2015-05-19 2017-10-10 Hand Held Products, Inc. Evaluating image values
US9823059B2 (en) 2014-08-06 2017-11-21 Hand Held Products, Inc. Dimensioning system with guided alignment
US9835486B2 (en) 2015-07-07 2017-12-05 Hand Held Products, Inc. Mobile dimensioner apparatus for use in commerce
US9841311B2 (en) 2012-10-16 2017-12-12 Hand Held Products, Inc. Dimensioning system
US9857167B2 (en) 2015-06-23 2018-01-02 Hand Held Products, Inc. Dual-projector three-dimensional scanner
US9897434B2 (en) 2014-10-21 2018-02-20 Hand Held Products, Inc. Handheld dimensioning system with measurement-conformance feedback
US9940721B2 (en) 2016-06-10 2018-04-10 Hand Held Products, Inc. Scene change detection in a dimensioner
US9939259B2 (en) 2012-10-04 2018-04-10 Hand Held Products, Inc. Measuring object dimensions using mobile computer
US10007858B2 (en) 2012-05-15 2018-06-26 Honeywell International Inc. Terminals and methods for dimensioning objects
US10025314B2 (en) 2016-01-27 2018-07-17 Hand Held Products, Inc. Vehicle positioning and object avoidance
US10060729B2 (en) 2014-10-21 2018-08-28 Hand Held Products, Inc. Handheld dimensioner with data-quality indication
US10066982B2 (en) 2015-06-16 2018-09-04 Hand Held Products, Inc. Calibrating a volume dimensioner
US10094650B2 (en) 2015-07-16 2018-10-09 Hand Held Products, Inc. Dimensioning and imaging items
US10134120B2 (en) 2014-10-10 2018-11-20 Hand Held Products, Inc. Image-stitching for dimensioning
US10140724B2 (en) 2009-01-12 2018-11-27 Intermec Ip Corporation Semi-automatic dimensioning with imager on a portable device
US10163216B2 (en) 2016-06-15 2018-12-25 Hand Held Products, Inc. Automatic mode switching in a volume dimensioner
US10203402B2 (en) 2013-06-07 2019-02-12 Hand Held Products, Inc. Method of error correction for 3D imaging device
US10225544B2 (en) 2015-11-19 2019-03-05 Hand Held Products, Inc. High resolution dot pattern
US10249030B2 (en) 2015-10-30 2019-04-02 Hand Held Products, Inc. Image transformation for indicia reading
US10247547B2 (en) 2015-06-23 2019-04-02 Hand Held Products, Inc. Optical pattern projector
US10321127B2 (en) 2012-08-20 2019-06-11 Intermec Ip Corp. Volume dimensioning system calibration systems and methods
US10339352B2 (en) 2016-06-03 2019-07-02 Hand Held Products, Inc. Wearable metrological apparatus
US10393506B2 (en) 2015-07-15 2019-08-27 Hand Held Products, Inc. Method for a mobile dimensioning device to use a dynamic accuracy compatible with NIST standard
US10584962B2 (en) 2018-05-01 2020-03-10 Hand Held Products, Inc System and method for validating physical-item security
US10733748B2 (en) 2017-07-24 2020-08-04 Hand Held Products, Inc. Dual-pattern optical 3D dimensioning
US10775165B2 (en) 2014-10-10 2020-09-15 Hand Held Products, Inc. Methods for improving the accuracy of dimensioning-system measurements
US10909708B2 (en) 2016-12-09 2021-02-02 Hand Held Products, Inc. Calibrating a dimensioner using ratios of measurable parameters of optically-perceptible geometric elements
US11029762B2 (en) 2015-07-16 2021-06-08 Hand Held Products, Inc. Adjusting dimensioning results using augmented reality
US11047672B2 (en) 2017-03-28 2021-06-29 Hand Held Products, Inc. System for optically dimensioning
US11639846B2 (en) 2019-09-27 2023-05-02 Honeywell International Inc. Dual-pattern optical 3D dimensioning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000000926A1 (fr) * 1998-06-30 2000-01-06 Intel Corporation Method and apparatus for capturing stereoscopic images using image sensors
US20030110610A1 (en) * 2001-11-13 2003-06-19 Duquette David W. Pick and place machine with component placement inspection

Cited By (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10845184B2 (en) 2009-01-12 2020-11-24 Intermec Ip Corporation Semi-automatic dimensioning with imager on a portable device
US10140724B2 (en) 2009-01-12 2018-11-27 Intermec Ip Corporation Semi-automatic dimensioning with imager on a portable device
US8988590B2 (en) 2011-03-28 2015-03-24 Intermec Ip Corp. Two-dimensional imager with solid-state auto-focus
US9253393B2 (en) 2011-03-28 2016-02-02 Intermec Ip, Corp. Two-dimensional imager with solid-state auto-focus
WO2013076583A3 (fr) 2011-11-25 2013-12-27 Universite De Strasbourg Active vision method for a stereo imaging system and corresponding system
US10467806B2 (en) 2012-05-04 2019-11-05 Intermec Ip Corp. Volume dimensioning systems and methods
US9779546B2 (en) 2012-05-04 2017-10-03 Intermec Ip Corp. Volume dimensioning systems and methods
US9007368B2 (en) 2012-05-07 2015-04-14 Intermec Ip Corp. Dimensioning system calibration systems and methods
US9292969B2 (en) 2012-05-07 2016-03-22 Intermec Ip Corp. Dimensioning system calibration systems and methods
US10007858B2 (en) 2012-05-15 2018-06-26 Honeywell International Inc. Terminals and methods for dimensioning objects
US10635922B2 (en) 2012-05-15 2020-04-28 Hand Held Products, Inc. Terminals and methods for dimensioning objects
US10321127B2 (en) 2012-08-20 2019-06-11 Intermec Ip Corp. Volume dimensioning system calibration systems and methods
US10805603B2 (en) 2012-08-20 2020-10-13 Intermec Ip Corp. Volume dimensioning system calibration systems and methods
US9939259B2 (en) 2012-10-04 2018-04-10 Hand Held Products, Inc. Measuring object dimensions using mobile computer
US10908013B2 (en) 2012-10-16 2021-02-02 Hand Held Products, Inc. Dimensioning system
US9841311B2 (en) 2012-10-16 2017-12-12 Hand Held Products, Inc. Dimensioning system
US9784566B2 (en) 2013-03-13 2017-10-10 Intermec Ip Corp. Systems and methods for enhancing dimensioning
EP2966595A1 (fr) 2013-03-13 2016-01-13 Intermec IP Corp. Systems and methods for enhancing dimensioning, for example volume dimensioning
US9080856B2 (en) 2013-03-13 2015-07-14 Intermec Ip Corp. Systems and methods for enhancing dimensioning, for example volume dimensioning
EP2779027A1 (fr) 2013-03-13 2014-09-17 Intermec IP Corp. Systems and methods for enhancing dimensioning, for example volume dimensioning
US10228452B2 (en) 2013-06-07 2019-03-12 Hand Held Products, Inc. Method of error correction for 3D imaging device
US10203402B2 (en) 2013-06-07 2019-02-12 Hand Held Products, Inc. Method of error correction for 3D imaging device
US9239950B2 (en) 2013-07-01 2016-01-19 Hand Held Products, Inc. Dimensioning system
US9464885B2 (en) 2013-08-30 2016-10-11 Hand Held Products, Inc. System and method for package dimensioning
US9823059B2 (en) 2014-08-06 2017-11-21 Hand Held Products, Inc. Dimensioning system with guided alignment
US10240914B2 (en) 2014-08-06 2019-03-26 Hand Held Products, Inc. Dimensioning system with guided alignment
US10859375B2 (en) 2014-10-10 2020-12-08 Hand Held Products, Inc. Methods for improving the accuracy of dimensioning-system measurements
US10402956B2 (en) 2014-10-10 2019-09-03 Hand Held Products, Inc. Image-stitching for dimensioning
US10775165B2 (en) 2014-10-10 2020-09-15 Hand Held Products, Inc. Methods for improving the accuracy of dimensioning-system measurements
US10810715B2 (en) 2014-10-10 2020-10-20 Hand Held Products, Inc System and method for picking validation
US10121039B2 (en) 2014-10-10 2018-11-06 Hand Held Products, Inc. Depth sensor based auto-focus system for an indicia scanner
US10134120B2 (en) 2014-10-10 2018-11-20 Hand Held Products, Inc. Image-stitching for dimensioning
US9779276B2 (en) 2014-10-10 2017-10-03 Hand Held Products, Inc. Depth sensor based auto-focus system for an indicia scanner
US9752864B2 (en) 2014-10-21 2017-09-05 Hand Held Products, Inc. Handheld dimensioning system with feedback
US10218964B2 (en) 2014-10-21 2019-02-26 Hand Held Products, Inc. Dimensioning system with feedback
US9897434B2 (en) 2014-10-21 2018-02-20 Hand Held Products, Inc. Handheld dimensioning system with measurement-conformance feedback
US9762793B2 (en) 2014-10-21 2017-09-12 Hand Held Products, Inc. System and method for dimensioning
US10393508B2 (en) 2014-10-21 2019-08-27 Hand Held Products, Inc. Handheld dimensioning system with measurement-conformance feedback
US9557166B2 (en) 2014-10-21 2017-01-31 Hand Held Products, Inc. Dimensioning system with multipath interference mitigation
US10060729B2 (en) 2014-10-21 2018-08-28 Hand Held Products, Inc. Handheld dimensioner with data-quality indication
US9786101B2 (en) 2015-05-19 2017-10-10 Hand Held Products, Inc. Evaluating image values
US10593130B2 (en) 2015-05-19 2020-03-17 Hand Held Products, Inc. Evaluating image values
US11906280B2 (en) 2015-05-19 2024-02-20 Hand Held Products, Inc. Evaluating image values
US11403887B2 (en) 2015-05-19 2022-08-02 Hand Held Products, Inc. Evaluating image values
US10066982B2 (en) 2015-06-16 2018-09-04 Hand Held Products, Inc. Calibrating a volume dimensioner
US10247547B2 (en) 2015-06-23 2019-04-02 Hand Held Products, Inc. Optical pattern projector
US9857167B2 (en) 2015-06-23 2018-01-02 Hand Held Products, Inc. Dual-projector three-dimensional scanner
US10612958B2 (en) 2015-07-07 2020-04-07 Hand Held Products, Inc. Mobile dimensioner apparatus to mitigate unfair charging practices in commerce
US9835486B2 (en) 2015-07-07 2017-12-05 Hand Held Products, Inc. Mobile dimensioner apparatus for use in commerce
US11353319B2 (en) 2015-07-15 2022-06-07 Hand Held Products, Inc. Method for a mobile dimensioning device to use a dynamic accuracy compatible with NIST standard
US10393506B2 (en) 2015-07-15 2019-08-27 Hand Held Products, Inc. Method for a mobile dimensioning device to use a dynamic accuracy compatible with NIST standard
US11029762B2 (en) 2015-07-16 2021-06-08 Hand Held Products, Inc. Adjusting dimensioning results using augmented reality
US10094650B2 (en) 2015-07-16 2018-10-09 Hand Held Products, Inc. Dimensioning and imaging items
US10249030B2 (en) 2015-10-30 2019-04-02 Hand Held Products, Inc. Image transformation for indicia reading
US10225544B2 (en) 2015-11-19 2019-03-05 Hand Held Products, Inc. High resolution dot pattern
US10747227B2 (en) 2016-01-27 2020-08-18 Hand Held Products, Inc. Vehicle positioning and object avoidance
US10025314B2 (en) 2016-01-27 2018-07-17 Hand Held Products, Inc. Vehicle positioning and object avoidance
US10872214B2 (en) 2016-06-03 2020-12-22 Hand Held Products, Inc. Wearable metrological apparatus
US10339352B2 (en) 2016-06-03 2019-07-02 Hand Held Products, Inc. Wearable metrological apparatus
US9940721B2 (en) 2016-06-10 2018-04-10 Hand Held Products, Inc. Scene change detection in a dimensioner
US10417769B2 (en) 2016-06-15 2019-09-17 Hand Held Products, Inc. Automatic mode switching in a volume dimensioner
US10163216B2 (en) 2016-06-15 2018-12-25 Hand Held Products, Inc. Automatic mode switching in a volume dimensioner
US10909708B2 (en) 2016-12-09 2021-02-02 Hand Held Products, Inc. Calibrating a dimensioner using ratios of measurable parameters of optically-perceptible geometric elements
US11047672B2 (en) 2017-03-28 2021-06-29 Hand Held Products, Inc. System for optically dimensioning
US10733748B2 (en) 2017-07-24 2020-08-04 Hand Held Products, Inc. Dual-pattern optical 3D dimensioning
US10584962B2 (en) 2018-05-01 2020-03-10 Hand Held Products, Inc System and method for validating physical-item security
US11639846B2 (en) 2019-09-27 2023-05-02 Honeywell International Inc. Dual-pattern optical 3D dimensioning

Also Published As

Publication number Publication date
GB0515915D0 (en) 2005-09-07

Similar Documents

Publication Publication Date Title
WO2007015059A1 (fr) Method and system for three-dimensional data capture
US10902668B2 (en) 3D geometric modeling and 3D video content creation
CN106548489B (zh) Registration method for a depth image and a colour image, and three-dimensional image acquisition device
US7103212B2 (en) Acquisition of three-dimensional images by an active stereo technique using locally unique patterns
EP1649423B1 (fr) Method and system for the three-dimensional surface reconstruction of an object
US20130106833A1 (en) Method and apparatus for optical tracking of 3d pose using complex markers
JP6596433B2 (ja) Structured light matching of a set of curves from two cameras
WO2004044522A1 (fr) Method and device for measuring three-dimensional shape
CN113505626B (zh) Fast three-dimensional fingerprint acquisition method and system
Tabata et al. High-speed 3D sensing with three-view geometry using a segmented pattern
US20220092345A1 (en) Detecting displacements and/or defects in a point cloud using cluster-based cloud-to-cloud comparison
Wenzel et al. High-resolution surface reconstruction from imagery for close range cultural Heritage applications
Maurice et al. Epipolar based structured light pattern design for 3-d reconstruction of moving surfaces
JP2004077290A (ja) Three-dimensional shape measuring apparatus and method
Li et al. A camera on-line recalibration framework using SIFT
KR100872103B1 (ko) Method and apparatus for determining the angular pose of an object
JPH0814858A (ja) Three-dimensional object data acquisition apparatus
Adán et al. Disordered patterns projection for 3D motion recovering
Vuori et al. Three-dimensional imaging system with structured lighting and practical constraints
JP2006058092A (ja) Three-dimensional shape measuring apparatus and method
JP2005292027A (ja) Three-dimensional shape measurement and reconstruction processing apparatus and method
JP2916319B2 (ja) Three-dimensional shape measuring apparatus
JP6837880B2 (ja) Image processing apparatus, image processing system, image processing method, and program
Matabosch et al. A refined range image registration technique for multi-stripe laser scanner
Katai-Urban et al. Stereo Reconstruction of Atmospheric Cloud Surfaces from Fish-Eye Camera Images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06765045

Country of ref document: EP

Kind code of ref document: A1