EP1194881A1 - Euclidean reconstruction of a three-dimensional scene from two-dimensional images following a non-rigid transformation - Google Patents

Euclidean reconstruction of a three-dimensional scene from two-dimensional images following a non-rigid transformation

Info

Publication number
EP1194881A1
Authority
EP
European Patent Office
Prior art keywords
transformation
image
post
images
image set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP99950370A
Other languages
German (de)
English (en)
Other versions
EP1194881A4 (fr)
Inventor
Dan Albeck
Amnon Shashua
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yissum Research Development Co of Hebrew University of Jerusalem
Hexagon Metrology Israel Ltd
Original Assignee
Yissum Research Development Co of Hebrew University of Jerusalem
Cognitens Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yissum Research Development Co of Hebrew University of Jerusalem, Cognitens Ltd filed Critical Yissum Research Development Co of Hebrew University of Jerusalem
Publication of EP1194881A1
Publication of EP1194881A4

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/55: Depth or shape recovery from multiple images

Definitions

  • the invention relates generally to the fields of photogrammetry and image processing, and more particularly to systems and methods for generating reconstructions of surface elements of three-dimensional objects in a scene from a set of two-dimensional images of the scene.
  • the invention specifically provides an arrangement for facilitating such reconstruction using images recorded by, for example, a rig supporting a set of optical sensors, such as cameras, which record the scene, following a non-rigid transformation of the rig after the rig has been calibrated.
  • Reconstruction of surface features of three-dimensional objects in a scene from a set of two-dimensional images of the scene has been the subject of research since the late 19th century. Such reconstruction may be useful in a number of areas, including obtaining information about physical (three-dimensional) characteristics of objects in the scene, such as determination of the actual three-dimensional shapes and volumes of the objects. Reconstruction has also recently become particularly important in, for example, computer vision and robotics.
  • the geometric relation between three-dimensional objects and the images created by a simple image recorder such as a pin-hole camera (that is, a camera without a lens) is a source of information to facilitate a three-dimensional reconstruction.
  • Current practical commercial systems for object reconstruction generally rely on reconstruction from aerial photographs or from satellite images.
  • one or more cameras are used which record images from two locations whose positions relative to a scene are precisely determinable.
  • one or more cameras mounted on an airborne platform may be used; if one camera is used, information from objects on the ground whose relative positions are known can be used in the reconstruction, whereas if more cameras are used, the geometries of the cameras relative to each other are fixed in a known condition, and that information can be used in the reconstruction.
  • with satellite images, the positions and orientations of the satellites can be determined with great accuracy, thereby providing the geometrical information required for reconstruction with corresponding precision.
  • in either case, reconstruction of the desired objects shown in the images can be performed from two-dimensional photographic or video images taken from such an arrangement.
  • the extrinsic parameters are related to the external geometry or arrangement of the cameras, including the rotation and translation between the coordinate frame of one camera in relation to the coordinate frame of the second camera.
  • the intrinsic parameters associated with each camera are related to the camera's internal geometry in a manner that describes a transformation between a virtual camera coordinate system and the true relationship between the camera's image plane and its center of projection (COP).
  • the intrinsic parameters can be represented by the image's aspect ratio, the skew and the location of the principal point, that is, the location of the intersection of the camera's optical axis and the image plane. (Note that the camera's focal length is related to the identified intrinsic parameters, in particular the aspect ratio, and thus need not be recovered separately.)
  • in the first reconstruction method, the values of the internal parameters are determined by a separate and independent "internal camera calibration" procedure that relies on images of specialized patterns.
  • in the second reconstruction method, more than two views of a scene are taken and processed, and the two sets of parameters are decoupled by assuming that the internal camera parameters are fixed for all views.
  • Processing to determine the values of the parameters proceeds using non-linear methods, such as recursive estimation, non-linear optimization techniques such as Levenberg-Marquardt iterations, and, more recently, projective geometry tools using the concept of "the absolute conic."
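By way of illustration, the following is a minimal sketch of the kind of non-linear refinement referred to above, using SciPy's Levenberg-Marquardt solver. The seven-parameter pinhole model, the synthetic data, and the function names are illustrative assumptions, not the method of any reference discussed herein:

```python
import numpy as np
from scipy.optimize import least_squares

def project(params, X):
    """Project 3D points X (N x 3) with a toy pinhole model:
    params = [f, rx, ry, rz, tx, ty, tz] (illustrative parameterization)."""
    f, rx, ry, rz, tx, ty, tz = params
    # small-angle rotation approximation, for brevity of the sketch
    R = np.array([[1, -rz,  ry],
                  [rz,  1, -rx],
                  [-ry, rx,  1]])
    Xc = X @ R.T + np.array([tx, ty, tz])
    return f * Xc[:, :2] / Xc[:, 2:3]      # perspective division

def residuals(params, X, observed):
    """Reprojection error, flattened to a 1-D residual vector."""
    return (project(params, X) - observed).ravel()

# synthetic data standing in for matched image measurements
X = np.random.rand(20, 3) + np.array([0.0, 0.0, 5.0])
true_params = np.array([1.2, 0.01, -0.02, 0.03, 0.1, -0.05, 0.2])
observed = project(true_params, X)

# as the text notes, such iterations are sensitive to the initial approximation
x0 = np.array([1.0, 0, 0, 0, 0, 0, 0])
fit = least_squares(residuals, x0, args=(X, observed), method='lm')
print(fit.x)
```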
  • One significant problem with the first approach (using a separate internal camera calibration step) is that even small errors in calibration lead to significant errors in reconstruction.
  • the methods for recovering the extrinsic parameters following the internal calibration are known to be extremely sensitive to minor errors in image measurements and require a relatively large field of view in order to behave properly.
  • the processing techniques are iterative, based on an initial approximation, and are quite sensitive to that initial approximation.
  • in the second approach, the assumption that the internal camera parameters are fixed for all views is not always a good one.
  • the arrangement described in the Shashua patent can be used in connection with a rig including, for example, two optical sensors, which can be directed at a scene from two diverse locations to record the set of images required for the reconstruction; alternatively, the images can be recorded using a single optical sensor which records a set of images, one image from each of the two locations.
  • the rig performs a calibration operation to generate a "projective-to-Euclidean" matrix.
  • the projective-to-Euclidean matrix relates coordinates in a projective coordinate system to coordinates in a Euclidean coordinate system to facilitate generation of a Euclidean reconstruction from the projective reconstruction generated by the arrangement described in the Shashua patent.
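The role of the projective-to-Euclidean matrix can be illustrated with a short numpy sketch; the 4-by-4 matrix W below is a placeholder (the identity), not a matrix computed by any calibration procedure described herein:

```python
import numpy as np

def projective_to_euclidean(W, points_h):
    """Map homogeneous 4-vectors of a projective reconstruction through the
    4x4 projective-to-Euclidean matrix W, then dehomogenize."""
    mapped = points_h @ W.T                 # (N, 4) homogeneous points
    return mapped[:, :3] / mapped[:, 3:4]   # Euclidean (X, Y, Z)

# placeholder W (identity: reconstruction already Euclidean) and sample points
W = np.eye(4)
points_h = np.array([[1.0, 2.0, 3.0, 1.0],
                     [0.5, 0.5, 2.0, 0.5]])
print(projective_to_euclidean(W, points_h))
```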
  • Calibration generally requires use of "control points" in a scene that is used for the calibration operation whose Euclidean positions relative to each other are known.
  • Euclidean information in the form of pre-measured control points, or Euclidean constraints such as distances between points in the scene or angles between lines or edges in the scene, needs to be known.
  • Another approach avoids the requirement of having Euclidean information, but does require multiple overlapping images recorded by moving a single optical sensor or rig of optical sensors.
  • if the transformation is rigid, the projective-to-Euclidean matrix is preserved across the transformation, and the same projective-to-Euclidean matrix can be used during reconstruction after the transformation as was generated during the calibration operation before the transformation.
  • if the transformation is non-rigid, for example, if there is a change in focal length in connection with an optical sensor, which can facilitate focusing onto the scene from the new position, or if the optical sensors have been tilted or panned relative to one another to direct them at a desired portion of the scene, or if a change has occurred in connection with the aspect ratio of an optical sensor, the projective-to-Euclidean matrix after the transformation will be different from that used before the transformation, which would necessitate performing another calibration operation.
  • a problem arises, however, in that control points may not be available in the scene as recorded following the non-rigid transformation.
  • the invention provides a new and improved system and method for facilitating reconstruction of surface elements of three-dimensional objects in a scene from a set of two-dimensional images of the scene following a non-rigid transformation, without requiring control points whose Euclidean positions are known following the non-rigid transformation, and without requiring that multiple images be recorded by respective optical sensors before and after the non-rigid transformation.
  • the invention provides an arrangement for use in connection with a system that generates a Euclidean representation of surface elements of objects of a scene, from a projective reconstruction, following a non-rigid transformation in connection with optical sensors which record the images, using a projective-to-Euclidean matrix generated before the non-rigid transformation.
  • the arrangement determines changes which occur when a non-rigid transformation occurs in connection with the optical sensors, when the centers of projection of the optical sensors are fixed in relation to each other, in relation to changes in the positions of the epipoles on the respective images as recorded after the non-rigid transformation.
  • the arrangement first determines respective relationships between the coordinates of the epipoles before and after the non-rigid transformation and, using those relationships and the set of images recorded after the non-rigid transformation, a set of processed images essentially undoing the non-rigid aspects of the non-rigid transformation is generated.
  • the projective representation is generated, and, using the projective representation and the projective-to-Euclidean matrix generated prior to the non-rigid transformation, the Euclidean representation is generated.
  • the non-rigid transformation can include, for example, changes in focal length of one or more of the optical sensors which are used to record the images to facilitate focusing on object(s) in the scene, tilting and/or panning of one or more of the optical sensors, and changing the aspect ratio of the image(s) as recorded by respective optical sensors.
  • with three images in each set, a non-rigid transformation may include, for example, changes in both the focal length(s) of one or more of the optical sensors, as well as a pan, or lateral movement, of one or more of the optical sensors, and, with four images in each set, a non-rigid transformation may also include tilting as well as panning.
  • FIG. 1 schematically depicts a system including an omni-configurational rig and a Euclidean reconstruction generator for reconstructing three-dimensional surface features of objects in a scene from a plurality of two-dimensional images of the scene, constructed in accordance with the invention;
  • FIG. 2 schematically depicts a sensor useful in the omni-configurational rig depicted in FIG. 1;
  • FIG. 3 is a flowchart depicting operations performed by the Euclidean reconstruction generator depicted in FIG. 1, in reconstructing three-dimensional surface features of objects in the scene from a plurality of two-dimensional images of the scene; and
  • FIG. 4 schematically depicts a second embodiment of a rig useful in the system depicted in FIG. 1, described herein as a "catadioptric" rig.
  • FIG. 1 schematically depicts a system 10 including an omni-configurational rig and a Euclidean reconstruction generator 16 for reconstructing three-dimensional surface features of objects in a scene from a plurality of two-dimensional images of the scene, constructed in accordance with the invention.
  • the system 10 generates reconstructions of surface features of objects in a scene following a non-rigid transformation in connection with the rig, if the parameters of system 10 were properly determined prior to the transformation, even if no elements of the scene following the transformation were in the scene prior to the transformation.
  • the rig 11 includes one or more optical sensors 12(1) through 12(S) (generally identified by reference numeral 12(s); in the following it will be assumed that the rig 11 includes a plurality of optical sensors), which are generally directed at a three-dimensional scene 13 for recording images thereof.
  • the optical sensors 12(s) are mounted in a support structure generally identified by reference numeral 14. Structural details of the optical sensors 12(s) used in the rig 11 will be described below in connection with FIG. 2.
  • each optical sensor 12(s) includes a lens and an image recording device such as a charge-coupled device (CCD), the lens projecting an image of the scene 13 onto a two-dimensional image plane defined by the image recording device.
  • optical sensors 12(s) thus can contemporaneously record "S" two-dimensional images of the three-dimensional scene 13.
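A sketch of the image-formation model that such sensors embody follows; each sensor is modeled as a three-by-four camera matrix projecting homogeneous scene points onto its image plane, with illustrative intrinsic values:

```python
import numpy as np

def pinhole_project(G, P_world):
    """Project homogeneous 3D points (N x 4) through a 3x4 camera matrix G."""
    x = P_world @ G.T                # (N, 3) homogeneous image points
    return x[:, :2] / x[:, 2:3]      # pixel coordinates

# two illustrative sensors: identical intrinsics K, translated 0.2 units apart
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
G1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
G2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

P = np.array([[0.1, 0.0, 5.0, 1.0]])   # one scene point, homogeneous
print(pinhole_project(G1, P), pinhole_project(G2, P))
```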
  • Each optical sensor 12(s) generates a respective SENSOR_s_OUTPUT sensor "s" output signal (index "s" ranging from one to S, inclusive) that is representative of the image recorded by its image recording device.
  • the system 10 further includes a rig motion control circuit 15.
  • the rig motion control circuit 15 generates signals generally identified as "RIG_MOT_CTRL" rig motion control signals, which control motion by the support 14 to facilitate recording of images of the scene 13 from a plurality of orientations, as well as to facilitate recording of images of various scenes (additional scenes not shown).
  • the rig motion control circuit 15 also generates SENSOR_s_MOT_CTRL sensor "s" motion control signals (index "s" ranging from one to S, inclusive) for controlling predetermined optical characteristics of the respective optical sensor 12(s).
  • the SENSOR_s_MOT_CTRL signal controls the positioning of the image recording device of the respective image sensor 12(s) relative to the lens to selectively facilitate focusing, and tilting and panning, as will be described below in connection with FIG. 2.
  • each optical sensor 12(s) generates a respective SENSOR_s_OUTPUT sensor "s" output signal that is representative of the image recorded by its image recording device.
  • the Euclidean reconstruction generator 16 receives the SENSOR_s_OUTPUT signals from all of the optical sensors and generates information defining a three-dimensional Euclidean reconstruction of the scene 13, as well as other scenes to which the rig motion control 15 may direct the rig 11, from sets of images as recorded by the optical sensors 12(s).
  • a set of images may include two or more images of the scene.
  • the Euclidean reconstruction generator 16 can, from a set of images recorded after a non-rigid transformation in connection with the rig 11, operate in accordance with operations described below in connection with FIG. 3 to generate Euclidean representations of surface features of object(s) in the scene at which the support 14 directs the optical sensors 12(s), without requiring additional calibration operations.
  • FIG. 2 schematically depicts, in section, an optical sensor 12(s) constructed in accordance with the invention.
  • the optical sensor includes a housing 20 having a circular forward opening in which a lens 21 is mounted. Behind the lens 21 is positioned an image recording device 22, such as a CCD device, which generates the SENSOR_s_OUTPUT signal for the optical sensor 12(s).
  • the image recording device 22 defines a plane which forms an image plane for the optical sensor 12(s), on which the lens 21 projects an image.
  • the position and orientation of the image recording device 22 relative to the lens 21 are controlled by a motor 23, which, in turn, is controlled by the SENSOR_s_MOT_CTRL sensor "s" motor control signal generated by the rig motion control circuit 15.
  • the lens 21 will be mounted in the housing 20 relatively rigidly, that is, in a manner such that the position and orientation of the lens 21 are fixed relative to the support 14.
  • the motor 23 can enable the image recording device 22 to move relative to the lens 21.
  • the respective motors 23 can enable the image recording devices 22 to, for example, move backwards and forwards in the sensors 12(s) relative to the lens, that is, closer to or farther away from the lens 21, during the transformation, which can facilitate focusing of the image of the scene cast by the lens on the respective sensor's image recording device 22.
  • the respective motors 23 can enable the respective image recording devices 22 to move relative to the lens 21 as described in connection with the three-image-sensor rig described above, and, in addition, to, for example, change their respective angular orientations (that is, tilting and panning) relative to the lens 21, during the transformation.
  • the aspect ratios of the respective image recording devices can be changed during the transformation.
  • These modifications, that is, focusing, tilting and panning, and changes in the aspect ratio, which are referred to above as "non-rigid parameters," can be performed individually for each respective image sensor 12(s). If any such modifications are made when, for example, the rig 11 is moved from one location to another, the modification is referred to as a "non-rigid transformation."
  • after it has been calibrated prior to a non-rigid transformation, the Euclidean reconstruction generator 16 can, after a non-rigid transformation, generate three-dimensional Euclidean representations of surface features of object(s) in the scene 13 as recorded by optical sensors 12(s) without requiring additional calibration operations. Operations performed by the Euclidean reconstruction generator in that operation will be described in connection with the flowchart depicted in FIG. 3. By way of background, the Euclidean reconstruction generator 16 processes sets of images recorded by the optical sensors 12(s) before and after a non-rigid transformation.
  • during a calibration operation performed before a non-rigid transformation, the Euclidean reconstruction generator 16 generates a four-by-four "projective-to-Euclidean" matrix W, which, as is known in the art, is determined using the set of images recorded before the non-rigid transformation and other Euclidean information.
  • if the Euclidean reconstruction generator is to generate a Euclidean representation of the scene before the non-rigid transformation, it can construct a projective representation using the images in the set recorded before the non-rigid transformation, and, using the projective representation and the projective-to-Euclidean matrix W, generate the Euclidean representation.
  • if the transformation is rigid, the Euclidean reconstruction generator 16 can generate a Euclidean representation of the scene by constructing a projective representation using the images recorded after the rigid transformation, and thereafter applying the same projective-to-Euclidean matrix W to generate the Euclidean representation.
  • if the transformation is non-rigid, however, the Euclidean reconstruction generator 16 cannot use the projective-to-Euclidean matrix W generated prior to the non-rigid transformation to generate a Euclidean representation from a projective representation constructed using the images in the set recorded after the non-rigid transformation.
  • instead, the Euclidean reconstruction generator 16 can use the projective-to-Euclidean matrix W generated prior to the non-rigid transformation to generate a Euclidean representation from a projective representation if the projective representation is constructed from a set of processed images, where the processed images correspond to the images in the set recorded after the non-rigid transformation, multiplied by respective inverse collineation matrices A_i^{-1}.
  • Equation (2), the epipolar constraint p_j^T F_ij p_i = 0, relates the coordinates between images "i" and "j" for all points P (not separately shown) in the scene 13 which are projected onto the "i-th" image as points p_i and onto the "j-th" image as points p_j.
  • a fundamental matrix F_ij exists relating the coordinates of the points projected from the scene 13 onto each respective pair of images "i" and "j."
  • the values of the matrix elements for the fundamental matrices as among the various pairs of images may be different. Since the centers of projection of the various optical sensors 12(s) will be in the same relative positions before and after the transformation of the rig 11, the fundamental matrices for the pairs of images before and after the transformation will be related by F'_ij ≅ A_j^{-T} F_ij A_i^{-1}, where A_i and A_j are the per-image collineations noted above and "≅" denotes equality up to scale.
  • each image will have a center of projection, which, in turn, will be projected as an epipole onto the image planes of the other images in the respective set.
  • the center of projection of the "i-th" image is, in turn, projected onto the image plane of the "j-th" image as epipole e_ij before the non-rigid transformation and epipole e'_ij after the non-rigid transformation, with each epipole e_ij and e'_ij having coordinates in the respective image plane.
  • the points of epipoles e_ij and e'_ij may not actually be in the respective images, but they will be in the image planes therefor.
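One standard way to obtain epipole coordinates of the kind referred to above is as the null vectors of a fundamental matrix; the following sketch, with a placeholder rank-two F, is illustrative and assumes the epipoles are not at infinity:

```python
import numpy as np

def epipoles(F):
    """Return the two epipoles of a fundamental matrix as homogeneous
    3-vectors: the right null vector (F e = 0) gives the epipole in the
    first image, the left null vector (F^T e' = 0) the epipole in the
    second. Normalization assumes neither epipole is at infinity."""
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]                       # right null vector of F
    _, _, Vt = np.linalg.svd(F.T)
    e_prime = Vt[-1]                 # left null vector of F
    return e / e[2], e_prime / e_prime[2]

# placeholder rank-2 matrix standing in for an estimated F
F = np.array([[0.0, -1.0,  0.5],
              [1.0,  0.0, -0.3],
              [-0.5, 0.3,  0.0]])
print(epipoles(F))
```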
  • the collineation matrix A_j0 for each image "j0" is determined by the four pairs of matching epipoles e_ij0 and e'_ij0 associated with the other images in the set.
  • equations (3) through (5) represent the fact that the changes in the image planes which occur when the rig 11 undergoes a non-rigid transformation, while maintaining the positions of the centers of projection of the sensors 12(s) fixed relative to each other, can be determined in relation to the changes in the positions of the epipoles on the respective image planes which occur when the rig 11 undergoes the non-rigid transformation.
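Equations (3) through (5) are not reproduced in this text. As one illustrative possibility, a collineation can be estimated from four matching point pairs, such as four epipole pairs, by the standard direct linear transform, as sketched below with synthetic data:

```python
import numpy as np

def collineation_from_pairs(src, dst):
    """Estimate a 3x3 collineation A, with dst ~ A @ src (up to scale), from
    four or more matching homogeneous points via the direct linear transform.
    Each pair contributes two rows of the constraint dst x (A src) = 0."""
    rows = []
    for (x, y, w), (u, v, t) in zip(src, dst):
        rows.append([0, 0, 0, -t*x, -t*y, -t*w, v*x, v*y, v*w])
        rows.append([t*x, t*y, t*w, 0, 0, 0, -u*x, -u*y, -u*w])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 3)      # null vector of the system, up to scale

# illustrative data: epipoles before (src) and after (dst) a transformation
src = np.array([[1, 0, 1], [0, 1, 1], [-1, 0, 1], [0, -1, 1]], dtype=float)
A_true = np.array([[1.1, 0.0, 2.0], [0.0, 0.9, -1.0], [0.0, 0.0, 1.0]])
dst = src @ A_true.T
A = collineation_from_pairs(src, dst)
print(A / A[2, 2])                   # matches A_true up to scale
```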
  • the system 10 will perform a calibration operation to generate the projective-to-Euclidean matrix W in connection with a set of images recorded prior to a non-rigid transformation (step 100).
  • Operations performed by the system in step 100 in determining the projective-to-Euclidean matrix W are known to those skilled in the art and will not be described further herein.
  • the rig motion control 15 enables the rig 11 to perform a non-rigid transformation, and in that process it can enable the optical sensors 12(s) to train on a scene and facilitate recording of images thereof (step 101).
  • the scene on which the optical sensors 12(s) are trained in step 101 can be, for example, the same scene as was used in step 100 but from a different position or orientation, overlap with the scene as was used in step 100, or be a completely different scene, that is, a scene in which none of the objects in the scene were in the scene used in step 100.
  • the Euclidean reconstruction generator 16 performs a number of steps to determine the collineations A_i for the respective images recorded in step 101 and to generate the images p_i^u for the set after the non-rigid transformation, for which the non-rigid effects of the transformation have been undone.
  • the Euclidean reconstruction generator 16 first determines the coordinates, in each of the image planes, of the epipole e'_ij associated with each of the other images in the set (step 102).
  • thereafter, for each image "j0," using the coordinates of the epipoles e_ij0 and e'_ij0 in the image plane associated therewith, the Euclidean reconstruction generator 16 uses equation (5) to generate the collineations A_j0 for the respective images p'_i recorded following the non-rigid transformation (step 103).
  • thereafter, using the collineations A_j0 generated in step 103, the Euclidean reconstruction generator 16 generates the set of images p_i^u in accordance with equation (6), that is, p_i^u = A_i^{-1} p'_i (step 104).
  • the Euclidean reconstruction generator 16 then uses those images to construct a projective representation of the scene after the non-rigid transformation, and, using that projective representation and the projective-to-Euclidean matrix W generated in step 100, generates the Euclidean representation (step 105).
  • Operations performed by the Euclidean representation generator 16 in step 105 to construct a projective representation and a Euclidean representation using the projective representation and projective-to-Euclidean matrix W are known to those skilled in the art and will not be described further herein.
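Pulling the steps of FIG. 3 together, the following high-level sketch reuses the illustrative collineation_from_pairs helper above and takes the projective reconstruction step as a user-supplied callable, since the text defers that step to the Shashua patent:

```python
import numpy as np

def undo_non_rigid(images_pts, collineations):
    """Step 104: map each image's homogeneous points p'_i through the inverse
    collineation A_i^{-1} to obtain the processed images p_i^u."""
    return [pts @ np.linalg.inv(A).T for pts, A in zip(images_pts, collineations)]

def euclidean_after_transformation(images_pts, epipole_pairs, W,
                                   projective_reconstruction):
    """Steps 102-105 of FIG. 3, sketched under the assumptions in the text.
    epipole_pairs[i] is (pre-transformation epipoles, post-transformation
    epipoles) for image i, so each estimated A_i maps pre to post."""
    # step 103: one collineation per image from its four epipole pairs
    A = [collineation_from_pairs(src, dst) for src, dst in epipole_pairs]
    # step 104: undo the non-rigid aspects of the transformation
    undone = undo_non_rigid(images_pts, A)
    # step 105: projective reconstruction (per the Shashua patent), then the
    # pre-transformation projective-to-Euclidean matrix W maps it to Euclidean
    P_h = projective_reconstruction(undone)   # (N, 4) homogeneous points
    mapped = P_h @ W.T
    return mapped[:, :3] / mapped[:, 3:4]
```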
  • the Euclidean reconstruction generator 16 will thus be able to determine the collineations A_i, thereby facilitating the generation of a Euclidean representation following a non-rigid transformation.
  • Illustrative types of changes that may happen during such a non-rigid transformation include, for example, changes in focal length of one or more of the optical sensors 12(s) to facilitate focusing on object(s) in the scene, tilting or panning of one or more of the optical sensors 12(s), and changing the aspect ratio of the image(s) as recorded by respective optical sensors 12(s).
  • the Euclidean reconstruction generator 16 can perform the operations described above even if the rig 11 provides fewer than five images in each set.
  • with three images in each set, the Euclidean reconstruction generator 16 can determine the collineations and generate a Euclidean representation after a non-rigid transformation in which, for example, changes can occur in both the focal length(s) of one or more of the optical sensors 12(s) and one or more of the optical sensors 12(s) undergoes a pan, or lateral movement.
  • with four images in each set, the non-rigid transformation can also include tilting as well as panning. In either case, by allowing for changes in focal length, the system can allow for re-focusing after the transformation, which is particularly helpful if the scene 13 includes surfaces which are slanted relative to the rig 11.
  • a rig which maintains the image planes of the optical sensors such that the image plane homographies coincide can allow for all of the above-described non-rigid transformations (that is, changes in focal length, tilt and pan, and changes in aspect ratio) with three optical sensors, and for a more restricted set of transformations only two optical sensors are required.
  • FIG. 4 schematically depicts a portion of a rig, referred to herein as a "catadioptric rig" 150 which constrains the image planes of two images such that their image plane homographies coincide.
  • the catadioptric rig 150 includes a mirror arrangement 151 and a unitary optical sensor 152.
  • the mirror arrangement 151 includes a plurality of mirrors represented by thick lines identified by reference numerals 151(1) through 151(4).
  • Mirrors 151(1) and 151(2) are oriented to reflect light scattered from a point 154 in a scene onto respective mirrors 151(3) and 151(4), which, in turn, are oriented to reflect the light onto a single image recording device 153.
  • Respective beams of light as scattered from point 154 are represented by the dashed lines 155(1) and 155(2).
  • the image recording device 153 records images of the point 154 in separate image recording areas represented by dashed boxes 153(1) and 153(2).
  • the image recording device 153 may be similar to the image recording device 22 used in each optical sensor 12(s) described above in connection with FIG. 2, and may, for example, comprise a CCD (charge-coupled device).
  • the optical sensor 152 may include a motor similar to motor 23 for moving the image recording device 153.
  • the optical sensor 152 used in the catadioptric rig 150 provides a signal representing the images to a Euclidean reconstruction generator (not shown) which can operate in a manner similar to that described above in connection with Euclidean reconstruction generator 16.
  • although the catadioptric rig 150 has been described as having a mirror arrangement 151 that enables two images 153(1) and 153(2) to be recorded by the image recording device 153, it will be appreciated that a similar mirror arrangement may be provided to enable three or more images to be recorded by the image recording device.
  • the catadioptric rig, instead of having a single, relatively large image recording device on which the images are recorded in respective image recording areas, may have a plurality of image recording devices, one at each of the image recording areas, with the image recording devices constrained to lie in a single image plane.
  • as described above, the Euclidean representation generator 16 generates a Euclidean representation of a scene following a non-rigid transformation by generating, from a set of images p'_i recorded after the non-rigid transformation, a set of images p_i^u for which the non-rigid aspects of the transformation have effectively been undone.
  • alternatively, the Euclidean representation generator 16 can generate a Euclidean representation from a projective representation generated directly from the set of images p'_i recorded after the non-rigid transformation, without the necessity of generating a set of images p_i^u for which the non-rigid aspects of the transformation have been undone, by determining and making use of a new projective-to-Euclidean matrix. This will be clear from the following.
  • prior to a non-rigid transformation, the Euclidean representation generator 16 can, during a calibration operation, generate a respective three-by-four camera matrix G_i, and an associated center of projection c_i, for each "i-th" image.
  • the camera matrices G'_i and centers of projection c'_i for the image set recorded after the non-rigid transformation can also be determined in a manner described in the Shashua patent. Since the sets of centers of projection c_i and c'_i are in the same relative positions, the set of centers of projection prior to the non-rigid transformation is related to the set after the non-rigid transformation by the proportionality W' c'_i ≅ W c_i (equation (9)), where W is the projective-to-Euclidean matrix generated before the transformation and W' is the new projective-to-Euclidean matrix to be determined.
  • five points are needed to determine the new projective-to-Euclidean matrix using equation (9), since each point contributes three equations and equation (9) determines the projective-to-Euclidean matrix only up to a scale. After the projective-to-Euclidean matrix is determined, it can be used to construct the Euclidean representation of the scene for the image set recorded after the non-rigid transformation.
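As reconstructed above, equation (9) is a proportionality between homogeneous four-vectors, so each of the five center correspondences contributes three independent linear equations on the entries of the unknown matrix. The following sketch solves such a system up to scale; the formulation and names are illustrative:

```python
import numpy as np

def solve_projective_to_euclidean(c_new, targets):
    """Solve M @ c_new[i] ~ targets[i] (homogeneous 4-vectors, proportional
    only) for a 4x4 matrix M up to scale. Five pairs in general position give
    15 independent equations for M's 15 degrees of freedom."""
    rows = []
    for c, y in zip(c_new, targets):
        # y x (M c) = 0, written out for every coordinate pair (k, l):
        # y_l * (M c)_k - y_k * (M c)_l = 0
        for k in range(4):
            for l in range(k + 1, 4):
                r = np.zeros(16)
                r[4*k:4*k+4] = y[l] * c      #  y_l * (M c)_k
                r[4*l:4*l+4] = -y[k] * c     # -y_k * (M c)_l
                rows.append(r)
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(4, 4)              # null vector, up to scale

# illustrative use: targets[i] = W @ c_old[i] are the Euclidean-frame centers
# known from the pre-transformation calibration; c_new are the centers of the
# projective reconstruction after the transformation.
```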
  • the invention provides a number of advantages.
  • the invention provides an arrangement which facilitates Euclidean reconstruction of surface features of objects in a scene after a non-rigid transformation of the rig 11 recording the scene, without additional Euclidean information after the non-rigid transformation and without requiring multiple images be recorded by respective optical sensors before and after the non-rigid transformation.
  • the arrangement can be calibrated once in, for example, a laboratory in which Euclidean information can be readily provided, and thereafter used at other locations without requiring Euclidean information at those locations.
  • although the invention has been described in connection with optical sensors 12(s) which record images using CCD (charge-coupled device) recorders, it will be appreciated that the optical sensors may comprise any convenient mechanism for recording images.
  • although the invention has been described in connection with a system 10 including a rig 11 on which one or more image sensors 12(s) are mounted, it will be appreciated that the images may instead be recorded by one or more image sensors held by, for example, respective operators, who direct the image sensor(s) at the scene to record images thereof.
  • a system in accordance with the invention can be constructed in whole or in part from special purpose hardware or a general purpose computer system, or any combination thereof, any portion of which may be controlled by a suitable program.
  • Any program may in whole or in part comprise part of or be stored on the system in a conventional manner, or it may in whole or in part be provided to the system over a network or other mechanism for transferring information in a conventional manner.
  • the system may be operated and/or otherwise controlled by means of information provided by an operator using operator input elements (not shown) which may be connected directly to the system or which may transfer the information to the system over a network or other mechanism for transferring information in a conventional manner.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

A Euclidean representation of surface elements of objects in a scene is generated from a projective representation, following a non-rigid transformation in connection with optical sensors that record the images, using a projective-to-Euclidean matrix generated before the transformation.
EP99950370A 1998-05-14 1999-05-14 Reconstruction euclidienne de scene tridimensionnelle, a partir d'images bidimensionnelles, a la suite d'une transformation non rigide Withdrawn EP1194881A4 (fr)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US8550198P 1998-05-14 1998-05-14
US85501P 1998-05-14
US9635998P 1998-08-13 1998-08-13
US96359P 1998-08-13
PCT/IB1999/001226 WO1999059100A1 (fr) 1998-05-14 1999-05-14 Reconstruction euclidienne de scene tridimensionnelle, a partir d'images bidimensionnelles, a la suite d'une transformation non rigide

Publications (2)

Publication Number Publication Date
EP1194881A1 true EP1194881A1 (fr) 2002-04-10
EP1194881A4 EP1194881A4 (fr) 2006-03-22

Family

ID=26772795

Family Applications (1)

Application Number Title Priority Date Filing Date
EP99950370A Withdrawn EP1194881A4 (fr) 1998-05-14 1999-05-14 Reconstruction euclidienne de scene tridimensionnelle, a partir d'images bidimensionnelles, a la suite d'une transformation non rigide

Country Status (3)

Country Link
EP (1) EP1194881A4 (fr)
CA (1) CA2332010A1 (fr)
WO (1) WO1999059100A1 (fr)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5877897A (en) 1993-02-26 1999-03-02 Donnelly Corporation Automatic rearview mirror, vehicle lighting control and vehicle interior monitoring system using a photosensor array
US6822563B2 (en) 1997-09-22 2004-11-23 Donnelly Corporation Vehicle imaging system with accessory control
US7655894B2 (en) 1996-03-25 2010-02-02 Donnelly Corporation Vehicular image sensing system
AU2003225228A1 (en) 2002-05-03 2003-11-17 Donnelly Corporation Object detection system for vehicle
US7526103B2 (en) 2004-04-15 2009-04-28 Donnelly Corporation Imaging system for vehicle
WO2008024639A2 (fr) 2006-08-11 2008-02-28 Donnelly Corporation Système de commande automatique de phare de véhicule
IT1398637B1 (it) 2010-02-25 2013-03-08 Rotondo Metodo e sistema per la mobilita' delle persone in un contesto urbano ed extra urbano.
GB201008281D0 (en) * 2010-05-19 2010-06-30 Nikonovas Arkadijus Indirect analysis and manipulation of objects
KR102348369B1 (ko) 2013-03-13 2022-01-10 디퍼이 신테스 프로덕츠, 인코포레이티드 외부 골 고정 장치
US10835318B2 (en) 2016-08-25 2020-11-17 DePuy Synthes Products, Inc. Orthopedic fixation control and manipulation
US11439436B2 (en) 2019-03-18 2022-09-13 Synthes Gmbh Orthopedic fixation strut swapping
US11304757B2 (en) 2019-03-28 2022-04-19 Synthes Gmbh Orthopedic fixation control and visualization
US11334997B2 (en) 2020-04-03 2022-05-17 Synthes Gmbh Hinge detection for orthopedic fixation

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4003642A (en) * 1975-04-22 1977-01-18 Bio-Systems Research Inc. Optically integrating oculometer
US5598515A (en) * 1994-01-10 1997-01-28 Gen Tech Corp. System and method for reconstructing surface elements of solid objects in a three-dimensional scene from a plurality of two dimensional images of the scene

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4003642A (en) * 1975-04-22 1977-01-18 Bio-Systems Research Inc. Optically integrating oculometer
US5598515A (en) * 1994-01-10 1997-01-28 Gen Tech Corp. System and method for reconstructing surface elements of solid objects in a three-dimensional scene from a plurality of two dimensional images of the scene

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
NAYAR S K: "SPHEREO: DETERMINING DEPTH USING TWO SPECULAR SPHERES AND A SINGLE CAMERA" PROCEEDINGS OF THE SPIE, SPIE, BELLINGHAM, VA, US, vol. 1005, 8 November 1988 (1988-11-08), pages 245-254, XP009017945 ISSN: 0277-786X *
POLLEFEYS M., VAN GOOL L., MOONS T.: "Euclidean 3D reconstruction from stereo sequences with variable focal lengths" RECENT DEVELOPMENTS IN COMPUTER VISION, SECOND ASIAN CONFERENCE ON COMPUTER VISION, ACCV '95, 5 December 1995 (1995-12-05), - 8 December 1995 (1995-12-08) pages 405-413, XP002364251 *
See also references of WO9959100A1 *
SHASHUA A: "Omni-Rig sensors: what can be done with a non-rigid vision platform?" APPLICATIONS OF COMPUTER VISION, 1998. WACV '98. PROCEEDINGS., FOURTH IEEE WORKSHOP ON PRINCETON, NJ, USA 19-21 OCT. 1998, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 19 October 1998 (1998-10-19), pages 174-179, XP010315584 ISBN: 0-8186-8606-5 *
ZHANG Z ET AL: "SELF-MAINTAINING CAMERA CALIBRATION OVER TIME" PROCEEDINGS OF THE 1997 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION. SAN JUAN, PUERTO RICO, JUNE 17 - 19, 1997, PROCEEDINGS OF THE IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, LOS ALAMIT, vol. CONF. 16, 17 June 1997 (1997-06-17), pages 231-236, XP000776513 ISBN: 0-7803-4236-4 *

Also Published As

Publication number Publication date
CA2332010A1 (fr) 1999-11-18
EP1194881A4 (fr) 2006-03-22
WO1999059100A1 (fr) 1999-11-18

Similar Documents

Publication Publication Date Title
US6094198A (en) System and method for reconstructing surface elements of solid objects in a three-dimensional scene from a plurality of two dimensional images of the scene
JP4242495B2 (ja) 画像記録装置並びにその位置及び向きの決定方法
US6023588A (en) Method and apparatus for capturing panoramic images with range data
US6507665B1 (en) Method for creating environment map containing information extracted from stereo image pairs
US5963664A (en) Method and system for image combination using a parallax-based technique
JP2874710B2 (ja) 三次元位置計測装置
US5259037A (en) Automated video imagery database generation using photogrammetry
US6870563B1 (en) Self-calibration for a catadioptric camera
JP2001346226A (ja) 画像処理装置、立体写真プリントシステム、画像処理方法、立体写真プリント方法、及び処理プログラムを記録した媒体
JP4825971B2 (ja) 距離算出装置、距離算出方法、構造解析装置及び構造解析方法。
JP2004515832A (ja) 画像系列の時空的正規化マッチング装置及び方法
EP1194881A1 (fr) Reconstruction euclidienne de scene tridimensionnelle, a partir d'images bidimensionnelles, a la suite d'une transformation non rigide
Cho et al. Resampling digital imagery to epipolar geometry
JP2000283720A (ja) 3次元データ入力方法及び装置
Nyland et al. Capturing, processing, and rendering real-world scenes
JPH10320558A (ja) キャリブレーション方法並びに対応点探索方法及び装置並びに焦点距離検出方法及び装置並びに3次元位置情報検出方法及び装置並びに記録媒体
JP3221384B2 (ja) 三次元座標計測装置
Ho et al. Using geometric constraints for fisheye camera calibration
JP3317093B2 (ja) 3次元形状データ処理装置
JP2688925B2 (ja) 立体画像表示用装置
JP2005063129A (ja) 時系列画像からのテクスチャ画像獲得方法,テクスチャ画像獲得装置,テクスチャ画像獲得プログラムおよびそのプログラムを記録した記録媒体
Chen et al. Image registration with uncalibrated cameras in hybrid vision systems
Gong et al. A robust image mosaicing technique capable of creating integrated panoramas
CN116222786B (zh) 相机阵列计算成像系统及方法
Pedersini et al. A multi-view trinocular system for automatic 3D object modeling and rendering

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20010308

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE FR GB

A4 Supplementary search report drawn up and despatched

Effective date: 20060208

17Q First examination report despatched

Effective date: 20060628

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20070110