WO2021154269A1 - Camera pose determinations with patterns - Google Patents

Camera pose determinations with patterns

Info

Publication number
WO2021154269A1
WO2021154269A1 PCT/US2020/015960
Authority
WO
WIPO (PCT)
Prior art keywords
camera
locally non-repeating pattern
image data
image
Prior art date
Application number
PCT/US2020/015960
Other languages
English (en)
Inventor
Vijaykumar Nayak
Shaymus Jamil ALWAN
Brian R JUNG
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P. filed Critical Hewlett-Packard Development Company, L.P.
Priority to PCT/US2020/015960 priority Critical patent/WO2021154269A1/fr
Publication of WO2021154269A1 publication Critical patent/WO2021154269A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Definitions

  • Cameras may be used in three-dimensional (3D) scanning systems, virtual or augmented reality (VR or AR) systems, robotics systems, assisted or autonomous driving systems, and the like.
  • the pose of a camera, that is, its actual position and orientation within a frame of reference, may be used to reduce error in processing information from images captured by the camera.
  • an accurate camera pose may allow for increased accuracy in positioning an overlay graphic.
  • accuracy in object scanning or tracking may be increased with a more accurate camera pose.
  • FIG. 1 is a schematic diagram of an example system to determine relative camera pose using a locally non-repeating pattern.
  • FIG. 2 is a flowchart of an example method to determine relative camera pose using a locally non-repeating pattern.
  • FIG. 3 is a flowchart of an example method to determine relative camera pose using a locally non-repeating pattern and depth information.
  • FIG. 4 is a schematic diagram of an example system of cameras with relative poses and a locally non-repeating pattern.
  • FIG. 5 is a schematic diagram of an example system to determine relative camera pose using a locally non-repeating pattern that aids relative camera pose determination and user communication.
  • FIG. 6A is a diagram of an example locally non-repeating pattern of lines.
  • FIG. 6B is a diagram of an example locally non-repeating pattern of circles.
  • FIG. 6C is a diagram of an example locally non-repeating pattern of curved areas.
  • obtaining a relative pose between cameras may be useful to reconstruct or stitch a 3D scene accurately.
  • Calibration targets are often used to determine camera pose.
  • Relative camera pose, which may also be termed relative camera alignment, may be described by six degrees of freedom, such as the camera’s position (e.g., Tx, Ty, Tz) and the camera’s orientation (e.g., Rx, Ry, Rz) relative to another camera’s position and orientation.
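  • As an illustration of the six-degree-of-freedom representation above, a relative pose could be packed into a 4×4 homogeneous transform as in the following sketch. This is not taken from the application; the use of SciPy, the XYZ Euler convention, and degree units are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation  # assumption: SciPy's rotation utilities

def pose_from_six_dof(tx, ty, tz, rx, ry, rz):
    """Pack a six-degree-of-freedom relative pose (Tx, Ty, Tz, Rx, Ry, Rz) into a
    4x4 homogeneous transform. The Euler convention is an assumption, not something
    the application prescribes."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", [rx, ry, rz], degrees=True).as_matrix()
    T[:3, 3] = [tx, ty, tz]
    return T
```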
  • One type of calibration target carries a set of markers (fiducials) in a predetermined arrangement.
  • a marker arrangement may be used to derive geometries (e.g., points, lines, conics, etc.) in space.
  • a perspective transformation may be applied to a geometric primitive to arrive at a pose for a camera. This approach often relies on the marker arrangement being a ground truth that is built into the camera system. Further, the markers and arrangement thereof are often specifically designed to encourage detection of specific feature points. As such, the visual appearance of the object carrying the markers may suffer.
  • Another type of calibration target is a non-symmetric 3D target that has sufficient variation in the surface normals for unique registration. These targets tend to be non-planar and/or non-parametric, which imposes a constraint on target geometry. Further, the system then has a separate and often loose component that is required from time to time for calibration. This may reduce storage economy, and calibration may not be possible if the object is lost.
  • a locally non-repeating pattern, such as a texture, may be used to determine camera-to-camera pose.
  • the locally non-repeating pattern may be carried by a target for use in relative camera pose determination.
  • Specific markers, such as circular markers or ArUco codes, may be omitted.
  • a camera system does not need to have advanced knowledge of the locally non-repeating pattern.
  • the locally non-repeating pattern may have fewer constraints than markers and arrangements thereof, which may allow for greater flexibility in implementing a calibration target.
  • An object carrying a locally non-repeating pattern may have a greater aesthetic appeal than an object that carries markers.
  • a locally non-repeating pattern may be incorporated into the visual aesthetic of a product, so that the locally non-repeating pattern serves a dual purpose of aesthetic appeal and camera pose determination.
  • a locally non-repeating pattern may provide user information, such as direction or instruction to a user of the camera system. This may enable more design freedom for products or objects used in 3D camera systems.
  • FIG. 1 shows an example system 100.
  • the system 100 captures images and depth information that may be used to generate 3D data of a carrier object 102.
  • the system 100 may be a 3D scanner.
  • the carrier object 102 may be a platform to carry a target of interest, a target of interest, a calibration target, or another object that may be removed from or remain in the scene desired to be captured by the system 100.
  • a target object whose 3D data is to be captured may be placed above or set upon the carrier object 102.
  • the carrier object 102 is the target object whose 3D data is to be captured.
  • the system 100 includes a first camera 104, a second camera 106, a controller 108 connected to the cameras 104, 106, and instructions 110 executable by the controller 108.
  • the cameras 104, 106 may be aimed towards the carrier object 102, so that the carrier object 102 is partially or fully located within the fields of view of the cameras 104, 106.
  • the carrier object 102 may be planar or have another shape, such as prismatic, irregular, ovoid, spherical, etc.
  • the system 100 may be assembled and disassembled. When disassembled, its components may be stored and/or transported together.
  • the cameras 104, 106 may capture visible light, infrared (IR) light, or both to obtain images of the carrier object 102 and any other object in the fields of view of the cameras 104, 106. In this example, the cameras 104, 106 also capture depth information.
  • a camera 104, 106 may include a depth sensor.
  • a camera 104, 106 may be an RGB-D camera with an integrated depth sensor. Two-dimensional images and depth information may be related to each other by a predetermined relationship, which may be established during a pre-calibration at time of manufacture or factory testing of a camera 104, 106 or the system 100.
  • a camera 104, 106 may have intrinsic properties, such as focal length and principal point location, and an extrinsic transformation that describes a position and orientation of the camera 104, 106 in the world coordinate system.
  • a depth sensor included with the camera 104 may be pre-calibrated with intrinsic properties of a camera, such as focal length (e.g., fx, fy) and principal point location (e.g., cx, cy), and an extrinsic transformation between infrared and depth or an extrinsic transformation between color and depth.
  • a depth-relative translation and orientation may be pre-calculated and stored in the camera 104, 106. The spatial relationship between image information and depth information captured by the camera 104, 106 may be established and accessible.
  • a camera 104, 106 and related depth sensor may be separate components, and the relationship of two-dimensional images and depth information may be determined when the system 100 is manufactured, set up, or in operation.
  • Depth information may include two-dimensional coordinates each with a depth value indicative of a distance from the respective camera 104, 106.
  • a depth image may be converted to a set of three-dimensional coordinates in a world or other coordinate system. This may be referred to as a point cloud.
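  • The conversion of a depth image to a point cloud can be sketched as a pinhole back-projection using the intrinsic properties (fx, fy, cx, cy) mentioned earlier. This is only an illustrative sketch, not the application’s implementation; the depth-scale factor is a hypothetical value.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project an H x W depth image into an N x 3 point cloud in the camera's
    local coordinate system. depth_scale converts raw units (e.g., millimetres) to
    metres; pixels with zero depth are treated as invalid and dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) * depth_scale
    valid = z > 0
    x = (u - cx) * z / fx  # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy  # pinhole model: Y = (v - cy) * Z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)
```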
  • the carrier object 102 may have a surface 112 that is generally exposed to the fields of view of the cameras 104, 106, so that imagery and depth information of the carrier object 102 and surface 112 may be captured.
  • a locally non-repeating pattern 114 is disposed on the surface 112 of the carrier object 102.
  • the locally non-repeating pattern 114 may be provided on a sticker adhered to the surface 112, printed to a medium that is placed or affixed on the surface 112, etched/embossed into the surface 112, printed directly to the surface 112 (e.g., with ink or 3D printed as part of the surface), molded into the surface 112, or provided in a similar manner.
  • the locally non-repeating pattern 114 may be permanently or temporarily applied to the surface 112.
  • the locally non-repeating pattern is printed to a sheet of paper that is placed on the surface.
  • the locally non-repeating pattern 114 may include a texture, a curve, an arrangement of lines, an irregular polygon, a group of polygons, a group of circles, or similar pattern that has localized randomness or pseudo-randomness.
  • the pattern 114 may be selected to have a single resolvable 3D orientation in space over a range of viewpoints.
  • a suitable pattern 114 may be selected to appear unique at various scales and rotations.
  • the pattern 114 may be captured by a camera 104, 106 and then analyzed to determine the camera’s pose relative to the pattern 114.
  • When poses relative to the pattern are determined for multiple cameras 104, 106, the poses of the cameras 104, 106 relative to one another may be computed. That is, the pattern 114 acts as a constraint that allows camera-to-camera poses to be determined.
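  • A minimal sketch of that constraint, assuming each camera’s pose relative to the pattern is available as a 4×4 homogeneous transform (the matrix convention and names below are illustrative, not from the application):

```python
import numpy as np

def camera_to_camera_pose(T_cam1_from_pattern, T_cam2_from_pattern):
    """Given transforms that map pattern coordinates into each camera's frame,
    return the transform mapping camera-1 coordinates into camera-2 coordinates,
    i.e. the pose of camera 1 expressed relative to camera 2."""
    return T_cam2_from_pattern @ np.linalg.inv(T_cam1_from_pattern)
```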
  • the controller 108 may include a central processing unit (CPU), a microcontroller, a microprocessor, a processing core, a processor, a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or a similar device capable of executing instructions.
  • the controller 108 may cooperate with a non-transitory computer-readable medium that may be an electronic, magnetic, optical, or other physical storage device that encodes executable instructions.
  • the computer-readable medium may include, for example, random access memory (RAM), read-only memory (ROM), electrically-erasable programmable read-only memory (EEPROM), flash memory, a storage drive, an optical device, or similar.
  • the instructions 110, when executed by the controller 108, obtain first and second images 116, 118 of the carrier object 102 that include all or a portion of the locally non-repeating pattern 114.
  • the instructions 110 may trigger the first camera 104 to capture a first image 116 and a second camera 106 to capture a second image 118.
  • the instructions 110 obtain the locally non-repeating pattern 114 in the first and second images 116, 118.
  • Obtaining the locally non-repeating pattern 114 in an image 116, 118 may include determining a region of interest as a region in an image 116, 118 that contains all or part of the locally non-repeating pattern 114.
  • a filter may be applied to enhance the locally non-repeating pattern 114 in an image 116, 118.
  • the instructions 110 determine an alignment or relative pose of a camera 104, 106 with respect to another camera 104, 106 based on the locally non-repeating pattern 114 as obtained from the first and second images 116, 118.
  • Detecting the relative pose between cameras 104, 106 may include performing image registration to obtain a transformation that relates the pattern 114 as obtained from one image 116, 118 to the pattern 114 as obtained from another image 116, 118.
  • a change in orientation and displacement in six degrees of freedom may be determined, where such change makes the locally non-repeating pattern 114 in, for example, a second image 118 sufficiently overlap the pattern 114 as it appears in a first image 116.
  • the change is a transformation that defines the relative pose of the second camera 106 to the first camera 104.
  • Determining relative alignment of cameras 104, 106 may be performed in a pairwise manner. Registration of an image 116, 118 containing the locally non-repeating pattern 114 may be performed for each camera 104, 106 with respect to each other camera 104, 106. As such, the relative pose between each camera 104, 106 and each other camera 104, 106 may be determined.
  • Determining relative alignment of cameras 104, 106 may include global and/or local registration.
  • An iterative closest point (ICP) computation may be used.
  • Global or coarse registration may be performed using depth information as well as with reference to the locally non-repeating pattern 114 if the pattern 114 is also globally non-repeating.
  • Local or fine registration may be performed using depth information and image data of the locally non-repeating pattern 114.
  • a first pose for a first camera 104 may be determined relative to a second camera 106.
  • a second pose for the second camera 106 may be determined relative to the first camera 104.
  • the first and second poses define the same physical relationship.
  • the first and second poses may be combined to obtain one mutual relative pose between the cameras 104, 106.
  • If the second camera’s Z-axis orientation relative to the first camera is determined to be 25 degrees and the first camera’s Z-axis orientation relative to the second camera is determined to be -23 degrees, then these orientations may be blended to obtain 24 degrees and -24 degrees, respectively.
  • a second pose is used to check a first pose and the first pose is redetermined, or an error is raised, if the second pose differs from the first pose by a threshold amount.
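  • For a single orientation axis, the blending and consistency check described above might look like the following sketch, which reproduces the 25/-23 degree example; the discrepancy threshold is a hypothetical value.

```python
def blend_mutual_orientations(angle_2_wrt_1_deg, angle_1_wrt_2_deg, threshold_deg=5.0):
    """Blend two mutually inverse orientation estimates and check their consistency.
    Ideally the two values sum to zero; any residual is split between them."""
    discrepancy = abs(angle_2_wrt_1_deg + angle_1_wrt_2_deg)   # 25 + (-23) = 2 degrees
    if discrepancy > threshold_deg:
        raise ValueError(f"pose estimates disagree by {discrepancy:.1f} degrees")
    blended = (angle_2_wrt_1_deg - angle_1_wrt_2_deg) / 2.0    # (25 - (-23)) / 2 = 24
    return blended, -blended

print(blend_mutual_orientations(25.0, -23.0))  # (24.0, -24.0)
```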
  • Depth information may also be used to determine a relative alignment of cameras 104, 106.
  • Depth information of the locally non-repeating pattern 114 as obtained from an image 116, 118 may be used as a constraint in determining the transformation that defines the relative alignment.
  • Depth information may be obtained by a depth sensor, such as a depth sensor integral to the first and/or second camera 104, 106. A separate depth sensor may be used. The alignment between captured image data and depth data is known or may be determined using a suitable technique.
  • Depth information may be derived from stereoscopic analysis of image data obtained from the first and second cameras 104, 106.
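  • One way such stereoscopic depth could be derived is with block matching on a rectified pair, for example OpenCV’s semi-global matcher; the application does not prescribe an algorithm, and the parameters below are hypothetical.

```python
import numpy as np
import cv2  # assumption: OpenCV as one possible stereo implementation

def depth_from_stereo(left_gray, right_gray, fx, baseline_m):
    """Compute disparity from a rectified 8-bit grayscale stereo pair, then convert
    to depth via depth = fx * baseline / disparity."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0  # SGBM output is fixed-point
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = fx * baseline_m / disparity[valid]
    return depth
```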
  • a camera’s relative pose may be defined by the camera’s position in 3D space (e.g., Tx, Ty, Tz) and the camera’s orientation (e.g., Rx, Ry, Rz) relative to another camera’s position and orientation.
  • a camera’s pose may be determined and used in a calibration by computing and validating aligned 3D data of a captured object.
  • the calibration may be referred to as a field calibration, that is, a calibration that is performed after the system 100 is assembled and during operation of the system 100.
  • a nominal camera pose may be a pose expected when the system 100 is in use. Nominal camera poses may be chosen based on specific use cases, such as the type of target object to be scanned, lighting constraints, potential obstructions in a camera’s field of view, and the like. An actual pose may differ from a nominal pose due to various reasons, such as inaccurate setup of the system 100, movement of the camera 104, 106 over time, vibrations in the building that houses the system 100, and so on.
  • a pose of the camera 104, 106 may be stored by the controller 108 for use in a calibration.
  • the calibration may be applied to data extracted from images of a target object to obtain accurate 3D data for the target object, so that the target object may be modelled accurately.
  • the calibration may be used when reconstructing a scene on or over the carrier object 102.
  • relative camera pose is determined using the locally non-repeating pattern 114.
  • a relative camera pose may be used to compute absolute camera pose in world space.
  • Relative camera pose may be sufficient in many examples, such as in 3D scanning applications where the absolute orientation of a scanned object may not be desired.
  • relative camera poses, as determined herein, may be associated with a transform to world space. For example, a first camera may have its world-space pose determined and the first camera may be used as a datum for a second camera that has a relative pose with respect to the first camera.
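  • A minimal sketch of using the first camera as that datum (matrix conventions and names are illustrative):

```python
import numpy as np

def world_pose_of_second_camera(T_world_from_cam1, T_cam1_from_cam2):
    """If the first camera's world-space pose is known, the second camera's
    world-space pose follows by composing it with the relative pose."""
    return T_world_from_cam1 @ T_cam1_from_cam2
```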
  • FIG. 2 shows an example method 200 of determining relative camera pose using a locally non-repeating pattern.
  • the method 200 may be performed with any of the devices and systems described herein.
  • the method 200 may be embodied by a set of controller-executable instructions that may be stored in a non-transitory computer-readable medium.
  • the method begins at block 202.
  • a first image of a carrier object is obtained.
  • the first image may be captured by a first camera or obtained from another source, such as a memory or computer network.
  • the carrier object includes a surface on which a locally non-repeating pattern is present.
  • the first image contains a portion of or the entire locally non-repeating pattern.
  • a second image of the carrier object is obtained.
  • the second image may be captured by a second camera or obtained from another source, such as a memory or computer network.
  • the second image contains a portion of or the entire locally non-repeating pattern.
  • the first and second images are captured from different camera poses.
  • the first and second images may be visible light images, IR images, or a combination of such.
  • Blocks 204, 206 may be performed in any order or may be performed simultaneously.
  • the locally non-repeating pattern is obtained from the first image.
  • the locally non-repeating pattern is also obtained from the second image.
  • a filter may be applied to enhance the locally non-repeating pattern in an image.
  • a region of interest may be selected, where the locally non-repeating pattern is expected to be located in the region of interest.
  • detecting the locally non-repeating pattern does not rely on detecting specific features, such as markers, targets, encoded patterns (e.g., QR codes, ArUco codes, etc.), or similar.
  • the first image may be associated with first depth data.
  • First image data may be aligned with or registered to first depth data.
  • the second image may be associated with second depth data.
  • Second image data may be aligned with or registered to second depth data.
  • Blocks 208, 210 may be performed in any order or may be performed simultaneously. Blocks 208, 210 need not have knowledge of the specific locally non-repeating pattern used.
  • an alignment of the second camera with respect to the first camera, or vice versa, is determined based on the depth information and the locally non-repeating pattern as obtained from the first and second images. Registration with image data and depth data may be performed on the first and second images to align image data in 3D space so that the respective instances of the locally non-repeating pattern coincide.
  • the transform to attain such alignment may be an output of registration and may be taken as the relative pose of the second camera with respect to the first camera.
  • the inverse of the transform may be considered the relative pose of the first camera with respect to the second camera.
  • Blocks 208, 210, and 212 may be performed as part of the same image registration process.
  • a calibration may be generated with reference to the determined camera pose.
  • Homography information of one of the cameras (e.g., a relation between 2D image coordinates and 3D world coordinates) may be referenced when generating the calibration.
  • one camera’s pose may be considered to be the basis for a world coordinate system.
  • the calibration may map image coordinates captured by the cameras to world coordinates.
  • the calibration may be referenced when generating 3D data of a target object from captured 2D images.
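  • As an illustration of how such a calibration might be referenced, a pixel with an associated depth value could be mapped into the shared coordinate system using the intrinsics and the determined camera pose. The function and names below are illustrative, not taken from the application.

```python
import numpy as np

def pixel_to_world(u, v, z, fx, fy, cx, cy, T_world_from_cam):
    """Back-project pixel (u, v) with depth z into the camera frame, then transform
    it into the world/shared frame using the calibrated camera pose."""
    p_cam = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z, 1.0])
    return (T_world_from_cam @ p_cam)[:3]
```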
  • the method 200 ends at block 214.
  • the method 200 may be performed prior to the capture of a desired target object.
  • the method 200 may be repeated continuously, regularly, or periodically while a system is in operation, provided that the carrier object bearing the locally non-repeating pattern is within the scene.
  • FIG. 3 shows an example method 300 of determining relative camera pose using a locally non-repeating pattern and depth information.
  • the method 300 may be performed with any of the devices and systems described herein.
  • the method 300 may be embodied by a set of controller-executable instructions that may be stored in a non-transitory computer-readable medium. The method begins at block 302.
  • images of a carrier object bearing a locally non-repeating pattern are captured.
  • Images are captured from cameras having different poses.
  • a first image may be captured by a first camera and a second image may be captured by a second camera, where the second camera has a different position and orientation to the first camera and has an overlapping field of view with the first camera.
  • the captured images contain a portion of or the entire locally non-repeating pattern.
  • the images may be visible light images, IR images, or a combination of such.
  • depth information of the carrier object is captured.
  • Depth information may be obtained by the same device as an image, such as with an RGB-D camera. Depth information may be obtained for any of the images captured. The depth information related to an image is aligned to that image.
  • the depth data is preprocessed. Noise may be reduced or removed. This may include averaging depth data over time, filtering depth data (e.g., with a bilateral filter), or similar operations to reduce the effects of environmental or sensor-related error. Depth data may be cleaned in the spatial domain, time domain, or both.
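  • A possible realization of that preprocessing, assuming temporal averaging over a short burst of frames followed by an OpenCV bilateral filter (the application does not mandate these particular operations, and the filter parameters are hypothetical):

```python
import numpy as np
import cv2  # assumption: OpenCV for the bilateral filter

def preprocess_depth(depth_frames):
    """Average depth frames over time, ignoring zero (invalid) samples, then apply
    a bilateral filter to smooth noise while preserving depth edges."""
    stack = np.stack(depth_frames).astype(np.float32)
    counts = np.maximum((stack > 0).sum(axis=0), 1)
    averaged = np.where((stack > 0).any(axis=0), stack.sum(axis=0) / counts, 0.0)
    return cv2.bilateralFilter(averaged.astype(np.float32), d=5, sigmaColor=30.0, sigmaSpace=5.0)
```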
  • the captured color or IR image is enhanced. This may include applying a filter, such as a sharpness filter or edge detection filter, adjusting contrast or brightness, or similar.
  • the image data and related depth data are combined.
  • the image data may be registered to the depth data using a suitable technique.
  • An association of depth information and image information of the locally non-repeating pattern is obtained for each captured visible/IR image.
  • each association of depth and image information is converted to a point cloud that contains image information of the locally non-repeating pattern. That is, x, y coordinates of depth and image information are converted to a point cloud.
  • a point cloud image may include color/IR information of the locally non-repeating pattern at x, y, z coordinates in a 3D coordinate system, such as the local coordinate system of the camera that provided the image and depth information.
  • global registration may be performed with the point clouds.
  • a region of interest of a point cloud may be extracted to remove unwanted information and registration may be performed on regions of interest.
  • Global registration may reference depth information of the point clouds.
  • Global registration may also reference image information of the point clouds, where such image information is obtained from the locally non-repeating pattern if the pattern is also globally non-repeating.
  • Local registration is performed with the point clouds and a region of interest may be considered.
  • Local registration may reference both depth information of the point clouds as well as image information of the point clouds, where such image information is obtained from the locally non-repeating pattern.
  • Initial pose data may be used as a starting point for registration.
  • Initial pose data may be determined at time of manufacture, such as for a system that is factory assembled. Initial pose data may be from an earlier determination of pose data. When suitable initial pose data is available, global registration may be skipped if the initial pose data provides similar coarse alignment as global registration.
  • Registration at blocks 316, 318 may be based on an iterative alignment technique, such as ICP, with color/IR information as a constraint.
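  • Blocks 316 and 318 could be realized, for example, with a colored-ICP registration such as the one provided by Open3D. The sketch below rests on that assumption; the correspondence distance and normal-estimation radius are hypothetical values that depend on scene scale (metres assumed).

```python
import numpy as np
import open3d as o3d  # assumption: Open3D as one realization of ICP with a color constraint

def register_pair(points_src, colors_src, points_dst, colors_dst, init=np.eye(4)):
    """Fine registration of two colored point clouds; `init` may come from stored
    initial pose data or from a coarse/global registration step."""
    def to_cloud(points, colors):
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(points)
        pcd.colors = o3d.utility.Vector3dVector(colors)  # per-point RGB in [0, 1]
        pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
        return pcd

    src, dst = to_cloud(points_src, colors_src), to_cloud(points_dst, colors_dst)
    result = o3d.pipelines.registration.registration_colored_icp(src, dst, 0.01, init)
    return result.transformation  # 4x4 transform taken as the relative pose of the source camera
```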
  • Registration may be performed in a pairwise manner. Pairs of point clouds from different pairs of cameras may be registered until sufficient combinations have been processed. Sufficient combinations may be a set of combinations that allows each camera to compute its pose relative to each other camera by way of a transformation or series of transformations.
  • the method 300 ends at block 320.
  • the method 300 may be performed prior to the capture of a desired target object.
  • the method 300 may be repeated continuously, regularly, or periodically while a system is in operation, provided that the carrier object bearing the locally non-repeating pattern is within the scene.
  • pairs of point clouds may be selected based on degree of overlap of fields of view. For example, in a system with three cameras 400, 402, 404, there are three possible unique pairings. Cameras 400 and 402 may have 60% overlap of field of view, cameras 402 and 404 may have 50% overlap of field of view, and cameras 400 and 404 may have 20% overlap of field of view. Accordingly, registration may be performed for the two pairs having the greatest overlap: cameras 400 and 402, and cameras 402 and 404. The point clouds obtained from image and depth data captured by these pairs of cameras may undergo registration to obtain respective transformations.
  • a transformation obtained from the registration of the point clouds of cameras 400 and 402 resolves the relative pose 410 of camera 400 to camera 402.
  • a transformation obtained from the registration of the point clouds of cameras 402 and 404 resolves the relative pose 414 of camera 404 to camera 402.
  • the relative pose of camera 404 to camera 400 is not computed, as both cameras 400 and 404 have relative poses with respect to the pose 412 of camera 402. In this way, error in pose determination that may stem from attempts to register point clouds with little overlap may be avoided.
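  • A minimal sketch of resolving the pose of camera 404 relative to camera 400 by chaining through camera 402 rather than registering the low-overlap pair directly (names and matrix conventions are illustrative):

```python
import numpy as np

def pose_404_with_respect_to_400(T_402_from_400, T_402_from_404):
    """T_402_from_400 maps camera-400 coordinates into camera-402 coordinates (pose 410);
    T_402_from_404 does the same for camera 404 (pose 414). Chaining them yields the
    transform from camera-400 coordinates into camera-404 coordinates."""
    return np.linalg.inv(T_402_from_404) @ T_402_from_400
```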
  • FIG. 5 shows an example system 500.
  • the system 500 is similar to the system 100 and only differences will be described in detail.
  • the system 100 may be referenced for further description.
  • the system 500 is to capture images of a user’s feet to generate 3D models of the feet to allow for customized orthopedic footwear to be created for the user.
  • a locally non-repeating pattern is provided to serve the dual purpose of relative camera pose determination and user communication. Communication to a user may be by way of human-intelligible information that is inherent to the non-repeating pattern.
  • the system 500 may include a plurality of cameras 502 secured together by respective arms 504.
  • the arms 504 position the cameras 502 around and above a planar platform 506.
  • the cameras 502 are aimed centrally downwards toward the platform.
  • the cameras 502 include depth sensors and may be RGB-D cameras.
  • the platform 506 may include guide markings 512, such as an outline of a foot, to inform the user where to stand to have their feet scanned.
  • Each guide marking 512 may be in the shape of a foot and may also be shaped as a locally non-repeating pattern.
  • a guide marking 512 communicates to the user (i.e., where to stand) and provides a locally non-repeating pattern for relative camera pose determination.
  • the guide marking 512 serves a dual purpose with one cohesive shape.
  • the system 500 may further include a controller 520 and memory 522, such as a computer-readable medium, connected to the controller 520.
  • the memory 522 may store executable instructions 524 to carry out functionality described herein.
  • the memory 522 may store relevant data 526, such as relative camera poses, calibration data, and the like.
  • the controller 520 may be connected to the cameras 502 and may execute the instructions 524 to capture images and depth data of the platform 506.
  • the instructions 524 may further determine a relative camera pose for pairs of cameras 502, as discussed elsewhere herein.
  • FIG. 1 shows an example curved locally non-repeating pattern.
  • FIG. 5 shows another example curved locally non-repeating pattern.
  • FIGs. 6A-6C show additional example locally non-repeating patterns.
  • FIG. 6A shows a grouping of lines irregularly angled with respect to each other, and which may have a spacing that is sufficiently low so as to be considered a texture.
  • FIG. 6B shows a pattern of overlapping circles.
  • FIG. 6C shows a grouping of curved areas that may have color or texture.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

An example system includes a first camera to capture first image data and first depth data of an object that includes a locally non-repeating pattern, a second camera to capture second image data and second depth data of the object that includes the locally non-repeating pattern, and a controller to register the second image data and the second depth data to the first image data and the first depth data to obtain a pose of the second camera relative to the first camera.
PCT/US2020/015960 2020-01-30 2020-01-30 Camera pose determinations with patterns WO2021154269A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2020/015960 WO2021154269A1 (fr) 2020-01-30 2020-01-30 Camera pose determinations with patterns

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2020/015960 WO2021154269A1 (fr) 2020-01-30 2020-01-30 Camera pose determinations with patterns

Publications (1)

Publication Number Publication Date
WO2021154269A1 true WO2021154269A1 (fr) 2021-08-05

Family

ID=77078419

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/015960 WO2021154269A1 (fr) 2020-01-30 2020-01-30 Déterminations de pose de caméra avec motifs

Country Status (1)

Country Link
WO (1) WO2021154269A1 (fr)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120063672A1 (en) * 2006-11-21 2012-03-15 Mantis Vision Ltd. 3d geometric modeling and motion capture using both single and dual imaging
US10033985B2 (en) * 2007-05-22 2018-07-24 Apple Inc. Camera pose estimation apparatus and method for augmented reality imaging
US20160260260A1 (en) * 2014-10-24 2016-09-08 Usens, Inc. System and method for immersive and interactive multimedia generation

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100276A (zh) * 2022-05-10 2022-09-23 Beijing Zitiao Network Technology Co., Ltd. Method and apparatus for processing display images of a virtual reality device, and electronic device
US20240037784A1 (en) * 2022-07-29 2024-02-01 Inuitive Ltd. Method and apparatus for structured light calibaration

Similar Documents

Publication Publication Date Title
CN110276808B (zh) Method for measuring the unevenness of a glass plate using a single camera combined with a two-dimensional code
JP6507730B2 (ja) Coordinate conversion parameter determination device, coordinate conversion parameter determination method, and computer program for coordinate conversion parameter determination
US8090194B2 (en) 3D geometric modeling and motion capture using both single and dual imaging
KR101791590B1 (ko) Object pose recognition apparatus and object pose recognition method using the same
JP5839929B2 (ja) Information processing apparatus, information processing system, information processing method, and program
CN104335005B (zh) 3D scanning and positioning system
KR101901586B1 (ko) Robot position estimation apparatus and method
US20170011524A1 (en) Depth mapping based on pattern matching and stereoscopic information
CN112097689A (zh) Calibration method for a 3D structured light system
EP2437026A1 (fr) Procédé et système pour produire une estimation tridimensionnelle et de distance inter-planaire
Takimoto et al. 3D reconstruction and multiple point cloud registration using a low precision RGB-D sensor
US20130127998A1 (en) Measurement apparatus, information processing apparatus, information processing method, and storage medium
US20160044301A1 (en) 3d modeling of imaged objects using camera position and pose to obtain accuracy with reduced processing requirements
WO2008062407B1 (fr) Modélisation géométrique en 3d et création de contenu vidéo en 3d
JP7339316B2 (ja) System and method for simultaneously considering edges and normals in image features by a vision system
KR20160003776A (ko) Pose estimation method and robot
US7140544B2 (en) Three dimensional vision device and method, and structured light bar-code patterns for use in the same
WO2021154269A1 (fr) Camera pose determinations with patterns
EP3069100A2 (fr) Homogenization and collimation system for a light-emitting diode luminaire
JP6282377B2 (ja) Three-dimensional shape measurement system and measurement method therefor
CN113505626A (zh) Rapid three-dimensional fingerprint acquisition method and system
JP2007508557A (ja) Device for scanning three-dimensional objects
CN102881040A (zh) Three-dimensional reconstruction method using moving capture with a digital camera
Kruger et al. In-factory calibration of multiocular camera systems
US20220335649A1 (en) Camera pose determinations with depth

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20916968

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20916968

Country of ref document: EP

Kind code of ref document: A1