WO2020076026A1 - Method for acquiring a three-dimensional object using artificial lighting photography, and device therefor - Google Patents

Method for acquiring a three-dimensional object using artificial lighting photography, and device therefor

Info

Publication number
WO2020076026A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
images
geometry
reflectance
reconstructing
Prior art date
Application number
PCT/KR2019/013099
Other languages
English (en)
Korean (ko)
Inventor
Min H. Kim (김민혁)
Giljoo Nam (남길주)
Original Assignee
Korea Advanced Institute of Science and Technology (KAIST, 한국과학기술원)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020190120410A external-priority patent/KR102287472B1/ko
Application filed by Korea Advanced Institute of Science and Technology (KAIST)
Priority to EP19813220.1A priority Critical patent/EP3664039B1/fr
Priority to US16/622,234 priority patent/US11380056B2/en
Publication of WO2020076026A1 publication Critical patent/WO2020076026A1/fr

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 1/00 - General purpose image data processing
    • G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining
    • G06T 15/00 - 3D [Three-Dimensional] image rendering
    • G06T 15/50 - Lighting effects
    • G06T 15/506 - Illumination models
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G06T 7/55 - Depth or shape recovery from multiple images
    • G06T 7/586 - Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo

Definitions

  • The present invention relates to three-dimensional (3D) object acquisition technology, and more specifically to a method and apparatus for measuring or obtaining the shape information and the surface reflectance information (bidirectional reflectance distribution function, BRDF) of a 3D object from photographs or images taken under artificial lighting, for example with the camera flash turned on.
  • SVBRDF: spatially-varying bidirectional reflectance distribution function.
  • One existing technique for capturing reflectance from known 3D geometry introduced a method of capturing the SVBRDF of known 3D objects using clustered basis reflectances, where the per-texel reflectance is progressively refined by linear blending. Another existing technique proposed an SVBRDF acquisition method that jointly optimizes the reflectance basis and blend weights for known 3D geometry, finding the smallest reflectance basis and then smoothly blending its elements.
  • However, these techniques require a commercial 3D scanner to accurately capture the input 3D geometry.
  • Among existing techniques for reflectance capture limited to 2D planar geometry, one is an efficient SVBRDF capture method that uses an LCD screen and a camera to constrain the range of angular reflectance samples, and another is a two-shot flash/no-flash acquisition method for the specific case of stationary materials, from which a larger region can be synthesized from a small reconstruction.
  • Another existing technique uses a smartphone camera from various viewpoints to capture the appearance of nearly planar objects, with the light source providing active illumination from which normals and reflectance are estimated.
  • Another existing technique proposed a portable system consisting of a smartphone camera, a handheld linear light source, and a custom BRDF chart; it takes a short video of the target object alongside the BRDF chart while moving the handheld light tube, and recovers the SVBRDF as a linear combination of reference BRDFs.
  • Another existing technique presents an acquisition setup similar to a method focused on capturing the reflectance of planar art paintings.
  • Other acquisition systems that capture high-quality SVBRDFs of flat surfaces rely on more sophisticated hardware, for example computer-controlled LED lights arranged in a small dome around the sample to obtain both reflectance and normals at microscopic scale simultaneously, or systems with many different light-camera combinations, linear light-source reflectance measurements, condenser lenses, or a 4-axis spherical gantry for sampling. These acquisition methods are limited to near-flat objects.
  • Shading normals are often used to improve geometric detail under the assumption that the reflectance is purely diffuse over the entire object.
  • One existing technique for 3D reconstruction assuming only diffuse reflection first obtains a base structure estimated with structure-from-motion (SfM) and multiview stereo (MVS), and then, under the diffuse-reflection assumption, updates the geometry using the estimated surface normals.
  • Another existing technique uses KinectFusion to obtain a signed distance function of the surface, and refines the distance function using surface shading cues.
  • Another existing technique uses two mobile devices, one as a camera and one as a light source, and takes multiple images from a fixed viewpoint under various light directions to reconstruct the surface via photometric stereo.
  • Other recent methods, based on SfM and MVS technologies, have further demonstrated the use of smartphone cameras to capture the 3D shape of objects or large scenes. None of these methods can recover SVBRDF information, since they assume that the surface reflectance of the reconstructed object is purely diffuse.
  • One existing technique builds a structure resembling an LED arm that orbits rapidly to generate continuous spherical illumination in a harmonic pattern, and another existing technique uses two mechanical arms with a phase-shift pattern for 3D geometry.
  • A spherical gantry with a projector-camera pair has also been built, and similar dome structures with several cameras have been presented.
  • These structures use structured-light patterns for 3D geometry and represent reflectance with bidirectional texture functions (BTF).
  • One existing technique further estimates the SVBRDF on top of 3D geometry by building a multi-light device consisting of 72 LED lights on a circular board and obtaining the 3D geometry by combining SfM and photometric stereo, but it does not use this information to refine the geometry or the surface normals.
  • Another existing technique solves the inverse rendering problem by capturing a light probe to estimate an environment map of the spherical illumination.
  • Another existing technique uses a mechanical rotating stage to capture a video sequence of over a thousand frames, which requires dense sampling with at least two distinct changes of illumination per vertex.
  • Another existing technique relies on spherical illumination or the IR illumination of a depth camera, as well as on depth information obtained with KinectFusion.
  • An SVBRDF acquisition method has also been proposed that captures the polarized appearance of diffuse and specular reflections using high-resolution normals that rely on an input geometry from structured lighting.
  • An object of the present invention is to provide a method and apparatus for measuring or obtaining the shape information and the surface reflectance information (BRDF) of a three-dimensional object from photographs or images taken under artificial lighting, for example with the flash turned on.
  • A 3D object acquisition method according to one embodiment includes: receiving a plurality of images of a 3D object photographed by a camera; reconstructing spatially-varying surface reflectance information (SVBRDF) for the 3D object based on the received plurality of images; estimating shading normals for the 3D object based on the reconstructed spatially-varying surface reflectance information; and obtaining 3D geometry for the 3D object based on the estimated shading normals.
  • The reconstructing step may estimate base surface reflectance information for the 3D object based on the received plurality of images, and reconstruct the spatially-varying surface reflectance information based on the estimated base surface reflectance information and a spatially-varying weight map.
  • The method may further include acquiring a base geometry of the 3D object based on external parameters of the camera and the plurality of images, and the reconstructing step may reconstruct the spatially-varying surface reflectance information based on the plurality of images and the base geometry.
  • The reconstructing step and the estimating step may reconstruct the spatially-varying surface reflectance information and estimate the shading normals using an inverse rendering technique that satisfies a preset image formation model for the camera.
  • The acquiring step may acquire the three-dimensional geometry using a Poisson surface reconstruction technique.
  • The reconstructing step may update the spatially-varying surface reflectance information based on the obtained 3D geometry and the plurality of images, the estimating step may update the shading normals based on the updated spatially-varying surface reflectance information, and the acquiring step may update the 3D geometry based on the updated shading normals.
  • The receiving step may receive a plurality of images of the 3D object photographed while artificial lighting is on.
  • The method may further include: obtaining a 3D point cloud for the 3D object from the plurality of images and the camera poses using multiview stereo; generating a base mesh for the 3D object based on the 3D point cloud and a Poisson surface reconstruction technique; and subdividing the base mesh to obtain an initial geometry for the 3D object, and the reconstructing step may reconstruct the spatially-varying surface reflectance information based on the plurality of images and the initial geometry.
  • A 3D object acquisition method according to another embodiment includes: receiving a plurality of images of a 3D object photographed by a camera while artificial lighting is on; and acquiring spatially-varying surface reflectance information and 3D geometry for the 3D object based on the received plurality of images and a Poisson surface reconstruction technique.
  • The acquiring may include: reconstructing the spatially-varying surface reflectance information based on the received plurality of images; estimating shading normals for the 3D object based on the reconstructed spatially-varying surface reflectance information; and acquiring the 3D geometry based on the estimated shading normals and the Poisson surface reconstruction technique.
  • A 3D object acquisition apparatus according to one embodiment includes: a receiver configured to receive a plurality of images of a 3D object photographed by a camera; a reconstruction unit configured to reconstruct spatially-varying surface reflectance information for the 3D object based on the received plurality of images; an estimator configured to estimate shading normals for the 3D object based on the reconstructed spatially-varying surface reflectance information; and an acquisition unit configured to acquire 3D geometry for the 3D object based on the estimated shading normals.
  • The reconstruction unit may estimate base surface reflectance information for the 3D object based on the received plurality of images, and reconstruct the spatially-varying surface reflectance information based on the estimated base surface reflectance information and a spatially-varying weight map.
  • The reconstruction unit may acquire the base geometry of the 3D object based on the camera's external parameters and the plurality of images, and reconstruct the spatially-varying surface reflectance information based on the plurality of images and the base geometry.
  • The reconstruction unit and the estimator may reconstruct the spatially-varying surface reflectance information and estimate the shading normals using an inverse rendering technique that satisfies a preset image formation model for the camera.
  • The acquisition unit may acquire the 3D geometry using a Poisson surface reconstruction technique.
  • The reconstruction unit may update the spatially-varying surface reflectance information based on the obtained 3D geometry and the plurality of images, the estimator may update the shading normals based on the updated spatially-varying surface reflectance information, and the acquisition unit may update the 3D geometry based on the updated shading normals.
  • The receiver may receive a plurality of images of the 3D object photographed while artificial lighting is on.
  • The reconstruction unit may obtain a 3D point cloud for the 3D object from the plurality of images and the camera poses using multiview stereo, generate a base mesh for the 3D object based on the 3D point cloud and a Poisson surface reconstruction technique, subdivide the base mesh to obtain an initial geometry for the 3D object, and reconstruct the spatially-varying surface reflectance information based on the plurality of images and the initial geometry.
  • A 3D object acquisition method according to yet another embodiment includes: receiving a plurality of images of a 3D object taken by a camera; reconstructing spatially-varying surface reflectance information for the 3D object based on the received plurality of images; and acquiring 3D geometry for the 3D object based on the reconstructed spatially-varying surface reflectance information.
  • The reconstructing may reconstruct the spatially-varying surface reflectance information for the 3D object based on the received plurality of images and depth information for the 3D object.
  • The present invention provides a technique required for highly realistic rendering of 3D models, and can therefore be applied to virtual reality and augmented reality technologies and to various applications based on them.
  • FIG. 1 is a flowchart illustrating an operation of a 3D object acquisition method according to an embodiment of the present invention.
  • FIG. 2 is an exemplary diagram illustrating the outline of the 3D object acquisition method according to the present invention.
  • FIG. 3 is an exemplary diagram of (a) the Rusinkiewicz parameterization of the light and camera and (b) the geometry of the smartphone setup of the present invention.
  • FIG. 4 illustrates an exemplary diagram for explaining a process of updating a 3D geometry.
  • FIG. 5 shows an exemplary diagram for a comparison result of the Nehab method and the screened Poisson reconstruction method for the unstructured capture setup of the present invention.
  • FIG. 6 illustrates a configuration of a 3D object acquisition device according to an embodiment of the present invention.
  • Embodiments of the present invention measure the shape information and surface reflectance information (BRDF) of a 3D object, which previously could be measured only with special equipment, using an ordinary flash-equipped camera.
  • The present invention provides a simple yet powerful framework that eliminates the need for expensive dedicated hardware, so that the shape information and the SVBRDF information of a real 3D object can be measured simultaneously with a single camera with a built-in flash.
  • By eliminating the diffuse-reflection assumption and exploiting SVBRDF information, the present invention can provide high-quality 3D geometry reconstruction that includes more accurate high-frequency detail than cutting-edge multiview stereo techniques.
  • The present invention formulates the joint reconstruction of SVBRDFs, shading normals, and 3D geometry as a multi-stage inverse-rendering pipeline, and can also be applied directly on top of existing multiview 3D reconstruction techniques.
  • The joint reconstruction can be performed in a multi-stage, iterative optimization inverse-rendering pipeline that progressively improves the 3D-to-2D correspondence, leading to high-quality reconstruction of both the SVBRDF and the 3D geometry.
  • FIG. 1 is a flowchart illustrating an operation of a 3D object acquisition method according to an embodiment of the present invention.
  • Referring to FIG. 1, the 3D object acquisition method according to one embodiment receives a plurality of images of a 3D object photographed by a camera while artificial lighting is turned on (S110).
  • Step S110 may receive a plurality of images, for example photographs, taken at two different exposures.
  • When the plurality of images photographed under artificial lighting is received in step S110, spatially-varying surface reflectance information (SVBRDF) for the 3D object is reconstructed based on the received plurality of images (S120).
  • Step S120 may estimate base surface reflectance information (basis BRDFs) for the 3D object based on the received plurality of images, and reconstruct the SVBRDF based on the estimated basis BRDFs and a spatially-varying weight map.
  • Shading normals for the 3D object are then estimated based on the reconstructed SVBRDF, and 3D geometry for the 3D object is obtained based on the estimated shading normals (S130, S140).
  • Step S140 may acquire the 3D geometry using a Poisson surface reconstruction technique.
  • After step S140, it is determined whether the error has converged; if not, the update process is repeated to update the SVBRDF, the shading normals, and the 3D geometry (S150, S160), as sketched in the code below.
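  • The following Python sketch illustrates the alternating loop of steps S110 to S160. The helper callables (SVBRDF reconstruction, shading-normal estimation, Poisson-based geometry update, and photometric test error) are hypothetical placeholders for the operations described in this document, not an implementation of the invention.

```python
def acquire_3d_object(images, poses, init_geometry,
                      reconstruct_svbrdf, estimate_normals,
                      poisson_update, test_error, max_iters=10):
    """Minimal sketch of the iterative pipeline in FIG. 1 (assumed structure).

    The four callables are stand-ins for steps S120-S150; they are passed in
    rather than implemented here.
    """
    geometry = init_geometry           # from SfM + MVS + screened Poisson (initialization)
    prev_err = float("inf")
    svbrdf = normals = None
    for _ in range(max_iters):
        svbrdf = reconstruct_svbrdf(images, poses, geometry)          # S120
        normals = estimate_normals(images, poses, geometry, svbrdf)   # S130
        geometry = poisson_update(geometry, normals)                  # S140
        err = test_error(images, poses, geometry, svbrdf)             # S150
        if err >= prev_err:            # stop once the held-out error stops improving
            break                      # S160: otherwise repeat the updates
        prev_err = err
    return svbrdf, normals, geometry
```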
  • Although not shown in FIG. 1, the method may further include obtaining a 3D point cloud for the 3D object from the plurality of images and the camera poses using multiview stereo, generating a base mesh for the 3D object based on the 3D point cloud and a Poisson surface reconstruction technique, and subdividing the base mesh to obtain an initial geometry for the 3D object; step S120 may then reconstruct the SVBRDF based on the plurality of images and the initial geometry.
  • Table 1 below defines the variables used in the present invention.
  • Figure 2 shows an exemplary view for explaining the outline of the three-dimensional object acquisition method according to the present invention.
  • the method of the present invention uses an artificial lighting image photographed by a general camera, for example, a handheld camera.
  • The present invention can be formulated as acquiring a set of basis BRDFs F_b and the SVBRDF information F defined by a corresponding weight map W, together with the shading normals N and the 3D geometry X.
  • In the initialization step, the present invention acquires the external parameters of the camera and a rough base geometry using off-the-shelf 3D reconstruction techniques, including structure-from-motion (SfM), multiview stereo (MVS), and mesh reconstruction.
  • the present invention reconstructs SVBRDF information simultaneously while improving the reconstructed 3D geometry using an iterative step.
  • the iterative step begins with an inverse rendering step aimed at obtaining the first approximation for W, F b and N.
  • That is, the method according to the present invention reconstructs the SVBRDF information F and estimates the shading normals N, which depend on the SVBRDF; having obtained W, F_b and N, it then feeds the estimated shading normals N into a Poisson surface reconstruction to deform and update the geometry.
  • the present invention repeats the inverse-rendering process and the geometry reconstruction process until a predetermined error converges.
  • For calibration, the present invention only needs to estimate the position of the flash relative to the camera and the optical parameters of the camera, and this estimation needs to be performed only once.
  • For this, the present invention may use multiple chrome-ball images and a checkerboard image.
  • The present invention uses a commercial camera with a built-in flash to capture a set of flash photographs as input. Since the dynamic range of commercial cameras is insufficient to capture detailed specular reflections under flash lighting, the exposure time Δt or the flash intensity Δg is varied; for example, for a mobile phone with a fixed amount of flash light, the exposure time is varied. Here, the flash intensity may correspond to the EV number of a DSLR camera.
  • The image formation model for the pixel position u can be formulated as <Equation 1> below.
  • Here, I(u) denotes the captured image value,
  • and L(o; x) denotes the outgoing radiance emitted from the point x on the 3D geometry in the view direction o.
  • The radiance captured at the point x can be formulated with a reflectance equation as shown in <Equation 2> below.
  • Here, f(i, o; x, n) denotes the reflectance function at the point x,
  • n denotes the surface normal vector,
  • and L(-i; x) denotes the incident radiance at x for the light vector i.
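  • The bodies of Equations 1 and 2 are not reproduced in this text. From the definitions above (captured image I(u), exposure time Δt, flash intensity Δg, outgoing radiance L(o; x), reflectance f, normal n, incident radiance L(-i; x)), they plausibly take the following standard forms; this is a reconstruction from context, not the patent's exact notation.

```latex
% Plausible form of Equation 1: image formation for pixel u, with the
% exposure time \Delta t and flash intensity \Delta g scaling the radiance.
I(\mathbf{u}) \;\approx\; \Delta t \, \Delta g \, L(\mathbf{o}; \mathbf{x})

% Plausible form of Equation 2: reflectance equation for a single point
% light (the flash), with f the BRDF and n the surface normal.
L(\mathbf{o}; \mathbf{x}) \;=\; f(\mathbf{i}, \mathbf{o}; \mathbf{x}, \mathbf{n}) \,
L(-\mathbf{i}; \mathbf{x}) \, (\mathbf{n} \cdot \mathbf{i})
```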
  • The present invention acquires the internal parameters of the camera using any method capable of acquiring internal parameters.
  • Various calibration methods may be used for this, and since such methods are obvious to those skilled in the art, a detailed description is omitted.
  • The initial relationship between a pixel u ∈ R^2 and the captured surface point x ∈ R^3 is described by a perspective projection matrix in R^3×3.
  • Because the photographs are captured without any supporting structure, the external parameters [R|t] ∈ R^3×4, which define the relationship between the surface points and the camera for each photograph, are obtained using SfM.
  • In addition, the present invention can progressively update the internal parameters, such as the focal length of each picture, using SfM for a more accurate geometric correspondence, correcting the focus-breathing effect caused by autofocus.
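  • To make the projection relationship concrete, the sketch below projects a surface point into a pixel using an intrinsic matrix K ∈ R^3×3 and extrinsics [R|t] ∈ R^3×4. The symbol K and the numeric values are illustrative assumptions, not values from the patent.

```python
import numpy as np

def project_point(x_world, K, R, t):
    """Project a 3D surface point x (in R^3) to pixel coordinates u (in R^2).

    K    : (3, 3) intrinsic (perspective projection) matrix
    R, t : (3, 3) rotation and (3,) translation from SfM, i.e. [R|t] in R^{3x4}
    """
    x_cam = R @ x_world + t        # world -> camera coordinates
    uvw = K @ x_cam                # camera -> homogeneous image coordinates
    return uvw[:2] / uvw[2]        # perspective divide -> pixel position u

# Illustrative example with made-up calibration values.
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.5])
print(project_point(np.array([0.02, -0.01, 1.0]), K, R, t))
```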
  • The present invention acquires a high-density 3D point cloud using MVS from the captured images and camera poses.
  • However, this initial point cloud generally violates the diffuse-texture assumption underlying the MVS method, due to the high-frequency noise caused by the specular reflections produced by the flash lighting.
  • Therefore, the present invention uses screened Poisson surface reconstruction to reduce this noise and produce a low-resolution mesh, e.g. a mesh on a 2^7 voxel grid.
  • The present invention then subdivides the low-resolution mesh to obtain a finer mesh, e.g. on a 2^10 grid, which is used as the initial geometry.
  • At this point, fine geometric details may be missing, because they are removed together with the noise in the Poisson reconstruction step.
  • The present invention recovers these details using an iterative geometry update algorithm.
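  • As an illustration of the initialization just described (MVS point cloud, coarse screened Poisson mesh, then subdivision), the sketch below uses Open3D; the octree depth and subdivision count are chosen to roughly mirror the 2^7 and 2^10 grids mentioned above, and the exact behavior of these calls should be checked against the installed Open3D version.

```python
import open3d as o3d

def initial_geometry(point_cloud_path):
    # Dense but noisy MVS point cloud; normals are needed for Poisson reconstruction.
    pcd = o3d.io.read_point_cloud(point_cloud_path)
    pcd.estimate_normals()

    # Screened Poisson reconstruction at a coarse octree depth (roughly a 2^7 grid)
    # suppresses the specular high-frequency noise, at the cost of fine detail.
    base_mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=7)

    # Subdivide the coarse base mesh (toward roughly a 2^10 grid) so that the later
    # shading-normal-driven updates have enough vertices to carry fine detail.
    return base_mesh.subdivide_midpoint(number_of_iterations=3)
```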
  • The present invention satisfies the image formation model of Equation 2 by finding, for each observation, the set of two unknowns {f(i_p,k, o_p,k; x_p, n_p), n_p} that minimizes an objective function.
  • This inverse-rendering problem can be formulated as <Equation 3> below.
  • Here, Λ_p,k denotes the visibility function of the vertex x_p in image k.
  • The inverse rendering problem of factoring reflectance and shading is severely ill-posed and underdetermined.
  • Therefore, the present invention uses an iterative optimization method that updates the four unknowns W, F_b, N and X until the rendering result matches the input images.
  • To avoid overfitting, the present invention separates the input photographs into disjoint training and test data sets, so that the optimized parameters are tested on unused data.
  • The present invention reconstructs the entire space of the SVBRDF, and obtains the normals using this information.
  • The reconstructed SVBRDF F can be formulated as <Equation 4> below.
  • Here, W = {w_p,b} denotes the set of per-point blending weights.
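  • The bodies of Equations 3 and 4 are likewise not reproduced here. From the definitions above (per-observation visibility, exposure and flash factors, basis BRDFs f_b and per-point weights w_p,b), they plausibly take the following forms, reconstructed from context rather than copied from the patent.

```latex
% Plausible form of Equation 3: photometric objective over all images k and
% surface points x_p, weighted by a visibility indicator \Lambda_{p,k}.
\min_{f,\,\mathbf{n}} \sum_{k} \sum_{p} \Lambda_{p,k}
\Bigl\| I_k(\mathbf{u}_p) - \Delta t \, \Delta g \,
  f(\mathbf{i}_{p,k}, \mathbf{o}_{p,k}; \mathbf{x}_p, \mathbf{n}_p) \,
  L(-\mathbf{i}_{p,k}; \mathbf{x}_p) \, (\mathbf{n}_p \cdot \mathbf{i}_{p,k}) \Bigr\|^2

% Plausible form of Equation 4: the SVBRDF at point x_p as a blend of
% B basis BRDFs f_b with per-point weights w_{p,b}.
f_p(\mathbf{i}, \mathbf{o}) = \sum_{b=1}^{B} w_{p,b} \, f_b(\mathbf{i}, \mathbf{o})
```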
  • Flash photography setup
  • In the Rusinkiewicz parameterization, the reflectance remains almost constant along the azimuth angle φ_d of the light i around the half vector h.
  • The data set captured with the setup of the present invention includes dense sampling along the θ_h dimension.
  • However, because the light and the camera are fixed and very close to each other in this setup, it yields only a single sample of θ_d, at approximately 5°.
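  • For reference, the helper below computes the two half-vector angles used in the Rusinkiewicz parameterization (θ_h between the normal and the half vector, θ_d between the half vector and the light). It is a generic utility consistent with the description above, not code from the patent.

```python
import numpy as np

def rusinkiewicz_angles(i, o, n):
    """Return (theta_h, theta_d) in degrees for light i, view o, and normal n."""
    i, o, n = (v / np.linalg.norm(v) for v in (i, o, n))
    h = i + o
    h /= np.linalg.norm(h)                                   # half vector
    theta_h = np.degrees(np.arccos(np.clip(n @ h, -1.0, 1.0)))
    theta_d = np.degrees(np.arccos(np.clip(i @ h, -1.0, 1.0)))
    return theta_h, theta_d

# With the flash nearly collocated with the lens, i is close to o, so theta_d stays small.
print(rusinkiewicz_angles(np.array([0.10, 0.0, 1.0]),
                          np.array([0.0, 0.0, 1.0]),
                          np.array([0.0, 0.0, 1.0])))
```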
  • For a better representation of specular reflection, the present invention uses a Cook-Torrance (CT) model with a non-parametric normal distribution function (NDF) term.
  • Rather than relying on an analytical NDF such as the Beckmann function, the present invention exploits the dense angular sampling along θ_h to represent the NDF non-parametrically.
  • The base reflectance model f_b of the present invention can be expressed as <Equation 5> below.
  • Here, ρ_d and ρ_s denote the diffuse and specular albedo,
  • D denotes a univariate NDF term for specularity,
  • G denotes a geometric attenuation term,
  • and F denotes the Fresnel term.
  • The NDF is represented by a non-parametric tabulated function D(θ_h) ∈ R^M, where each element stores the BRDF value at the corresponding θ_h using a square-root mapping of the angle.
  • The present invention can use a V-groove cavity model for the shadowing/masking term G.
  • The F term can be set to a constant; in practice, this setup helps reduce the complexity of the optimization process. This approximation may be more effective than using a constant index of refraction (IOR) value inside the Fresnel term.
  • In addition, the geometric factor G is excluded from the coefficient vector.
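  • A minimal sketch of evaluating such a basis BRDF (Equation 5) with a tabulated NDF is given below. The table size M, the square-root angle mapping, the V-groove shadowing/masking term, and the constant Fresnel factor are plausible choices based on the description above, not the patent's exact formulation; all direction vectors are assumed to be unit length.

```python
import numpy as np

M = 90  # number of NDF table bins (illustrative choice)

def eval_basis_brdf(rho_d, rho_s, D_table, n, i, o, F_const=1.0):
    """Cook-Torrance-style basis BRDF: rho_d/pi + rho_s * D*G*F / (4 (n.i)(n.o))."""
    h = (i + o) / np.linalg.norm(i + o)
    n_i, n_o, n_h, o_h = n @ i, n @ o, n @ h, o @ h

    # Tabulated NDF lookup: indexing by sqrt(theta_h) places more bins near
    # theta_h = 0, i.e. around the specular peak.
    theta_h = np.arccos(np.clip(n_h, -1.0, 1.0))
    idx = min(int(np.sqrt(theta_h / (np.pi / 2)) * (M - 1)), M - 1)
    D = D_table[idx]

    # V-groove style shadowing/masking term.
    G = min(1.0, 2.0 * n_h * n_o / o_h, 2.0 * n_h * n_i / o_h)

    return rho_d / np.pi + rho_s * D * G * F_const / (4.0 * n_i * n_o)
```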
  • The present invention converts the captured radiance into the captured reflectance f'_p,k using Equation 2.
  • This captured reflectance can then be approximated by blending the basis BRDFs f_b with the spatial weights w_p,b. The present invention can therefore reformulate Equation 3 as the objective function shown in <Equation 6> below, in order to reconstruct the basis BRDFs and the corresponding weights.
  • Following conventional distribution functions such as Beckmann and GGX, the present invention can fix D(θ_h > 60°) to 0.
  • In addition, the present invention can apply a non-negativity and monotonicity constraint to D(θ_h):
  • D(θ_h) must decrease monotonically as θ_h increases.
  • No smoothness constraint is imposed.
  • Equation 6 is first minimized while W is fixed; this becomes a quadratic programming problem in F_b with sparse input data.
  • For this, the present invention may use an off-the-shelf sparse quadratic programming solver.
  • To initialize, the present invention estimates the diffuse components of the basis BRDFs F_b by averaging the color observations around the median brightness of each vertex and clustering them using K-means in CIELAB space. This produces an initial binary-labeled weight set W.
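  • The initialization just described (per-vertex colors clustered with K-means in CIELAB space to obtain binary labels) could look roughly like the sketch below; the use of scikit-learn and scikit-image here is an illustrative assumption, not part of the patent.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.color import rgb2lab

def init_weights(vertex_colors_rgb, num_bases):
    """Cluster per-vertex RGB colors (e.g. averaged around the median brightness)
    in CIELAB space and return one-hot initial blending weights W of shape (P, B)."""
    lab = rgb2lab(vertex_colors_rgb.reshape(-1, 1, 3)).reshape(-1, 3)
    labels = KMeans(n_clusters=num_bases, n_init=10).fit_predict(lab)
    W = np.zeros((len(labels), num_bases))
    W[np.arange(len(labels)), labels] = 1.0   # binary labeling per vertex
    return W

# Toy example with random vertex colors in [0, 1].
W = init_weights(np.random.rand(1000, 3), num_bases=2)
```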
  • However, choosing the number of basis BRDFs remains a problem to be solved.
  • In the present invention, an ad-hoc approach is used: the number of bases is increased gradually until the optimization converges.
  • Alternatively, the number of bases may be set following previous techniques, and it may vary depending on the situation.
  • Next, the present invention updates W using the F_b estimated in the previous optimization. Updating W with F_b fixed in Equation 6 is equivalent to minimizing, at each point x_p, the objective function shown in <Equation 7> below.
  • Here, the k-th row of Q ∈ R^K×B is [f_1(i_p,k, o_p,k), ..., f_B(i_p,k, o_p,k)],
  • and the k-th element of r ∈ R^K is L(o_p,k; x_p) / (L(-i_p,k; x_p) (n_p · i_p,k)).
  • Equation 7 is a standard quadratic programming problem,
  • and the present invention can solve it using a convex quadratic programming solver.
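  • A per-vertex weight update of the form min_w ||Q w - r||^2 with non-negative weights can be sketched, for example, with SciPy's non-negative least squares; the patent only states that a convex quadratic programming solver is used, so NNLS here is a simple stand-in that ignores any additional constraints.

```python
import numpy as np
from scipy.optimize import nnls

def update_vertex_weights(Q, r):
    """Solve min_w ||Q w - r||^2 subject to w >= 0 for one vertex.

    Q : (K, B) basis BRDF values for the K observations of this vertex
    r : (K,)  observed reflectance values (radiance / incident light / cosine)
    """
    w, _residual = nnls(Q, r)
    return w

# Toy example: 5 observations of a vertex blending 2 basis BRDFs.
Q = np.abs(np.random.rand(5, 2))
r = Q @ np.array([0.7, 0.3])
print(update_vertex_weights(Q, r))   # approximately [0.7, 0.3]
```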
  • The present invention uses color (per-channel) basis BRDFs and monochromatic blending weights.
  • Accordingly, the present invention can optimize each color channel independently when updating F_b, and optimize the blending weight of each vertex using all color channels together when updating W.
  • For the normal estimation, the present invention feeds the initial surface normal n_p from the current geometry, updated in the previous iteration, as an input variable of the BRDF f at the point x_p in Equation 3 above.
  • The present invention estimates the weights W, the basis BRDFs F_b and the shading normals N, and then reconstructs the geometry X to match the shading observations.
  • FIG. 4 illustrates an exemplary diagram for explaining the process of updating the 3D geometry.
  • As shown in FIG. 4, the process of updating the 3D geometry starts by obtaining a rough base geometry (or base mesh) from the initial point cloud and subdividing it into a finer mesh.
  • The present invention estimates the shading normals using the subdivided mesh, and then updates the geometry using the estimated shading normals.
  • Here, the geometry update may be performed using at least one of the Nehab method and the screened Poisson reconstruction method.
  • The present invention preferably uses a screened Poisson method designed to reconstruct an implicit surface on a coarse-to-fine voxel lattice using three-dimensional tri-quadratic B-spline basis functions. This results in robust performance when integrating noisy surface normals into the 3D geometry.
  • FIG. 5 shows an exemplary diagram for a comparison result of the Nehab method and the screened Poisson reconstruction method for the unstructured capture setup of the present invention.
  • Referring to FIG. 5, while the result of the Nehab method contains the high-frequency noise of the shading normals estimated in the unstructured capture setup, the screened Poisson reconstruction method handles the input noise robustly through coarse-to-fine reconstruction, and thereby provides finer detail.
  • The screened Poisson reconstruction method reconstructs an implicit surface from the input point cloud as shown in <Equation 8> below.
  • Here, V: R^3 -> R^3 denotes a vector field derived from the set of shading normals N,
  • ∇χ denotes the gradient of the implicit scalar surface function χ,
  • χ^2(x_p) denotes a penalty on the squared distance between the point x_p and the implicit surface χ,
  • and λ denotes the weight of the regularization (screening) term.
  • The present invention chooses λ ∈ [0.1, 4.0] according to the reliability of the initial geometry.
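  • From the definitions above (vector field V from the shading normals, implicit function χ, sample points x_p, screening weight λ), Equation 8 plausibly has the standard screened-Poisson form below; this is a reconstruction from context rather than the patent's exact expression.

```latex
% Plausible form of Equation 8: a gradient-fitting term against the normal
% field V plus a screening term at the sample points, weighted by \lambda.
E(\chi) = \int \bigl\| \nabla\chi(\mathbf{x}) - \mathbf{V}(\mathbf{x}) \bigr\|^2 \, d\mathbf{x}
\;+\; \lambda \sum_{p} \chi^{2}(\mathbf{x}_p)
```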
  • The resolution of the voxel grid can be set to 2^9 or 2^10 per dimension; this corresponds to approximately 0.1 to 0.2 mm on the captured physical object.
  • While the original algorithm uses geometric normals, the present invention uses the shading normals n_p and seeks an implicit surface whose gradients match n_p. In other words, given the different views and lighting directions for each vertex, the reconstructed surface should produce consistent shading.
  • the present invention converts the implicit surface to a polygonal mesh by applying marching cubes.
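  • Extracting a polygon mesh from the implicit function sampled on the voxel grid can be done, for instance, with scikit-image's marching cubes, as sketched below; this is an illustrative stand-in, not the patent's implementation.

```python
import numpy as np
from skimage import measure

def implicit_to_mesh(chi_volume, iso_level=0.0, voxel_size=1.0):
    """Convert a sampled implicit function chi (3D array) into a triangle mesh."""
    verts, faces, normals, _values = measure.marching_cubes(
        chi_volume, level=iso_level, spacing=(voxel_size,) * 3)
    return verts, faces, normals

# Toy example: signed distance of a sphere sampled on a 64^3 grid.
grid = np.linspace(-1.0, 1.0, 64)
xx, yy, zz = np.meshgrid(grid, grid, grid, indexing="ij")
chi = np.sqrt(xx**2 + yy**2 + zz**2) - 0.5
verts, faces, normals = implicit_to_mesh(chi)
```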
  • the present invention repeatedly updates W, F b , N, and X until it finds the optimal 3D geometry and SVBRDF.
  • To check the convergence of the geometry, the present invention evaluates the Hausdorff distance between the previous mesh and the newly reconstructed geometry X.
  • The entire process shown in FIG. 2 is repeated until the test RMS error of the photometric difference in Equation 3 starts to increase.
  • For this purpose, the present invention randomly splits the captured images into training and test groups at a 9:1 ratio.
  • As described above, the 3D object acquisition method uses an ordinary flash-equipped camera to simultaneously measure the shape information and the spatially-varying surface reflectance information (SVBRDF) of a 3D object, which previously could be measured only with special equipment.
  • That is, the 3D object acquisition method can measure high-quality 3D shape information and SVBRDF without relying on special equipment.
  • Because multiple flash photographs are used as input, the method for acquiring a 3D object according to an embodiment of the present invention requires only a single commercial camera with a built-in flash, for example a smartphone camera.
  • The core of the 3D object acquisition method is a new joint reconstruction of the SVBRDF, the shading normals, and the 3D geometry,
  • which can be performed in a multi-stage, iterative optimization inverse-rendering pipeline that progressively improves the 3D-to-2D correspondence and leads to high-quality reconstruction of both the SVBRDF and the 3D geometry.
  • The method according to embodiments of the present invention is not limited to reconstructing spatially-varying surface reflectance information for a 3D object based on a plurality of images photographed by a camera, estimating shading normals for the 3D object using the reconstructed spatially-varying surface reflectance information, and then obtaining the 3D geometry based on the estimated shading normals. That is, the method according to other embodiments of the present invention can be applied to a variety of methods that acquire 3D geometry for a 3D object based on spatially-varying surface reflectance information.
  • For example, a method may reconstruct spatially-varying surface reflectance information for a 3D object based on a plurality of images photographed by a camera, and obtain 3D geometry for the 3D object based on the reconstructed spatially-varying surface reflectance information.
  • Furthermore, the technical configuration of reconstructing spatially-varying surface reflectance information for a 3D object is not limited to using only a plurality of images photographed by a camera.
  • Additional depth information, that is, depth information for the 3D object, may also be incorporated to improve the quality and speed of the algorithm.
  • That is, the method according to embodiments of the present invention can reconstruct spatially-varying surface reflectance information for a 3D object based on depth information for the 3D object as well as on a plurality of images photographed by a camera.
  • Here, the depth information for the 3D object may be obtained separately using a depth camera, or may be obtained using various other measurement techniques.
  • For example, the present invention may obtain depth information for the 3D object using a multi-view camera, or using stereo matching.
  • The method for obtaining depth information in the present invention is not limited to the above-described methods, and any method applicable to the present invention can be used.
  • All of the above description also applies to a method that reconstructs spatially-varying surface reflectance information for a 3D object by additionally using such depth information.
  • FIG. 6 illustrates a configuration of a 3D object acquisition device according to an embodiment of the present invention, and shows a conceptual configuration of a device performing the method of FIGS. 1 to 5.
  • a 3D object acquisition apparatus 600 includes a reception unit 610, a reconstruction unit 620, an estimation unit 630, and an acquisition unit 640.
  • the reception unit 610 receives a plurality of images of a 3D object photographed by a camera.
  • the receiving unit 610 may receive a plurality of images of the 3D object photographed with artificial lighting turned on.
  • The reconstruction unit 620 reconstructs spatially-varying surface reflectance information for the 3D object based on the received plurality of images.
  • Here, the reconstruction unit 620 may estimate base surface reflectance information for the 3D object based on the plurality of images, and reconstruct the spatially-varying surface reflectance information based on the estimated base surface reflectance information and a spatially-varying weight map.
  • Furthermore, the reconstruction unit 620 may acquire the base geometry of the 3D object based on the camera's external parameters and the plurality of images, and reconstruct the spatially-varying surface reflectance information based on the plurality of images and the base geometry.
  • Furthermore, the reconstruction unit 620 may reconstruct the spatially-varying surface reflectance information using an inverse rendering technique that satisfies a preset image formation model for the camera.
  • Furthermore, the reconstruction unit 620 may update the spatially-varying surface reflectance information based on the 3D geometry acquired by the acquisition unit 640 and the plurality of images.
  • Furthermore, the reconstruction unit 620 may obtain a 3D point cloud for the 3D object from the plurality of images and the camera poses using multiview stereo, generate a base mesh for the 3D object based on the 3D point cloud and a Poisson surface reconstruction technique, subdivide the base mesh to obtain an initial geometry for the 3D object, and reconstruct the spatially-varying surface reflectance information based on the plurality of images and the initial geometry.
  • The estimator 630 estimates the shading normals for the 3D object based on the reconstructed spatially-varying surface reflectance information.
  • Here, the estimator 630 may estimate the shading normals using an inverse rendering technique that satisfies a preset image formation model for the camera.
  • Furthermore, the estimator 630 may update the shading normals based on the updated spatially-varying surface reflectance information.
  • The acquisition unit 640 acquires the 3D geometry based on the estimated shading normals and a Poisson surface reconstruction technique.
  • Here, the acquisition unit 640 may acquire the 3D geometry using a Poisson surface reconstruction technique.
  • Furthermore, the acquisition unit 640 may update the 3D geometry based on the updated shading normals.
  • Although omitted from the description of FIG. 6, each of the components constituting FIG. 6 may incorporate all of the contents described with reference to FIGS. 1 to 5, as is apparent to those skilled in the art.
  • the device described above may be implemented with hardware components, software components, and / or combinations of hardware components and software components.
  • The devices and components described in the embodiments may be implemented using one or more general-purpose or special-purpose computers, such as a processor, a controller, an arithmetic logic unit (ALU), a digital signal processor (DSP), a microcomputer, a field programmable gate array (FPGA), a programmable logic unit (PLU), a microprocessor, or any other device capable of executing and responding to instructions.
  • the processing device may run an operating system (OS) and one or more software applications running on the operating system.
  • the processing device may access, store, manipulate, process, and generate data in response to the execution of the software.
  • Although the processing device is sometimes described as being used singly, a person having ordinary skill in the art will recognize that the processing device may include a plurality of processing elements and/or a plurality of types of processing elements.
  • For example, the processing device may include a plurality of processors, or a processor and a controller.
  • Other processing configurations, such as parallel processors, are also possible.
  • The software may include a computer program, code, instructions, or a combination of one or more of these, and may configure the processing device to operate as desired or instruct the processing device independently or collectively.
  • Software and/or data may be embodied in any type of machine, component, physical device, virtual equipment, or computer storage medium or device, so as to be interpreted by the processing device or to provide instructions or data to the processing device.
  • the software may be distributed on networked computer systems, and stored or executed in a distributed manner.
  • Software and data may be stored in one or more computer-readable recording media.
  • the method according to the embodiment may be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer readable medium.
  • the computer-readable medium may include program instructions, data files, data structures, or the like alone or in combination.
  • the program instructions recorded on the medium may be specially designed and configured for the embodiments or may be known and usable by those skilled in computer software.
  • Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks;
  • and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine language code produced by a compiler but also high-level language code that can be executed by a computer using an interpreter.

Abstract

The present invention relates to a method for acquiring a three-dimensional object using artificial lighting photography, and to an associated device. A three-dimensional object acquisition method according to one embodiment of the present invention comprises the steps of: receiving a plurality of images obtained by photographing a three-dimensional object with a camera; reconstructing spatially-varying bidirectional reflectance distribution functions for the three-dimensional object on the basis of the received plurality of images; estimating shading normals for the three-dimensional object on the basis of the reconstructed spatially-varying bidirectional reflectance distribution functions; and obtaining three-dimensional geometry for the three-dimensional object on the basis of the estimated shading normals.
PCT/KR2019/013099 2018-10-08 2019-10-07 Procédé d'acquisition d'objet tridimensionnel à l'aide d'une photographie d'éclairage artificiel et dispositif associé WO2020076026A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19813220.1A EP3664039B1 (fr) 2018-10-08 2019-10-07 Procédé d'acquisition d'objet tridimensionnel à l'aide d'une photographie d'éclairage artificiel et dispositif associé
US16/622,234 US11380056B2 (en) 2018-10-08 2019-10-07 3D object acquisition method and apparatus using artificial light photograph

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
KR10-2018-0119872 2018-10-08
KR20180119872 2018-10-08
KR20180173852 2018-12-31
KR10-2018-0173852 2018-12-31
KR10-2019-0120410 2019-09-30
KR1020190120410A KR102287472B1 (ko) 2018-10-08 2019-09-30 인공조명 사진을 이용한 3차원 객체 획득 방법 및 그 장치

Publications (1)

Publication Number Publication Date
WO2020076026A1 true WO2020076026A1 (fr) 2020-04-16

Family

ID=70164981

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/013099 WO2020076026A1 (fr) 2018-10-08 2019-10-07 Procédé d'acquisition d'objet tridimensionnel à l'aide d'une photographie d'éclairage artificiel et dispositif associé

Country Status (1)

Country Link
WO (1) WO2020076026A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114255313A (zh) * 2022-02-28 2022-03-29 深圳星坊科技有限公司 镜面物体三维重建方法、装置、计算机设备和存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050066501A (ko) * 2003-12-26 2005-06-30 한국전자통신연구원 필터링된 환경맵을 이용한 역렌더링 장치 및 그 방법
KR101495299B1 (ko) * 2013-09-24 2015-02-24 한국과학기술원 3차원 형상 획득 장치 및 그 획득 방법
KR20160147491A (ko) * 2015-06-15 2016-12-23 한국전자통신연구원 3차원 모델 생성 장치 및 방법
KR101865886B1 (ko) * 2016-12-09 2018-06-08 한국과학기술원 근적외선 영상을 이용한 물체 표현의 기하 구조 및 반사도 추정 방법, 그리고 이를 구현한 시스템
KR20180062959A (ko) * 2016-12-01 2018-06-11 톰슨 라이센싱 모바일 디바이스의 환경의 3d 재구성을 위한 방법, 대응하는 컴퓨터 프로그램 제품 및 디바이스

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050066501A (ko) * 2003-12-26 2005-06-30 한국전자통신연구원 필터링된 환경맵을 이용한 역렌더링 장치 및 그 방법
KR101495299B1 (ko) * 2013-09-24 2015-02-24 한국과학기술원 3차원 형상 획득 장치 및 그 획득 방법
KR20160147491A (ko) * 2015-06-15 2016-12-23 한국전자통신연구원 3차원 모델 생성 장치 및 방법
KR20180062959A (ko) * 2016-12-01 2018-06-11 톰슨 라이센싱 모바일 디바이스의 환경의 3d 재구성을 위한 방법, 대응하는 컴퓨터 프로그램 제품 및 디바이스
KR101865886B1 (ko) * 2016-12-09 2018-06-08 한국과학기술원 근적외선 영상을 이용한 물체 표현의 기하 구조 및 반사도 추정 방법, 그리고 이를 구현한 시스템

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3664039A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114255313A (zh) * 2022-02-28 2022-03-29 深圳星坊科技有限公司 镜面物体三维重建方法、装置、计算机设备和存储介质

Similar Documents

Publication Publication Date Title
KR102287472B1 (ko) 인공조명 사진을 이용한 3차원 객체 획득 방법 및 그 장치
Nam et al. Practical svbrdf acquisition of 3d objects with unstructured flash photography
Moreno et al. Simple, accurate, and robust projector-camera calibration
WO2015188685A1 (fr) Procédé d'acquisition de modèle de mannequin sur base d'une caméra de profondeur un système d'adaptation virtuel de réseau
WO2019098728A1 (fr) Procédé et appareil de traitement d'images tridimensionnelles
WO2016145602A1 (fr) Appareil et procédé de réglage de longueur focale et de détermination d'une carte de profondeur
US20120019651A1 (en) Installation of 3d inspection of electronic circuits
EP3382645A2 (fr) Procédé de génération d'un modèle 3d à partir de structure from motion et stéréo photométrique d'images 2d parcimonieuses
WO2016200013A1 (fr) Dispositif optique et procédé de génération d'informations de profondeur
WO2020076026A1 (fr) Procédé d'acquisition d'objet tridimensionnel à l'aide d'une photographie d'éclairage artificiel et dispositif associé
US9036024B2 (en) Apparatus for optically inspecting electronic circuits
JP5799473B2 (ja) 撮影装置とその撮影方法
JP5059503B2 (ja) 画像合成装置、画像合成方法及び画像合成プログラム
WO2020159223A1 (fr) Procédé et dispositif d'imagerie d'une image hyperspectrale sans lentille
Mohan et al. Table-top computed lighting for practical digital photography
US20240062460A1 (en) Freestyle acquisition method for high-dimensional material
JP2015059849A (ja) 色と三次元形状の計測方法及び装置
CN109325912A (zh) 基于偏振光光场的反光分离方法及标定拼合系统
WO2023287220A1 (fr) Procédé de transmission de données de nuage de points, dispositif de transmission de données de nuage de points, procédé de réception de données de nuage de points, et dispositif de réception de données de nuage de points
WO2022203464A2 (fr) Procédé de mise en correspondance stéréo omnidirectionnelle en temps réel à l'aide d'objectifs fisheye à vues multiples et système associé
WO2020085758A1 (fr) Procédé de détermination de zone d'inspection et appareil d'inspection d'aspect externe l'utilisant
Park et al. Projector compensation framework using differentiable rendering
Yeh et al. Shape-from-shifting: Uncalibrated photometric stereo with a mobile device
KR20180022070A (ko) 3d 스캐너 겸용 실물 화상기
JP2013096784A (ja) 表面特性測定装置及びコンピュータプログラム

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019813220

Country of ref document: EP

Effective date: 20191211

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19813220

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE