US6903738B2 - Image-based 3D modeling rendering system - Google Patents
Image-based 3D modeling rendering system
- Publication number
- US6903738B2 (Application US10/173,069)
- Authority
- US
- United States
- Prior art keywords
- images
- opacity
- hull
- acquiring
- viewpoints
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—Three-dimensional [3D] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three-dimensional [3D] modelling for computer graphics
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
Definitions
- the invention relates generally to computer graphics, and more particularly to acquiring images of three-dimensional physical objects to generate 3D computer graphics models that can be rendered in realistic scenes using the acquired images.
- Three-dimensional computer graphics models are used in many computer graphics applications. Generating 3D models manually is time consuming, and causes a bottleneck for many practical applications. Besides the difficulty of modeling complex shapes, it is often impossible to replicate the geometry and appearance of complex objects using prior art parametric reflectance models.
- Image-based rendering can be used to avoid these modeling difficulties.
- Image-based representations have the advantage of capturing and representing an object regardless of the complexity of its geometry and appearance.
- Prior art image-based methods allowed for navigation within a scene using correspondence information, see Chen et al., “View Interpolation for Image Synthesis,” Computer Graphics, SIGGRAPH 93 Proceedings, pp. 279-288, 1993, and McMillan et al., “Plenoptic Modeling: An Image-Based Rendering System,” Computer Graphics, SIGGRAPH 95 Proceedings, pp. 39-46, 1995. Because these methods do not construct a model of the 3D object, they are severely limited.
- Surface light fields are capable of reproducing important global lighting effects, such as inter-reflections and self-shadowing. Images generated with a surface light field usually show the object under a fixed lighting condition. To overcome this limitation, inverse rendering methods estimate the surface BRDF from images and geometry of the object.
- An alternative method uses image-based, non-parametric representations for object reflectance, see Marschner et al., “Image-based BRDF Measurement Including Human Skin,” Proceedings of the 10th Eurographics Workshop on Rendering, pp. 139-152, 1999. They use a tabular BRDF representation and measure the reflectance properties of convex objects using a digital camera. Their method is restricted to objects with a uniform BRDF, and they incur problems with geometric errors introduced by 3D range scanners.
- Image-based relighting can also be applied to real human faces by assuming that the surface reflectance is Lambertian, see Georghiades et al., “Illumination-Based Image Synthesis: Creating Novel Images of Human Faces under Differing Pose and Lighting,” IEEE Workshop on Multi-View Modeling and Analysis of Visual Scenes, pp. 47-54, 1999.
- a surface reflectance field of an object is defined as the radiant light from a surface under every possible incident field of illumination. It is important to note that, despite the word reflectance, the surface reflectance field captures all possible surface and lighting effects, including surface texture or appearance, refraction, dispersion, subsurface scattering, and non-uniform material variations.
- Zongker et al. describe techniques of environment matting to capture mirror-like and transparent objects, and to correctly composite them over arbitrary backgrounds, see Zongker et al., “Environment Matting and Compositing,” Computer Graphics, SIGGRAPH 1999 Proceedings, pp. 205-214, August 1999. Their system is able to determine the direction and spread of the reflected and refracted rays by illuminating a shiny or refractive object with a set of coded light patterns. They parameterize surface reflectance into 2D environment mattes. Extensions to environment matting include a more accurate capturing method and a simplified and less accurate procedure for real time capture of moving objects. However, their system only captures environment mattes for a fixed viewpoint, and they do not reconstruct the 3D shape of the object.
- the invention provides a system and method for acquiring and rendering high quality graphical models of physical objects, including objects constructed of highly specular, transparent, or fuzzy materials, such as fur and feathers, that are difficult to handle with traditional scanners.
- the system includes a turntable, an array of digital cameras, a rotating array of directional lights, and multi-color backlights. The system uses the same set of acquired images to both construct a model of the object, and to render the object for arbitrary lighting and points of view.
- the system uses multi-background matting techniques to acquire alpha mattes of 3D objects from multiple viewpoints by placing the objects on the turntable.
- the alpha mattes are used to construct an opacity hull.
- An opacity hull is a novel shape model with view-dependent opacity parameterization of the surface hull of the object.
- the opacity hull enables rendering of complex object silhouettes and seamless blending of objects into arbitrary realistic scenes.
- Computer graphics models are constructed by acquiring radiance, reflection, and refraction images.
- the models can also be lighted from arbitrary directions using a surface reflectance field, to produce a purely image-based appearance representation.
- the system is unique in that it acquires and renders surface appearance under varying illumination from arbitrary viewpoints.
- the system is fully automatic, easy to use, has a simple set-up and calibration.
- the models acquired from objects can be accurately composited into synthetic scenes.
- FIG. 1 is a diagram of the system according to the invention
- FIG. 2 is a flow diagram of a method for acquiring images used by the invention
- FIG. 3 is a flow diagram of a method for constructing an opacity hull according to the invention.
- FIG. 4 is a block diagram of a surface element data structure
- FIGS. 5 a-b are schematics of lighting sectors
- FIG. 6 is a schematic of a reprojection of a Gaussian environment matte
- FIG. 7 is a schematic of matching reflective and refractive Gaussians.
- FIG. 1 shows a system 100 for modeling and rendering a 3D object 150 according to the invention.
- the system 100 includes an array of directional overhead lights 110 , a multi-color backlight 120 and a multi-color bottomlight 121 , cameras 130 , and a turntable 140 .
- the output of the cameras 130, i.e., sets of images 161-166, is connected to a processor 160.
- the processor 160 can be connected to an output device 170 for rendering an acquired model 180 into arbitrary realistic scenes.
- the cameras 130 or turntable 140 can be rotated to image the object from multiple viewpoints. The rotation provides coherence of sampled surface points.
- the array of overhead lights 110 can be rotated.
- the lights are spaced roughly equally along elevation angles of a hemisphere.
- the overhead lights can be fixed, rotate around an object for a fixed point of view, or made to rotate with the object.
- the light array 110 can hold four, six, or more directional light sources. Each light uses a 32 Watt HMI Halogen lamp and a light collimator to approximate a directional light source at infinity.
- the overhead lights are controlled by an electronic light switch and dimmers. The dimmers are set so that the camera's image sensor is not saturated for viewpoints where the overhead lights are directly visible by the cameras. The lights can be controlled individually.
- the multi-color backlight and bottomlight 120 - 121 are in the form of large-screen, high-resolution, plasma color monitors that can illuminate the object with light of any selected color.
- the two plasma monitors have a resolution of 1024×768 pixels. We light individual pixels to different colors as described below.
- the cameras 130 are connected via a FireWire link to the processor 160 , which is a 700 MHz Pentium III PC with 512 MB of RAM. We alternatively use 15 mm or 8 mm C mount lenses, depending on the size of the acquired object.
- the cameras 130 are also spaced equally along elevation angles of a hemisphere generally directed at the backlights, so that the object is in a foreground between the cameras and the backlights. To facilitate consistent backlighting, we mount the cameras roughly in the same vertical plane directly opposite the backlight 120 .
- the object 150 to be digitized, modeled, and rendered is placed on the bottomlight 121 , which rests on the turntable 140 .
- the cameras 130 are pointed at the object from various angles (viewpoints). To facilitate consistent backlighting, we mount the cameras roughly in the same vertical plane as the backlight 120 .
- the backlight 120 is placed opposite the cameras and illuminates the object substantially from behind, as viewed by the cameras.
- the bottomlight is placed beneath the object 150 .
- our 3D digitizing system 100 combines both active and passive imaging processes 200 during operation.
- the object 150 is rotated on the turntable while sets of images 161 - 166 are acquired.
- the rotation can be in ten degree steps so that the object is imaged from 6×36 different viewpoints, or the turntable positions can be user specified.
- the angular positions of the turntable are repeatable so that all images are registered with each other using acquired calibration parameters 211 .
- the cameras 130 acquire up to six sets of images 161 - 166 , i.e., calibration images 161 , reference images 162 , object images 163 , radiance images 164 , reflection images 165 , and refraction images 166 , respectively.
- the order of acquisition is not important, although the processing of the images should be in the order as indicated. Calibration only needs to be performed once in a pre-processing step for a particular arrangement of the turntable and cameras.
- the acquisition and processing of the images can be fully automated after the object has been placed on the turntable. In other words, the model 180 is acquired completely automatically.
- the set of calibration images 161 is acquired of a calibration object decorated with a known calibration pattern.
- the set of reference images 162 is acquired while a backdrop pattern is displayed on the monitors 120 - 121 without the foreground object 150 in the scene.
- the set of object images 163 is acquired while the object 150 is illuminated by the backlight and bottomlight to construct alpha mattes.
- the sets of radiance, reflection, and refraction images 164 - 166 are used to construct a surface reflectance field 251 , as described in greater detail below. We use the traditional definition of surface reflectance fields, which models the radiant light from an object surface under every possible incident field of illumination.
- a key difference with the prior art is that our system uses multi-color back- and bottomlights for alpha matte extraction for multiple viewpoints and for the construction of an opacity hull according to the invention. As described below, the availability of approximate geometry and view-dependent alpha mattes greatly extends the different types of objects that can be modeled. Another key difference with the prior art is that our system uses a combination of rotating overhead lights, backlights, and a turntable to acquire a surface reflectance field of the object from multiple viewpoints.
- the cameras can be rotated around the object, and multiple plasma monitors can be placed around the object, and the alpha mattes can be acquired using traditional passive computer vision techniques.
- Our images 161 - 162 are acquired using a high dynamic range (HDR) technique. Because raw output from the cameras 130 is available, the relationship between exposure time and radiance values is linear over most of the operating range. For each viewpoint, we take four frames with exponentially increasing exposure times and use a least squares linear fit to determine this response line. Our HDR imager has ten bits of precision. Therefore, when we use the term “image,” we mean an HDR image that is constructed from four individual dynamic range frames.
- HDR high dynamic range
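The four-exposure HDR assembly described above can be sketched as follows. This is our own illustration, not code from the patent; because the camera response is linear, per-pixel radiance is estimated as the least-squares slope of pixel value versus exposure time, here simplified to a fit through the origin.

```python
import numpy as np

def hdr_from_exposures(frames, exposure_times):
    """Estimate per-pixel radiance from frames taken with exponentially
    increasing exposure times, assuming a linear sensor response.
    The least-squares slope through the origin is sum(t*f)/sum(t*t)."""
    t = np.asarray(exposure_times, dtype=np.float64)                   # (4,)
    f = np.stack([np.asarray(fr, dtype=np.float64) for fr in frames])  # (4, H, W)
    return np.tensordot(t, f, axes=1) / np.dot(t, t)

# Hypothetical example: a 2x2 sensor with true radiance 2.0 everywhere,
# sampled at four exponentially increasing exposures.
times = [1.0, 2.0, 4.0, 8.0]
frames = [2.0 * t * np.ones((2, 2)) for t in times]
print(hdr_from_exposures(frames, times))  # ≈ 2.0 everywhere
```

A full implementation would also fit an intercept and discard saturated samples; this sketch only shows the linear-response fit the text relies on.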
- the image acquisition sequence 200 starts by placing a calibration object onto the turntable 140 and, if necessary, adjusting the position and aperture of the cameras 130 .
- a 36-image sequence of the rotated calibration target is taken from each of the cameras.
- Intrinsic and extrinsic camera parameters 211 are determined using a calibration procedure for turntable systems with multiple cameras.
- each patterned backdrop is photographed alone without the object 150 in the foreground.
- Reference images only have to be acquired once after calibration. They are stored and used for subsequent object modeling.
- the object 150 is then put on the turntable 140 , and the set of object images 163 is acquired 230 while both plasma monitors 120 - 121 illuminate the object 150 from below and behind with the patterned backdrops, while the turntable 140 goes through a first rotation.
- the reference images 162 and object images 163 are used to determine the alpha mattes and the opacity hull as described below. As described below, we acquire the reference and object images using one or more out-of-phase colored backdrops.
- the set of radiance images 164 is acquired 240 using just the array of overhead directional lights 110.
- the directional lights 110 can be fixed or made to rotate with the object. Coupled rotation leads to greater coherence of radiance samples for each surface point, because view-dependent effects, such as highlights, remain at the same position on the surface. One or more lights can be used.
- the radiance images are used to construct the surface reflectance field 251 , which can be used during rendering.
- the display surface is covered with black felt without upsetting the object position; we only rotate the lights 110 to cover a sector Ω_l, described in greater detail below with reference to FIG. 5 b.
- the array of lights 110 is rotated around the object 150 while the object remains in a fixed position. For each rotation position, each light along the elevation of the light array is turned on sequentially and an image is captured with each camera. We use four lights and typically increment the rotation angle by 24 degrees for a total of 4×15 images for each camera position. This procedure is repeated for all viewpoints.
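The reflection-image pass described above can be enumerated as a simple nested schedule. The counts follow the text (four lights, 15 rotation positions at 24 degrees); the helper name and the six-camera/36-position defaults are our assumptions for illustration.

```python
from itertools import product

def reflection_image_schedule(n_cameras=6, n_turntable=36, n_light_rot=15,
                              lights_per_array=4, rot_step_deg=24):
    """Enumerate (camera, turntable position, light-array angle in degrees,
    light index) tuples for the reflection-image acquisition pass."""
    return [
        (cam, tt, rot * rot_step_deg, light)
        for cam, tt, rot, light in product(
            range(n_cameras), range(n_turntable),
            range(n_light_rot), range(lights_per_array))
    ]

shots = reflection_image_schedule()
# 4 lights x 15 rotation angles = 60 images per camera/turntable position.
print(len(shots))
```

The schedule makes the data volume explicit: every camera/turntable pair contributes 60 reflection images, which motivates the PCA compression described later.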
- the reflection images 165 are used to construct the low resolution surface reflectance field 251 as described below.
- the acquisition process involves taking multiple images of the foreground object in front of a backdrop with a 1D Gaussian profile that is swept over time in the horizontal, vertical, and diagonal directions of the plasma monitors. Using the non-linear optimization procedure, we then solve for the amplitudes a and the parameters of the 2D Gaussians G.
- the environment matte is subdivided into 8×8 pixel blocks. Each surface point on the opacity hull that is visible from this view is projected into the image. Only those blocks that contain at least one back-projected surface point are stored and processed.
- the rim of the plasma monitors is visible through transparent objects, which makes much of the field of view unusable. Consequently, we only use the lower and the two uppermost cameras for acquisition of environment mattes.
- the lower camera is positioned horizontally, directly in front of the background monitor.
- the two upper cameras are positioned above the monitor on the turntable. Then using our environment matte interpolation as described below, we can render plausible results for any viewpoint.
- BSSRDF bi-directional sub-surface reflectance distribution function
- the value C is the recorded color value at each camera pixel
- E is the environment illumination from a direction ⁇ i
- W is a weighting function that comprises all means of light transport from the environment through the foreground object 150 to the cameras 130 .
- the integration is carried out over the entire hemisphere Ω, and for each wavelength.
- all equations are evaluated separately for the R, G, and B color channels to reflect this wavelength dependency.
- Our scanning system 100 provides multiple illumination fields for the environment.
- the first illumination is with a high-resolution 2D texture map produced by the backlights by displaying a color pattern on the plasma monitors 120 - 121 .
- the second illumination is by the overhead light array 110 from a sparse set of directions on the remaining hemisphere, as shown in FIG. 5 b.
- W_h(x) = a_1 G_1(x, C_1, σ_1, θ_1) + a_2 G_2(x, C_2, σ_2, θ_2).
- G 1 and G 2 are elliptical, oriented 2D unit-Gaussians, and a 1 and a 2 are their amplitudes, respectively
- x are the camera pixel coordinates
- C_i are the centers of the Gaussians
- σ_i are their standard deviations
- θ_i are their orientations
- we estimate the parameters a and G using the observed data C, i.e., pixel values in the images 161-166 from multiple points of view. For each viewpoint, the estimated parameters are stored in an environment matte for (a_1, G_1, a_2, G_2), and in reflection images for R(ω_i).
- each visible surface point is reprojected to look up the parameters a, G and R from the k-nearest environment mattes and reflection images.
- the resulting parameters are then combined using the above equation to determine the color for pixel C n .
- This equation is a compromise between high-quality environment matting, as described by Chuang, and our 3D acquisition system 100 .
- Using a high-resolution environment texture in viewing direction is superior to using only the light array to provide incident illumination. For example, looking straight through an ordinary glass window shows the background in its full resolution.
- using a high-resolution illumination environment is only feasible with environment matting. The alternative would be to store a very large number of reflection images for each viewpoint, which is impractical. Environment mattes are in essence a very compact representation for high-resolution surface reflectance fields.
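The per-pixel lookup described above, combining parameters from the k-nearest environment mattes and reflection images, can be sketched as a weighted average. This is a simplified illustration with hypothetical parameter names; the actual system also morphs Gaussian centers and blends angular parameters with quaternions, as described later.

```python
import numpy as np

def blend_matte_params(params, weights):
    """Weighted combination of per-viewpoint matte parameters (here just
    the amplitudes a1, a2 and a reflection color R) looked up from the
    k-nearest environment mattes and reflection images."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize interpolation weights
    return {key: sum(wi * p[key] for wi, p in zip(w, params))
            for key in params[0]}

# Hypothetical parameters from two neighboring viewpoints, the nearer
# one weighted three times as heavily.
p = [{"a1": 0.2, "a2": 0.8, "R": 0.5},
     {"a1": 0.4, "a2": 0.6, "R": 0.7}]
print(blend_matte_params(p, [3.0, 1.0]))  # a1=0.25, a2=0.75, R=0.55
```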
- the surface reflectance field is most similar to the BSSRDF.
- the main differences are that we do not know the exact physical location of a ray-surface intersection, and that the incoming direction of light is the same for any point on the surface.
- Our model differs from Chuang et al. by restricting the number of incoming ray bundles from the monitors to two, and by replacing the foreground color F in Chuang with a sum over surface reflectance functions F i .
- the first assumption is valid if reflection and refraction at an object causes view rays to split into two distinct ray bundles that strike the background, see FIG. 5 a .
- the second assumption results in a more accurate estimation of how illumination from sector Ω_l affects the object's foreground color.
- Color spill is due to the reflection of backlight on the foreground object 150 .
- Color spill typically happens near the edges of object silhouettes because the Fresnel effect increases the specularity of materials near grazing angles.
- spill is particularly prominent for highly specular surfaces, such as shiny metals or ceramics.
- C i ⁇ ( x , y , n ) ( 1 + n ⁇ ⁇ sin ⁇ ( 2 ⁇ ⁇ ⁇ ( x + y ) ⁇ + i ⁇ ⁇ 3 ) ) ⁇ 127 , ( 1 )
- n is a phase difference
- ⁇ is the width of a stripe.
- If we measure the same color at a pixel both with and without the object for each background, Equation (2) equals zero. This corresponds to a pixel that maps straight through from the background to the camera. The phase shifts in the color channels of Equation (1) assure that the denominator of Equation (2) is never zero. The sinusoidal pattern reduces the chance that a pixel color observed due to spill matches the pixel color of the reference image. Nevertheless, it is still possible to observe spill errors for highly specular objects.
- a threshold of α > 0.05 yields a segmentation that covers all of the object and parts of the background. This threshold also gives an upper bound on the accuracy of our system, because when opacity values are below this threshold, the object will not be modeled accurately.
- λ can range from 5 to 100 pixels. Pixels inside the silhouette are assigned an alpha value of one.
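The multi-background matting idea can be sketched as a Smith-Blinn style triangulation: photograph the object against two known backdrops and solve the compositing equation per pixel. This is our own simplified illustration (the system's two backdrops are the phase-shifted sinusoids of Equation (1), and it uses the ratio form of Equation (2)).

```python
import numpy as np

def alpha_matte(obj1, obj2, ref1, ref2, eps=1e-8):
    """Triangulation matting against two known backdrops.
    obj1/obj2: images with the object; ref1/ref2: reference images without it.
    From C = alpha*F + (1 - alpha)*B it follows that
    C1 - C2 = (1 - alpha)(B1 - B2), solved per pixel by least squares
    across the color channels (last axis)."""
    num = np.sum((obj1 - obj2) * (ref1 - ref2), axis=-1)
    den = np.sum((ref1 - ref2) ** 2, axis=-1) + eps
    return np.clip(1.0 - num / den, 0.0, 1.0)

ref1 = np.array([[1.0, 0.0, 0.0]])  # backdrop 1, one pixel
ref2 = np.array([[0.0, 1.0, 0.0]])  # backdrop 2, phase-shifted color
fg = np.array([[0.5, 0.5, 0.5]])    # opaque foreground: same on both backdrops
print(alpha_matte(ref1, ref2, ref1, ref2))  # backdrop shows through: ~0
print(alpha_matte(fg, fg, ref1, ref2))      # fully opaque pixel: 1
```

Note how this realizes the observation in the text: where the backdrop maps straight through to the camera, the object and reference images agree and alpha goes to zero.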
- the binary silhouettes are then used to construct 330 the image-based visual hull (IBVH) 340. This process also removes improperly classified foreground regions, unless the regions are consistent with all other images. We re-sample the IBVH into a dense set of surface points as described below.
- the alpha mattes 310 are then projected 350 onto the surface hull 340 to construct the opacity hull 360.
- visibility of each surface point to each image is determined using epipolar geometry.
- the opacity hull 360 stores opacity values for each surface point.
- the opacity hull stores an alphasphere A for each surface point. If w is an outward pointing direction at a surface point p, then A(p, w) is an opacity value α_p seen from the direction w. Note that the function that defines the alphasphere A should be continuous over the entire sphere. However, any physical system can acquire only a sparse set of discrete samples. We could estimate a parametric function to define each alphasphere A, but approximating the alphasphere with a parametric function would be very difficult in many cases.
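Because the alphasphere is sampled only at discrete directions, a renderer has to interpolate A(p, w) for an arbitrary query direction. The following non-parametric nearest-direction sketch is our own stand-in for the interpolation the text alludes to (the helper name and weighting scheme are assumptions).

```python
import numpy as np

def alphasphere_opacity(sample_dirs, sample_alphas, w, k=3):
    """Interpolate the view-dependent opacity A(p, w) at one surface point
    from a sparse set of sampled unit directions, weighting the k nearest
    samples by their cosine similarity to the query direction w."""
    d = np.asarray(sample_dirs, dtype=float)
    sims = d @ np.asarray(w, dtype=float)   # cosine similarity to query
    idx = np.argsort(-sims)[:k]             # k nearest sampled directions
    wts = np.clip(sims[idx], 1e-6, None)
    wts /= wts.sum()
    return float(np.dot(wts, np.asarray(sample_alphas, dtype=float)[idx]))

dirs = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # three sampled viewing directions
alphas = [1.0, 0.5, 0.0]                   # opacity seen from each direction
print(alphasphere_opacity(dirs, alphas, (1, 0, 0), k=1))  # exact sample: 1.0
s = 0.7071067811865476
print(alphasphere_opacity(dirs, alphas, (s, s, 0.0), k=2))  # halfway: 0.75
```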
- the opacity hull is a view-dependent representation.
- the opacity hull captures view-dependent partial occupancy of a foreground object with respect to the background.
- the view-dependent aspect sets the opacity hull apart from voxel shells, which are frequently used in volume graphics, see Udupa et al., “Shell Rendering,” IEEE Computer Graphics & Applications , 13(6):58-67, 1993. Voxel shells are not able to accurately represent fine silhouette features, which is an advantage of our opacity hull.
- In the prior art, concentric, semi-transparent textured shells have been used to render hair and furry objects, see Lengyel et al., “Real-Time Fur over Arbitrary Surfaces,” Symposium on Interactive 3D Graphics, pp. 227-232, 2001. They used a geometry called textured fins to improve the appearance of object silhouettes. A single instance of the fin texture is used on all edges of the object.
- opacity hulls can be looked at as textures with view-dependent opacity values for every surface point of the object.
- View-dependent opacity can be acquired and warped onto any surface geometry (surface hull) that completely encloses the object, i.e., the surface is said to be “watertight.”
- the surface hull can also be constructed from accurate geometry acquired from laser range scanning, or it can be acquired by constructing a bounding box geometry. It is important to note the difference between the opacity hull and the particular method that is used to construct the surface hull 340 of the object onto which the opacity hull is projected 350.
- opacity hulls can be used to render silhouettes of high complexity.
- the surface radiance images 164 can be used to render the model 180 under a fixed illumination. This places some limitations on the range of applications, because we generate the model with a fixed outgoing radiance function rather than a surface reflectance model.
- Similar to constructing the opacity hull, we re-parameterize the acquired reflection images 165 into rays emitted from surface points on the opacity hull. Debevec et al. described surface reflectance fields; however, they acquire and render them from a single viewpoint. In contrast to their system, we acquire the reflectance field for multiple viewing positions around the object 150.
- the surface reflectance field 251 is a six dimensional function R.
- the function R maps incoming light directions w i to reflected color values along a reflected direction w r .
- R = R(P_r, w_i, w_r).
- each pixel stores a reflectance function R_xy(w_i, w_r) of the object illuminated from incoming light direction w_i. During acquisition, we sample the four-dimensional function R_xy from a set of viewpoints w_r(k) and a set of light directions w_i(l).
- An alternative approach is to fit parametric functions for reflectance or BRDFs to the acquired data. This works well for specialized applications. For example, surface reflectance fields of human faces could be acquired, and a parametric function could be fit to the measured reflectance fields. Parametric reflection functions could be fit for arbitrary materials.
- in a pre-processing step, we construct the octree-based LDC (layered depth cube) tree from the opacity hulls using three orthogonal orthographic projections.
- the three orthogonal opacity hulls are sampled into layered depth images. The sampling density depends on the complexity of the model, and is user specified.
- the layered depth images are then merged into a single octree model. Because our opacity hulls are generated from virtual orthographic viewpoints, their registration is exact. This merging also ensures that the model is uniformly sampled.
- each surfel (surface element) 400 in the LDC tree stores a location 401 , a surface normal value 402 , and a camera-visibility bit vector 403 . If the surfels have image space resolution, only depth values need to be stored.
- the visibility vector stores a value of one for each camera position from which the surfel is visible, i.e., set of images 161 - 166 .
- the bit vector 403 can be computed during opacity hull construction using epipolar geometry.
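The surfel record of FIG. 4 can be sketched as a small data structure. The field layout follows the text (location 401, normal 402, camera-visibility bit vector 403); the class and method names are our own.

```python
from dataclasses import dataclass

@dataclass
class Surfel:
    """Surface element stored in the LDC tree: a location, a surface
    normal, and a camera-visibility bit vector with one bit per camera
    position, set iff the surfel is visible from that camera."""
    location: tuple   # (x, y, z) sample position
    normal: tuple     # unit surface normal
    visibility: int   # bit i set iff visible from camera position i

    def visible_from(self, camera_index: int) -> bool:
        return bool(self.visibility >> camera_index & 1)

s = Surfel((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), visibility=0b101)
print(s.visible_from(0), s.visible_from(1), s.visible_from(2))
# True False True
```

The bit vector keeps per-surfel visibility compact: for a few hundred camera positions a single integer suffices, and the test is one shift and mask.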
- Point samples have several benefits for 3D modeling applications. From a modeling point of view, the point-cloud representation eliminates the need to establish topology or connectivity. This facilitates the fusion of data from multiple sources.
- Point samples also avoid the difficult task of computing a consistent parameterization of the surface for texture mapping.
- point sampled models are able to represent complex organic shapes, such as a Bonsai tree or a feather, more easily than polygonal meshes.
- the raw reflectance image data would require about 76 GB of storage. Storing only the pixel blocks within the object silhouette still would require between 20 and 30 GB, depending on the size of the object.
- PCA principal component analysis
- the reflectance images are subdivided into 8×8 image blocks.
- Each block is then stored as a variable number of eigenvalues and principal components.
- the average number of principal components is typically four to five per block, when we set the global RMS reconstruction error to be within 1% of the average radiance values of all reflectance images.
- PCA analysis typically reduces the amount of reflectance data by a factor of ten.
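The per-block PCA compression described above can be sketched with an SVD: for each 8×8 block, keep the fewest principal components whose reconstruction error stays within 1% of the average radiance. This is our own illustration (the function name, the SVD route, and the synthetic data are assumptions).

```python
import numpy as np

def compress_block(block_stack, rel_rms=0.01):
    """PCA-compress one 8x8 block across all reflection images: keep the
    fewest principal components whose reconstruction RMS error is within
    rel_rms of the mean radiance. Returns the mean image, the retained
    principal components, and the per-image coefficients."""
    X = block_stack.reshape(block_stack.shape[0], -1)   # (n_images, 64)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    target = rel_rms * np.abs(X).mean()
    for k in range(1, len(S) + 1):
        recon = mean + (U[:, :k] * S[:k]) @ Vt[:k]
        if np.sqrt(np.mean((recon - X) ** 2)) <= target:
            break
    return mean, Vt[:k], (U[:, :k] * S[:k])

# Hypothetical stack: 12 reflection images of one block, rank-2 by construction.
rng = np.random.default_rng(0)
stack = (rng.random((12, 1)) @ rng.random((1, 64))
         + rng.random((12, 1)) @ rng.random((1, 64))).reshape(12, 8, 8) + 1.0
mean, basis, coeffs = compress_block(stack)
print(basis.shape[0])  # few components suffice (here <= 2: the data is rank 2)
```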
- the map can be a spherical high-dynamic range light probe image of a natural scene.
- ⁇ i 1 k ⁇ ⁇ ⁇ t ⁇ V t
- Vi are its principal components.
- This direct computation avoids reconstruction of the reflectance data from the PCA basis. Note that we convert a set of reflection images for each viewpoint into one radiance image that shows the object under the new illumination. This computation is performed for each change of the environment map.
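The direct computation in the PCA basis can be sketched as follows: the radiance image under new illumination is a weighted sum of reflection images, so the same weighted sum can be taken over their PCA coefficients and the block reconstructed once. Names and the toy data are our assumptions; the light weights would come from sampling the environment map.

```python
import numpy as np

def relight_block(mean, basis, coeffs, light_weights):
    """Relight one block directly in the PCA basis. Each reflection image
    is mean + c_l @ basis, so a weighted sum of images equals
    (sum of weights)*mean + (weighted sum of coefficients) @ basis."""
    w = np.asarray(light_weights, dtype=float)  # one weight per light direction
    c = w @ coeffs                              # combined PCA coefficients
    return w.sum() * mean + c @ basis           # single reconstruction

# Tiny hypothetical check: 3 light directions, 2 components, 4-pixel block.
mean = np.zeros(4)
basis = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
coeffs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # per-light coefficients
full = coeffs @ basis                                    # per-light images
w = np.array([0.2, 0.3, 0.5])
assert np.allclose(relight_block(mean, basis, coeffs, w), w @ full)
```

The assertion confirms the linearity that makes the shortcut valid: relighting in the compressed domain matches the weighted sum of fully reconstructed reflection images, but touches only k coefficients per block.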
- FIG. 6 shows a 2D drawing of the situation.
- the map ⁇ tilde over (T) ⁇ is the parameterization plane of the new environment map.
- the mapping from T to ⁇ tilde over (T) ⁇ can be non-linear, for example, for spherical environment maps.
- a 3D surface point T on the object is projected onto a pixel of the environment matte E, which stores the parameters of the 2D Gaussian G.
- each surface point is then projected into its four closest alpha mattes, reflection images, and environment mattes.
- we use interpolation weights w_i to interpolate the view-dependent alpha from the alpha mattes and the color radiance values from the reconstructed reflection images.
- To compute the radiance contribution from the environment mattes involves two steps: interpolating new Gaussians ⁇ , and convolving the new Gaussians with the environment map to compute the resulting colors.
- We first interpolate the parameters of the k = 4 reprojected Gaussians Ĝ_i.
- using the interpolation weights w_i, we compute linear combinations for the amplitudes a_i and the directional vectors C̃_i.
- the remaining angular parameters of the Gaussians are blended using quaternion interpolation.
- the result is the new Gaussian Ĝ that is an interpolated version of the Gaussians G̃, morphed to the new viewpoint. Note that this interpolation needs to be performed on matching Gaussians from the environment mattes.
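Quaternion interpolation of the angular parameters can be sketched with a standard slerp. This is a generic illustration, not the patent's code; it shows why quaternions are used rather than linearly blending angles, which would cut across the sphere.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions,
    following the great-circle arc at constant angular speed."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:                 # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:              # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

qa = np.array([1.0, 0.0, 0.0, 0.0])                              # identity
qb = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0, 0.0])  # 90° about x
mid = slerp(qa, qb, 0.5)                                         # 45° about x
print(mid)
```

Halfway between the identity and a 90 degree rotation, slerp yields exactly the 45 degree rotation, which is what blending the Gaussians' orientations toward a new viewpoint requires.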
- FIG. 7 shows a simplified 1D drawing of the matching process.
- the two Gaussians per pixel are classified as reflective (Ĝ_ir) or transmissive (Ĝ_it).
- C′′ a r ( ⁇ r , ⁇ tilde over (T) ⁇ )+ a t ( ⁇ t ⁇ tilde over (T) ⁇ ).
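Evaluating the convolution of a matched Gaussian with the environment map amounts to a Gaussian-weighted average of environment texels. The sketch below is our own simplification: it uses an isotropic footprint, whereas the mattes actually store oriented elliptical Gaussians.

```python
import numpy as np

def gaussian_env_lookup(env, center, sigma):
    """Approximate (G convolved with the environment map T~) at one pixel:
    weight environment texels by an isotropic 2D Gaussian footprint and
    return the weighted-average color."""
    h, w = env.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2)
               / (2.0 * sigma ** 2))
    g /= g.sum()
    return np.tensordot(g, env, axes=2)  # (3,) weighted-average color

def final_highres_color(env, refl, trans):
    """C'' = a_r (G_r x T~) + a_t (G_t x T~) for matched reflective and
    transmissive Gaussians, each given as an (amplitude, center, sigma) tuple."""
    (ar, cr, sr), (at_, ct, st) = refl, trans
    return (ar * gaussian_env_lookup(env, cr, sr)
            + at_ * gaussian_env_lookup(env, ct, st))

env = np.ones((16, 16, 3)) * 0.5  # flat gray environment map
c = final_highres_color(env, (0.6, (4, 4), 2.0), (0.4, (10, 10), 2.0))
# On a constant map each lookup returns the map color, so c = (0.6 + 0.4) * 0.5.
print(c)
```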
- the system is fully automated and is easy to use.
- the opacity hull and a dense set of radiance data remove the requirement of accurate 3D geometry for rendering of objects.
C = ∫_Ω W(ω_i) E(ω_i) dω_i
W_h(x) = a_1 G_1(x, C_1, σ_1, θ_1) + a_2 G_2(x, C_2, σ_2, θ_2),
where G_1 and G_2 are elliptical, oriented 2D unit-Gaussians, a_1 and a_2 are their amplitudes, x are the camera pixel coordinates, C_i the centers of the Gaussians, σ_i their standard deviations, and θ_i their orientations; see Chuang et al., “Environment Matting Extensions: Towards Higher Accuracy and Real-Time Capture,” Computer Graphics, SIGGRAPH 2000 Proceedings, pp. 121-130, 2000, for more details.
C_i(x, y, n) = (1 + n sin(2π(x + y)/λ + iπ/3)) · 127,
where C_i(x, y, n) is the intensity of color channel i = 0, 1, 2 at pixel location (x, y), n is a phase difference, and λ is the width of a stripe. To maximize the per-pixel difference between the two backdrops, the patterns are phase shifted by 180 degrees (n = −1 or +1). The user defines the width of the sinusoidal stripes with the parameter λ. This pattern minimizes spill effects by providing substantially gray illumination.
dA = sin φ,
where dA is, in our case, the solid angle covered by each of the original illumination directions.
Σ_{i=1}^{k} γ_i V_i,
where γ_i are the k eigenvalues we store for each block, and V_i are its principal components. Given a new set of m directional lights L̃_l, we can compute the new colors for the pixels C of the block directly as a linear combination of the coefficients of the PCA basis.
C″ = a_r(Ĝ_r ⊗ T̃) + a_t(Ĝ_t ⊗ T̃),
where ⊗ denotes convolution. The final pixel color C, according to our modeling equation, is the sum of the low-resolution reflectance field color C′ and the high-resolution reflectance field color C″.
Effect of the Invention
Claims (12)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US10/173,069 US6903738B2 (en) | 2002-06-17 | 2002-06-17 | Image-based 3D modeling rendering system |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US10/173,069 US6903738B2 (en) | 2002-06-17 | 2002-06-17 | Image-based 3D modeling rendering system |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20030231175A1 US20030231175A1 (en) | 2003-12-18 |
| US6903738B2 true US6903738B2 (en) | 2005-06-07 |
Family
ID=29733248
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US10/173,069 Expired - Fee Related US6903738B2 (en) | 2002-06-17 | 2002-06-17 | Image-based 3D modeling rendering system |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US6903738B2 (en) |
Cited By (43)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20040046864A1 (en) * | 2002-06-19 | 2004-03-11 | Markus Gross | System and method for producing 3D video images |
| US20040095385A1 (en) * | 2002-11-18 | 2004-05-20 | Bon-Ki Koo | System and method for embodying virtual reality |
| US20040219979A1 (en) * | 2003-04-29 | 2004-11-04 | Scott Campbell | System and method for utilizing line-of-sight volumes in a game environment |
| US20040229701A1 (en) * | 2001-10-10 | 2004-11-18 | Gavin Andrew Scott | System and method for dynamically loading game software for smooth game play |
| US20050122521A1 (en) * | 2003-12-09 | 2005-06-09 | Michael Katzlinger | Multimode reader |
| US20050128196A1 (en) * | 2003-10-08 | 2005-06-16 | Popescu Voicu S. | System and method for three dimensional modeling |
| US20060066608A1 (en) * | 2004-09-27 | 2006-03-30 | Harris Corporation | System and method for determining line-of-sight volume for a specified point |
| US20060103671A1 (en) * | 2002-10-30 | 2006-05-18 | Canon Kabushiki Kaisha | Method of background colour removal for porter and duff compositing |
| US20070071341A1 (en) * | 2005-09-23 | 2007-03-29 | Marcus Pfister | Method for combining two images based on eliminating background pixels from one of the images |
| US20070171381A1 (en) * | 2006-01-24 | 2007-07-26 | Kar-Han Tan | Efficient Dual Photography |
| US20080123937A1 (en) * | 2006-11-28 | 2008-05-29 | Prefixa Vision Systems | Fast Three Dimensional Recovery Method and Apparatus |
| US20080158239A1 (en) * | 2006-12-29 | 2008-07-03 | X-Rite, Incorporated | Surface appearance simulation |
| US20080279423A1 (en) * | 2007-05-11 | 2008-11-13 | Microsoft Corporation | Recovering parameters from a sub-optimal image |
| US20090086081A1 (en) * | 2006-01-24 | 2009-04-02 | Kar-Han Tan | Color-Based Feature Identification |
| US20090102857A1 (en) * | 2007-10-23 | 2009-04-23 | Kallio Kiia K | Antialiasing of two-dimensional vector images |
| US20090153673A1 (en) * | 2007-12-17 | 2009-06-18 | Electronics And Telecommunications Research Institute | Method and apparatus for accuracy measuring of 3D graphical model using images |
| US20090244062A1 (en) * | 2008-03-31 | 2009-10-01 | Microsoft | Using photo collections for three dimensional modeling |
| US20090289940A1 (en) * | 2006-11-22 | 2009-11-26 | Digital Fashion Ltd. | Computer-readable recording medium which stores rendering program, rendering apparatus and rendering method |
| US20100033482A1 (en) * | 2008-08-11 | 2010-02-11 | Interactive Relighting of Dynamic Refractive Objects | Interactive Relighting of Dynamic Refractive Objects |
| US20100103169A1 (en) * | 2008-10-29 | 2010-04-29 | Chunghwa Picture Tubes, Ltd. | Method of rebuilding 3d surface model |
| US20100201682A1 (en) * | 2009-02-06 | 2010-08-12 | The Hong Kong University Of Science And Technology | Generating three-dimensional façade models from images |
| US20110025929A1 (en) * | 2009-07-31 | 2011-02-03 | Chenyu Wu | Light Transport Matrix from Homography |
| US20110175912A1 (en) * | 2010-01-18 | 2011-07-21 | Beeler Thabo Dominik | System and method for mesoscopic geometry modulation |
| US8133115B2 (en) | 2003-10-22 | 2012-03-13 | Sony Computer Entertainment America Llc | System and method for recording and displaying a graphical path in a video game |
| US8164590B1 (en) * | 2005-11-23 | 2012-04-24 | Pixar | Methods and apparatus for determining high quality sampling data from low quality sampling data |
| US8204272B2 (en) | 2006-05-04 | 2012-06-19 | Sony Computer Entertainment Inc. | Lighting control of a user environment via a display device |
| US8243089B2 (en) | 2006-05-04 | 2012-08-14 | Sony Computer Entertainment Inc. | Implementing lighting control of a user environment |
| US8284310B2 (en) | 2005-06-22 | 2012-10-09 | Sony Computer Entertainment America Llc | Delay matching in audio/video systems |
| US8289325B2 (en) | 2004-10-06 | 2012-10-16 | Sony Computer Entertainment America Llc | Multi-pass shading |
| US8669981B1 (en) * | 2010-06-24 | 2014-03-11 | Disney Enterprises, Inc. | Images from self-occlusion |
| US8798965B2 (en) | 2009-02-06 | 2014-08-05 | The Hong Kong University Of Science And Technology | Generating three-dimensional models from images |
| US9317970B2 (en) | 2010-01-18 | 2016-04-19 | Disney Enterprises, Inc. | Coupled reconstruction of hair and skin |
| US9342817B2 (en) | 2011-07-07 | 2016-05-17 | Sony Interactive Entertainment LLC | Auto-creating groups for sharing photos |
| US9767580B2 (en) | 2013-05-23 | 2017-09-19 | Indiana University Research And Technology Corporation | Apparatuses, methods, and systems for 2-dimensional and 3-dimensional rendering and display of plenoptic images |
| US20180221117A1 (en) * | 2013-02-13 | 2018-08-09 | 3Shape A/S | Focus scanning apparatus recording color |
| US10510111B2 (en) | 2013-10-25 | 2019-12-17 | Appliance Computing III, Inc. | Image-based rendering of real spaces |
| US10559085B2 (en) | 2016-12-12 | 2020-02-11 | Canon Kabushiki Kaisha | Devices, systems, and methods for reconstructing the three-dimensional shapes of objects |
| US10724853B2 (en) | 2017-10-06 | 2020-07-28 | Advanced Scanners, Inc. | Generation of one or more edges of luminosity to form three-dimensional models of objects |
| US10786736B2 (en) | 2010-05-11 | 2020-09-29 | Sony Interactive Entertainment LLC | Placement of user information in a game space |
| RU2745414C2 (en) * | 2016-05-25 | 2021-03-24 | Кэнон Кабусики Кайся | Information-processing device, method of generating an image, control method and storage medium |
| US11429363B2 (en) * | 2017-07-31 | 2022-08-30 | Sony Interactive Entertainment Inc. | Information processing apparatus and file copying method |
| US11439508B2 (en) | 2016-11-30 | 2022-09-13 | Fited, Inc. | 3D modeling systems and methods |
| US11704537B2 (en) | 2017-04-28 | 2023-07-18 | Microsoft Technology Licensing, Llc | Octree-based convolutional neural network |
Families Citing this family (23)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7206449B2 (en) * | 2003-03-19 | 2007-04-17 | Mitsubishi Electric Research Laboratories, Inc. | Detecting silhouette edges in images |
| GB2412808B (en) * | 2004-04-02 | 2010-03-03 | Canon Europa Nv | Image-based rendering |
| GB0608841D0 (en) * | 2006-05-04 | 2006-06-14 | Isis Innovation | Scanner system and method for scanning |
| US8243071B2 (en) | 2008-02-29 | 2012-08-14 | Microsoft Corporation | Modeling and rendering of heterogeneous translucent materials using the diffusion equation |
| TWI481829B (en) * | 2011-01-07 | 2015-04-21 | Hon Hai Prec Ind Co Ltd | Image off-line programming system and method for simulating illumination environment |
| CN102074045B (en) * | 2011-01-27 | 2013-01-23 | 深圳泰山在线科技有限公司 | System and method for projection reconstruction |
| KR101849696B1 (en) * | 2011-07-19 | 2018-04-17 | 삼성전자주식회사 | Method and apparatus for obtaining informaiton of lighting and material in image modeling system |
| JP2013127774A (en) * | 2011-11-16 | 2013-06-27 | Canon Inc | Image processing device, image processing method, and program |
| FR2986892B1 (en) * | 2012-02-13 | 2014-12-26 | Total Immersion | METHOD, DEVICE AND SYSTEM FOR GENERATING A TEXTURED REPRESENTATION OF A REAL OBJECT |
| US9031357B2 (en) * | 2012-05-04 | 2015-05-12 | Microsoft Technology Licensing, Llc | Recovering dis-occluded areas using temporal information integration |
| US9836846B2 (en) * | 2013-06-19 | 2017-12-05 | Commonwealth Scientific And Industrial Research Organisation | System and method of estimating 3D facial geometry |
| WO2017048674A1 (en) * | 2015-09-14 | 2017-03-23 | University Of Florida Research Foundation, Inc. | Method for measuring bi-directional reflectance distribution function (brdf) and associated device |
| GB2543775B (en) * | 2015-10-27 | 2018-05-09 | Imagination Tech Ltd | System and methods for processing images of objects |
| JP6089133B1 (en) * | 2016-05-23 | 2017-03-01 | 三菱日立パワーシステムズ株式会社 | Three-dimensional data display device, three-dimensional data display method, and program |
| JP6429829B2 (en) | 2016-05-25 | 2018-11-28 | キヤノン株式会社 | Image processing system, image processing apparatus, control method, and program |
| IT201600091510A1 (en) * | 2016-09-12 | 2018-03-12 | Invrsion S R L | System and method for creating three-dimensional models. |
| CN108031588A (en) * | 2017-12-29 | 2018-05-15 | 深圳海桐防务装备技术有限责任公司 | Automatic spray apparatus and use its automatic painting method |
| EP3594736A1 (en) | 2018-07-12 | 2020-01-15 | Carl Zeiss Vision International GmbH | Recording system and adjustment system |
| CN109389113B (en) * | 2018-10-29 | 2020-12-15 | 大连恒锐科技股份有限公司 | A multifunctional footprint collection device |
| US10810759B2 (en) * | 2018-11-20 | 2020-10-20 | International Business Machines Corporation | Creating a three-dimensional model from a sequence of images |
| US11423573B2 (en) * | 2020-01-22 | 2022-08-23 | Uatc, Llc | System and methods for calibrating cameras with a fixed focal point |
| US12181273B2 (en) * | 2020-02-21 | 2024-12-31 | Hamamatsu Photonics K.K. | Three-dimensional measurement device |
| CN117121059A (en) * | 2021-04-07 | 2023-11-24 | 交互数字Ce专利控股有限公司 | Volumetric video with light effects support |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US4941041A (en) * | 1989-04-11 | 1990-07-10 | Kenyon Keith E | Pulfrich illusion turntable |
| US6079862A (en) * | 1996-02-22 | 2000-06-27 | Matsushita Electric Works, Ltd. | Automatic tracking lighting equipment, lighting controller and tracking apparatus |
| US6175655B1 (en) * | 1996-09-19 | 2001-01-16 | Integrated Medical Systems, Inc. | Medical imaging system for displaying, manipulating and analyzing three-dimensional images |
| US6201531B1 (en) * | 1997-04-04 | 2001-03-13 | Avid Technology, Inc. | Methods and apparatus for changing a color of an image |
| US6455835B1 (en) * | 2001-04-04 | 2002-09-24 | International Business Machines Corporation | System, method, and program product for acquiring accurate object silhouettes for shape recovery |
| US6515658B1 (en) * | 1999-07-08 | 2003-02-04 | Fujitsu Limited | 3D shape generation apparatus |
2002
- 2002-06-17 US US10/173,069 patent/US6903738B2/en not_active Expired - Fee Related
Non-Patent Citations (1)
| Title |
|---|
| Foley et al., "Computer Graphics: Principles and Practice Second Edition in C", 1996, Addison-Wesley, chapters 14 and 16. * |
Cited By (84)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9138648B2 (en) | 2001-10-10 | 2015-09-22 | Sony Computer Entertainment America Llc | System and method for dynamically loading game software for smooth game play |
| US10322347B2 (en) | 2001-10-10 | 2019-06-18 | Sony Interactive Entertainment America Llc | System and method for dynamicaly loading game software for smooth game play |
| US20040229701A1 (en) * | 2001-10-10 | 2004-11-18 | Gavin Andrew Scott | System and method for dynamically loading game software for smooth game play |
| US7034822B2 (en) * | 2002-06-19 | 2006-04-25 | Swiss Federal Institute Of Technology Zurich | System and method for producing 3D video images |
| US20040046864A1 (en) * | 2002-06-19 | 2004-03-11 | Markus Gross | System and method for producing 3D video images |
| US7864197B2 (en) * | 2002-10-30 | 2011-01-04 | Canon Kabushiki Kaisha | Method of background colour removal for porter and duff compositing |
| US20060103671A1 (en) * | 2002-10-30 | 2006-05-18 | Canon Kabushiki Kaisha | Method of background colour removal for porter and duff compositing |
| US7174039B2 (en) * | 2002-11-18 | 2007-02-06 | Electronics And Telecommunications Research Institute | System and method for embodying virtual reality |
| US20040095385A1 (en) * | 2002-11-18 | 2004-05-20 | Bon-Ki Koo | System and method for embodying virtual reality |
| US8062128B2 (en) | 2003-04-29 | 2011-11-22 | Sony Computer Entertainment America Llc | Utilizing line-of-sight vectors in a game environment |
| US20040219979A1 (en) * | 2003-04-29 | 2004-11-04 | Scott Campbell | System and method for utilizing line-of-sight volumes in a game environment |
| US20050128196A1 (en) * | 2003-10-08 | 2005-06-16 | Popescu Voicu S. | System and method for three dimensional modeling |
| US7747067B2 (en) * | 2003-10-08 | 2010-06-29 | Purdue Research Foundation | System and method for three dimensional modeling |
| US8133115B2 (en) | 2003-10-22 | 2012-03-13 | Sony Computer Entertainment America Llc | System and method for recording and displaying a graphical path in a video game |
| US20050122521A1 (en) * | 2003-12-09 | 2005-06-09 | Michael Katzlinger | Multimode reader |
| US7113285B2 (en) * | 2003-12-09 | 2006-09-26 | Beckman Coulter, Inc. | Multimode reader |
| US7098915B2 (en) * | 2004-09-27 | 2006-08-29 | Harris Corporation | System and method for determining line-of-sight volume for a specified point |
| US20060066608A1 (en) * | 2004-09-27 | 2006-03-30 | Harris Corporation | System and method for determining line-of-sight volume for a specified point |
| US8289325B2 (en) | 2004-10-06 | 2012-10-16 | Sony Computer Entertainment America Llc | Multi-pass shading |
| US8284310B2 (en) | 2005-06-22 | 2012-10-09 | Sony Computer Entertainment America Llc | Delay matching in audio/video systems |
| US20070071341A1 (en) * | 2005-09-23 | 2007-03-29 | Marcus Pfister | Method for combining two images based on eliminating background pixels from one of the images |
| US7532770B2 (en) * | 2005-09-23 | 2009-05-12 | Siemens Aktiengesellschaft | Method for combining two images based on eliminating background pixels from one of the images |
| CN1936959B (en) * | 2005-09-23 | 2011-05-18 | 西门子公司 | Method for combining two images based on eliminating background pixels from one of the images |
| US8164590B1 (en) * | 2005-11-23 | 2012-04-24 | Pixar | Methods and apparatus for determining high quality sampling data from low quality sampling data |
| US20090086081A1 (en) * | 2006-01-24 | 2009-04-02 | Kar-Han Tan | Color-Based Feature Identification |
| US20070171381A1 (en) * | 2006-01-24 | 2007-07-26 | Kar-Han Tan | Efficient Dual Photography |
| US8197070B2 (en) | 2006-01-24 | 2012-06-12 | Seiko Epson Corporation | Color-based feature identification |
| US7794090B2 (en) | 2006-01-24 | 2010-09-14 | Seiko Epson Corporation | Efficient dual photography |
| US8243089B2 (en) | 2006-05-04 | 2012-08-14 | Sony Computer Entertainment Inc. | Implementing lighting control of a user environment |
| US8204272B2 (en) | 2006-05-04 | 2012-06-19 | Sony Computer Entertainment Inc. | Lighting control of a user environment via a display device |
| US8325185B2 (en) * | 2006-11-22 | 2012-12-04 | Digital Fashion Ltd. | Computer-readable recording medium which stores rendering program, rendering apparatus and rendering method |
| US20090289940A1 (en) * | 2006-11-22 | 2009-11-26 | Digital Fashion Ltd. | Computer-readable recording medium which stores rendering program, rendering apparatus and rendering method |
| US20100295926A1 (en) * | 2006-11-28 | 2010-11-25 | Prefixa International Inc. | Fast Three Dimensional Recovery Method and Apparatus |
| US7769205B2 (en) | 2006-11-28 | 2010-08-03 | Prefixa International Inc. | Fast three dimensional recovery method and apparatus |
| US20080123937A1 (en) * | 2006-11-28 | 2008-05-29 | Prefixa Vision Systems | Fast Three Dimensional Recovery Method and Apparatus |
| US8121352B2 (en) | 2006-11-28 | 2012-02-21 | Prefixa International Inc. | Fast three dimensional recovery method and apparatus |
| US20080158239A1 (en) * | 2006-12-29 | 2008-07-03 | X-Rite, Incorporated | Surface appearance simulation |
| US9767599B2 (en) * | 2006-12-29 | 2017-09-19 | X-Rite Inc. | Surface appearance simulation |
| US20080279423A1 (en) * | 2007-05-11 | 2008-11-13 | Microsoft Corporation | Recovering parameters from a sub-optimal image |
| US8009880B2 (en) | 2007-05-11 | 2011-08-30 | Microsoft Corporation | Recovering parameters from a sub-optimal image |
| US8638341B2 (en) * | 2007-10-23 | 2014-01-28 | Qualcomm Incorporated | Antialiasing of two-dimensional vector images |
| US20090102857A1 (en) * | 2007-10-23 | 2009-04-23 | Kallio Kiia K | Antialiasing of two-dimensional vector images |
| US20090153673A1 (en) * | 2007-12-17 | 2009-06-18 | Electronics And Telecommunications Research Institute | Method and apparatus for accuracy measuring of 3D graphical model using images |
| US8350850B2 (en) | 2008-03-31 | 2013-01-08 | Microsoft Corporation | Using photo collections for three dimensional modeling |
| US20090244062A1 (en) * | 2008-03-31 | 2009-10-01 | Microsoft | Using photo collections for three dimensional modeling |
| US20100033482A1 (en) * | 2008-08-11 | 2010-02-11 | Interactive Relighting of Dynamic Refractive Objects | Interactive Relighting of Dynamic Refractive Objects |
| US20100103169A1 (en) * | 2008-10-29 | 2010-04-29 | Chunghwa Picture Tubes, Ltd. | Method of rebuilding 3d surface model |
| US20100201682A1 (en) * | 2009-02-06 | 2010-08-12 | The Hong Kong University Of Science And Technology | Generating three-dimensional façade models from images |
| US8798965B2 (en) | 2009-02-06 | 2014-08-05 | The Hong Kong University Of Science And Technology | Generating three-dimensional models from images |
| US9098926B2 (en) | 2009-02-06 | 2015-08-04 | The Hong Kong University Of Science And Technology | Generating three-dimensional façade models from images |
| US20110025929A1 (en) * | 2009-07-31 | 2011-02-03 | Chenyu Wu | Light Transport Matrix from Homography |
| US8243144B2 (en) | 2009-07-31 | 2012-08-14 | Seiko Epson Corporation | Light transport matrix from homography |
| US20110175912A1 (en) * | 2010-01-18 | 2011-07-21 | Beeler Thabo Dominik | System and method for mesoscopic geometry modulation |
| US8670606B2 (en) * | 2010-01-18 | 2014-03-11 | Disney Enterprises, Inc. | System and method for calculating an optimization for a facial reconstruction based on photometric and surface consistency |
| US9317970B2 (en) | 2010-01-18 | 2016-04-19 | Disney Enterprises, Inc. | Coupled reconstruction of hair and skin |
| US11478706B2 (en) | 2010-05-11 | 2022-10-25 | Sony Interactive Entertainment LLC | Placement of user information in a game space |
| US10786736B2 (en) | 2010-05-11 | 2020-09-29 | Sony Interactive Entertainment LLC | Placement of user information in a game space |
| US8669981B1 (en) * | 2010-06-24 | 2014-03-11 | Disney Enterprises, Inc. | Images from self-occlusion |
| US9342817B2 (en) | 2011-07-07 | 2016-05-17 | Sony Interactive Entertainment LLC | Auto-creating groups for sharing photos |
| US20180221117A1 (en) * | 2013-02-13 | 2018-08-09 | 3Shape A/S | Focus scanning apparatus recording color |
| US10383711B2 (en) * | 2013-02-13 | 2019-08-20 | 3Shape A/S | Focus scanning apparatus recording color |
| US12150836B2 (en) | 2013-02-13 | 2024-11-26 | 3Shape A/S | Focus scanning apparatus recording color |
| US12521214B2 (en) | 2013-02-13 | 2026-01-13 | 3Shape A/S | Focus scanning apparatus recording color |
| US10736718B2 (en) | 2013-02-13 | 2020-08-11 | 3Shape A/S | Focus scanning apparatus recording color |
| US9767580B2 (en) | 2013-05-23 | 2017-09-19 | Indiana University Research And Technology Corporation | Apparatuses, methods, and systems for 2-dimensional and 3-dimensional rendering and display of plenoptic images |
| US11062384B1 (en) | 2013-10-25 | 2021-07-13 | Appliance Computing III, Inc. | Image-based rendering of real spaces |
| US10510111B2 (en) | 2013-10-25 | 2019-12-17 | Appliance Computing III, Inc. | Image-based rendering of real spaces |
| US11948186B1 (en) | 2013-10-25 | 2024-04-02 | Appliance Computing III, Inc. | User interface for image-based rendering of virtual tours |
| US12266011B1 (en) | 2013-10-25 | 2025-04-01 | Appliance Computing III, Inc. | User interface for image-based rendering of virtual tours |
| US10592973B1 (en) | 2013-10-25 | 2020-03-17 | Appliance Computing III, Inc. | Image-based rendering of real spaces |
| US11783409B1 (en) | 2013-10-25 | 2023-10-10 | Appliance Computing III, Inc. | Image-based rendering of real spaces |
| US11610256B1 (en) | 2013-10-25 | 2023-03-21 | Appliance Computing III, Inc. | User interface for image-based rendering of virtual tours |
| US11449926B1 (en) | 2013-10-25 | 2022-09-20 | Appliance Computing III, Inc. | Image-based rendering of real spaces |
| US11172187B2 (en) | 2016-05-25 | 2021-11-09 | Canon Kabushiki Kaisha | Information processing apparatus, image generation method, control method, and storage medium |
| RU2745414C2 (en) * | 2016-05-25 | 2021-03-24 | Кэнон Кабусики Кайся | Information-processing device, method of generating an image, control method and storage medium |
| US11439508B2 (en) | 2016-11-30 | 2022-09-13 | Fited, Inc. | 3D modeling systems and methods |
| US12109118B2 (en) | 2016-11-30 | 2024-10-08 | Mehmet Erdem AY | 3D modeling systems and methods |
| US10559085B2 (en) | 2016-12-12 | 2020-02-11 | Canon Kabushiki Kaisha | Devices, systems, and methods for reconstructing the three-dimensional shapes of objects |
| US11704537B2 (en) | 2017-04-28 | 2023-07-18 | Microsoft Technology Licensing, Llc | Octree-based convolutional neural network |
| US11429363B2 (en) * | 2017-07-31 | 2022-08-30 | Sony Interactive Entertainment Inc. | Information processing apparatus and file copying method |
| US10724853B2 (en) | 2017-10-06 | 2020-07-28 | Advanced Scanners, Inc. | Generation of one or more edges of luminosity to form three-dimensional models of objects |
| US12169123B2 (en) | 2017-10-06 | 2024-12-17 | Visie Inc. | Generation of one or more edges of luminosity to form three-dimensional models of objects |
| US11852461B2 (en) | 2017-10-06 | 2023-12-26 | Visie Inc. | Generation of one or more edges of luminosity to form three-dimensional models of objects |
| US10890439B2 (en) | 2017-10-06 | 2021-01-12 | Advanced Scanners, Inc. | Generation of one or more edges of luminosity to form three-dimensional models of objects |
Also Published As
| Publication number | Publication date |
|---|---|
| US20030231175A1 (en) | 2003-12-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US6903738B2 (en) | Image-based 3D modeling rendering system | |
| US6831641B2 (en) | Modeling and rendering of surface reflectance fields of 3D objects | |
| US6791542B2 (en) | Modeling 3D objects with opacity hulls | |
| Matusik et al. | Acquisition and rendering of transparent and refractive objects | |
| Matusik et al. | Image-based 3D photography using opacity hulls | |
| US6639594B2 (en) | View-dependent image synthesis | |
| Lensch et al. | Image-based reconstruction of spatial appearance and geometric detail | |
| Unger et al. | Capturing and rendering with incident light fields | |
| US6792140B2 (en) | Image-based 3D digitizer | |
| McAllister et al. | Real-time rendering of real world environments | |
| JP4335589B2 (en) | How to model a 3D object | |
| Guarnera et al. | BxDF material acquisition, representation, and rendering for VR and design | |
| Lensch | Efficient, image-based appearance acquisition of real-world objects | |
| Martos et al. | Realistic virtual reproductions. Image-based modelling of geometry and appearance | |
| Debevec | Image-based techniques for digitizing environments and artifacts | |
| Verbiest et al. | Image-based rendering for photo-realistic visualization | |
| Yu | Modeling and editing real scenes with image-based techniques | |
| Ludwig et al. | Environment map based lighting for reflectance transformation images | |
| Debevec et al. | Digitizing the parthenon: Estimating surface reflectance under measured natural illumination | |
| Tetzlaff | Image-Based Relighting of 3D Objects from Flash Photographs | |
| Ngan | Image-based 3D scanning system using opacity hulls | |
| Gabr | Scene relighting and editing for improved object insertion | |
| Liu et al. | Real-Time Scene Reconstruction using Light Field Probes | |
| Jethwa | Efficient volumetric reconstruction from multiple calibrated cameras | |
| Yerex | Name of Author: Keith Yerex |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: MITSUBISHI ELECTRIC RESEARCH LABORATORIES, INC., M Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MATUSIK, WOJCIECH;PFISTER, HANSPETER;NGAN, WAI KIT ADDY;AND OTHERS;REEL/FRAME:013034/0335 Effective date: 20020614 |
|
| FPAY | Fee payment |
Year of fee payment: 4 |
|
| REMI | Maintenance fee reminder mailed | ||
| LAPS | Lapse for failure to pay maintenance fees | ||
| STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
| FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20130607 |