US20010005425A1 — Method and apparatus for reproducing a shape and a pattern in a three-dimensional scene
Publication number: US20010005425A1 (United States)
Application number: US09/781,328
Legal status: Granted
Classifications

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
 G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
A method and apparatus for reproducing a shape and a pattern in a three-dimensional scene with good precision, based on a plurality of images, by realizing processing that considers a three-dimensional model from the beginning of the processing. A plurality of images are captured; a plurality of frames to which processing is applied are selected from the captured images; correspondence between the plurality of images is extracted; a three-dimensional model to be a base is input; a three-dimensional model that defines geometric properties of a target object in the images is selected; correspondence between the images and the three-dimensional model is specified; the three-dimensional model is deformed while satisfying both the correspondence between the images and the correspondence between the images and the three-dimensional model; a pattern image to be attached to a surface is generated based on a shape of the three-dimensional model and the correspondence between the three-dimensional model and the images; and the final three-dimensional model is output.
Description
 The present invention relates to processing for reproducing a partial or entire three-dimensional scene, in which a plurality of digitized videos or animations appear, as a three-dimensional shape, and converting that shape into a three-dimensional model to be displayed in computer graphics.
 In order to reproduce a three-dimensional shape from a three-dimensional scene given as two-dimensional images, in which a three-dimensional object appears in a plurality of videos or animations, it is necessary to appropriately grasp the correspondence between the plurality of videos or between a plurality of frames (hereinafter referred to as "between images").
 FIG. 1 shows a flow chart of processing in a conventional method for reproducing a shape and a pattern in a three-dimensional scene. In FIG. 1, when two-dimensional data such as a plurality of videos or animations to be reproduced is given (Operation 101), the correspondence between images is investigated (Operation 102). In conventional methods, the mainstream approach is to obtain consistency between vertexes at coordinate positions of a three-dimensional shape; however, at this point, the consistency of each plane or side (in particular, of a pattern or an arbitrary point within each plane) is not taken into consideration.
 A three-dimensional shape alone is then reproduced based on the correspondence between vertexes (Operation 103). Here the three-dimensional shape is reproduced only from vertexes, by triangulation or the like, and sides and planes are not yet present. A geometric model is then applied (Operation 104), whereby a three-dimensional model having planes, sides, and the like is reproduced. Finally, texture is attached to each plane of the three-dimensional model to reproduce a pattern (Operation 105), which completes the reproduction of the three-dimensional shape. The reproduced three-dimensional model is output in the data format of the computer graphics system to be used (Operation 106), whereby the three-dimensional model can be used as vector data.
 Conventionally, in order to enhance the precision of reproduction of a three-dimensional shape, an appropriate model has often been utilized when investigating the correspondence between images and reproducing the three-dimensional shape. For example, JP 9(1997)-69170 A discloses a method for generating a three-dimensional model by applying a plane model containing texture pattern information to a three-dimensional shape and attaching the plane model to each surface of the three-dimensional model, so as to reproduce the three-dimensional shape.
 However, according to the above-mentioned method, depth and the like in the three-dimensional model are not taken into consideration when the texture pattern information to be modeled as a plane model is obtained. This causes a shift between the depth and the pattern image, resulting in an unnatural three-dimensional model. Further, in the case where an error occurs in obtaining the three-dimensional shape, even a portion that could be adequately approximated simply by being attached to the three-dimensional model as a plane model is excluded as inappropriate texture pattern information, so that such a portion will not be attached to the three-dimensional model.
 The present invention overcomes the above-mentioned problem, and its object is to provide a method and apparatus that realize processing in which a three-dimensional model is considered from the beginning of the processing when reproducing a shape and a pattern in a three-dimensional scene based on a plurality of images, thereby reproducing the shape and the pattern with good precision.
 In order to solve the above-mentioned problem, the method for reproducing a shape and a pattern in a three-dimensional scene of the present invention is a method for reproducing a shape and a pattern in a three-dimensional scene, forming a three-dimensional model used for computer graphics based on a plurality of images, including: capturing the plurality of images, storing the captured images, selecting a plurality of frames to which processing is applied from the stored images, and extracting correspondence between the plurality of images; inputting the three-dimensional model to be a base, storing the input three-dimensional model, selecting the three-dimensional model that defines geometric properties of a target object in the images, and specifying correspondence between the images and the three-dimensional model; deforming the three-dimensional model while satisfying both the obtained correspondence between the images and the obtained correspondence between the images and the three-dimensional model; generating a pattern image to be attached to a surface of the three-dimensional model, based on a shape of the final three-dimensional model obtained by the deforming and on the specified correspondence between the final three-dimensional model and the images; and outputting the completed final three-dimensional model.
 According to the above-mentioned structure, the shape of a three-dimensional model can be determined while the correspondence in a pattern is taken into consideration, whereby a shape or the like in a three-dimensional scene can be reproduced as a completed, natural three-dimensional model in which no pattern shift or the like occurs.
 Further, in the method for reproducing a shape and a pattern in a three-dimensional scene of the present invention, it is desirable that, in the extracting of the correspondence between the plurality of images, a user can specify the correspondence. The reason is that a user can freely add even correspondences that cannot be detected, or are unlikely to be detected, by the system, whereby a three-dimensional model closer to the real one can be reproduced.
 Further, in the method for reproducing a shape and a pattern in a three-dimensional scene of the present invention, it is preferable that, in the extracting of the correspondence between the plurality of images, the correspondence is determined based on the brightness of the images and on information obtained by processing the brightness. This is because merely extracting correspondence as a pattern cannot cope with changes in an image caused by differences in photographic exposure, the strength of a light source, and the like.
 Next, in order to solve the above-mentioned problem, the method for reproducing a shape and a pattern in a three-dimensional scene of the present invention is a method for reproducing a shape and a pattern in a three-dimensional scene, forming a three-dimensional model used for computer graphics based on a plurality of images, including: capturing the plurality of images, storing the captured images, and selecting a plurality of frames to which processing is applied from the stored images; inputting the three-dimensional model to be a base, storing the input three-dimensional model, selecting the three-dimensional model that defines geometric properties of a target object in the images, and specifying correspondence between the images and the three-dimensional model; deforming the three-dimensional model while investigating portions, between the images, that satisfy the obtained correspondence between the images and the three-dimensional model; generating a pattern image to be attached to a surface of the three-dimensional model, based on a shape of the final three-dimensional model obtained by the deforming and on the specified correspondence between the final three-dimensional model and the images; and outputting the completed final three-dimensional model.
 According to the above-mentioned structure, the shape of a three-dimensional model can be determined while the correspondence in a pattern is taken into consideration, whereby a shape or the like in a three-dimensional scene can be reproduced as a completed, natural three-dimensional model in which no pattern shift or the like occurs.
 Further, in the method for reproducing a shape and a pattern of the present invention, it is desirable that, in the deforming of the three-dimensional model and in the generating of a pattern image for a surface of the three-dimensional model, a user can specify a partial change in geometric properties of the three-dimensional model. The reason is that the three-dimensional scene is reproduced as a three-dimensional model closer to the real one by finely adjusting the application of the three-dimensional model.
 Further, in the method for reproducing a shape and a pattern in a three-dimensional scene of the present invention, it is preferable that, in the deforming of the three-dimensional model and in the generating of a pattern image for a surface of the three-dimensional model, the three-dimensional model can be replaced. This is because reproduction precision can be enhanced by providing an opportunity to use a more appropriate three-dimensional model.
 Next, in order to solve the above-mentioned problem, the apparatus for reproducing a shape and a pattern in a three-dimensional scene of the present invention is an apparatus for reproducing a shape and a pattern in a three-dimensional scene, forming a three-dimensional model used for computer graphics based on a plurality of images, including: an image input part for capturing the plurality of images; an image storing part for storing the images captured by the image input part; an image selecting part for selecting a plurality of frames to which processing is applied from the images stored in the image storing part; an inter-image correspondence obtaining part for extracting correspondence between the plurality of images; a three-dimensional model input part for inputting the three-dimensional model to be a base; a three-dimensional model storing part for storing the three-dimensional model input from the three-dimensional model input part; a three-dimensional model selecting part for selecting the three-dimensional model that defines geometric properties of a target object in the images; an image-three-dimensional model correspondence obtaining part for specifying correspondence between the images and the three-dimensional model; a three-dimensional information reproducing part for deforming the three-dimensional model while satisfying both the correspondence between the images obtained in the inter-image correspondence obtaining part and the correspondence between the images and the three-dimensional model obtained in the image-three-dimensional model correspondence obtaining part; a pattern image generating part for generating a pattern image to be attached to a surface of the three-dimensional model, based on a shape of the final three-dimensional model obtained in the three-dimensional information reproducing part and on the specified correspondence between the final three-dimensional model and the images; and a generated model output part for outputting the completed final three-dimensional model.
 According to the above-mentioned structure, the shape of a three-dimensional model can be determined while the correspondence in a pattern is taken into consideration, whereby a shape or the like in a three-dimensional scene can be reproduced as a completed, natural three-dimensional model in which no pattern shift or the like occurs.
 Further, in the apparatus for reproducing a shape and a pattern in a three-dimensional scene of the present invention, it is desirable that, in the inter-image correspondence obtaining part, a user can specify the correspondence. The reason is that a user can freely add even correspondences that cannot be detected, or are unlikely to be detected, by the system, whereby a three-dimensional model closer to the real one can be reproduced.
 Further, in the apparatus for reproducing a shape and a pattern in a three-dimensional scene of the present invention, it is preferable that, in the inter-image correspondence obtaining part, the correspondence between the plurality of images is determined based on the brightness of the images and on information obtained by processing the brightness. This is because merely extracting correspondence as a pattern cannot cope with changes in an image caused by differences in photographic exposure, the strength of a light source, and the like.
 Next, in order to solve the above-mentioned problem, the apparatus for reproducing a shape and a pattern in a three-dimensional scene of the present invention is an apparatus for reproducing a shape and a pattern in a three-dimensional scene, forming a three-dimensional model used for computer graphics based on a plurality of images, including: an image input part for capturing the plurality of images; an image storing part for storing the images captured by the image input part; an image selecting part for selecting a plurality of frames to which processing is applied from the images stored in the image storing part; a three-dimensional model input part for inputting the three-dimensional model to be a base; a three-dimensional model storing part for storing the three-dimensional model input from the three-dimensional model input part; a three-dimensional model selecting part for selecting the three-dimensional model that defines geometric properties of a target object in the images; an image-three-dimensional model correspondence obtaining part for specifying correspondence between the images and the three-dimensional model; a part for simultaneously executing investigation of inter-image correspondence and reproduction of three-dimensional information, which deforms the three-dimensional model while investigating portions, between the images, that satisfy the correspondence between the images and the three-dimensional model obtained in the image-three-dimensional model correspondence obtaining part; a pattern image generating part for generating a pattern image to be attached to a surface of the three-dimensional model, based on a shape of the final three-dimensional model obtained in the part for simultaneously executing investigation of inter-image correspondence and reproduction of three-dimensional information, and on the specified correspondence between the final three-dimensional model and the images; and a generated model output part for outputting the completed final three-dimensional model.
 According to the above-mentioned structure, the shape of a three-dimensional model can be determined while the correspondence in a pattern is taken into consideration, whereby a shape or the like in a three-dimensional scene can be reproduced as a completed, natural three-dimensional model in which no pattern shift or the like occurs.
 Further, in the apparatus for reproducing a shape and a pattern in a three-dimensional scene of the present invention, it is desirable that, in the part for simultaneously executing investigation of inter-image correspondence and reproduction of three-dimensional information and in the pattern image generating part, a user can specify a partial change in geometric properties of the three-dimensional model. The reason is that the three-dimensional scene is reproduced as a three-dimensional model closer to the real one by finely adjusting the application of the three-dimensional model.
 Further, in the apparatus for reproducing a shape and a pattern in a three-dimensional scene of the present invention, it is preferable that, in the part for simultaneously executing investigation of inter-image correspondence and reproduction of three-dimensional information and in the pattern image generating part, the three-dimensional model can be replaced. This is because reproduction precision can be enhanced by providing an opportunity to use a more appropriate three-dimensional model.
 Next, in order to solve the above-mentioned problem, the computer-readable recording medium of the present invention is a computer-readable recording medium storing a program to be executed by a computer for creating a three-dimensional model used for computer graphics based on a plurality of images, the program including: capturing the plurality of images; storing the captured images; selecting a plurality of frames to which processing is applied from the stored images; extracting correspondence between the plurality of images; inputting the three-dimensional model to be a base; storing the input three-dimensional model; selecting the three-dimensional model that defines geometric properties of a target object in the images; specifying correspondence between the images and the three-dimensional model; deforming the three-dimensional model while satisfying both the obtained correspondence between the images and the obtained correspondence between the images and the three-dimensional model; generating a pattern image to be attached to a surface of the three-dimensional model, based on a shape of the final three-dimensional model obtained by the deforming and on the specified correspondence between the final three-dimensional model and the images; and outputting the completed final three-dimensional model.
 According to the above-mentioned structure, by loading the program onto a computer for execution, the shape of a three-dimensional model can be determined while the correspondence in a pattern is taken into consideration, whereby an apparatus for reproducing a shape and a pattern in a three-dimensional scene can be realized that is capable of reproducing the scene as a completed, natural three-dimensional model in which no pattern shift or the like occurs.
 FIG. 1 is a flow chart of processing in a conventional method for reproducing a shape and a pattern in a three-dimensional scene.
 FIG. 2 is a block diagram of an apparatus for reproducing a shape and a pattern in a three-dimensional scene in Embodiment 1 of the present invention.
 FIG. 3 illustrates an image selection instructing window in one example of the present invention.
 FIG. 4 illustrates a three-dimensional model selection instructing window in one example of the present invention.
 FIG. 5 illustrates the correspondence between images in one example of the present invention.
 FIG. 6 illustrates the correspondence between the three-dimensional model and the image in one example of the present invention.
 FIG. 7 illustrates the correspondence between the reproduced three-dimensional model and the image in one example of the present invention.
 FIG. 8 illustrates the correspondence between the reproduced three-dimensional model and the image in one example of the present invention.
 FIG. 9 illustrates a reproduced three-dimensional model in one example of the present invention.
 FIG. 10 is a block diagram of an apparatus for reproducing a shape and a pattern in a three-dimensional scene in Embodiment 2 of the present invention.
 FIG. 11 is a flow chart of processing in an apparatus for reproducing a shape and a pattern in a three-dimensional scene in Embodiment 1 of the present invention.
 FIG. 12 illustrates recording media.
 Hereinafter, an apparatus for reproducing a shape and a pattern in a three-dimensional scene in Embodiment 1 of the present invention will be described with reference to the drawings. FIG. 2 is a block diagram of an apparatus for reproducing a shape and a pattern in a three-dimensional scene in Embodiment 1 of the present invention.
 In FIG. 2, reference numeral 201 denotes an image input part, 202 denotes an image storing part, 203 denotes an image selecting part, 204 denotes a three-dimensional model input part, 205 denotes a three-dimensional model storing part, 206 denotes a three-dimensional model selecting part, 207 denotes an inter-image correspondence obtaining part, 208 denotes an image-three-dimensional model correspondence obtaining part, 209 denotes a three-dimensional information reproducing part, 210 denotes a pattern image generating part, and 211 denotes a generated model output part, respectively.
 First, in the image input part 201, a plurality of images to be used are captured into the image storing part 202 as a video image or a plurality of still images by using an image input apparatus or the like. Next, in the image selecting part 203, a user selects a plurality of frames to which processing is applied.
 On the other hand, in the three-dimensional model input part 204, a three-dimensional model of an object assumed as a target is input and stored in the three-dimensional model storing part 205. Next, in the three-dimensional model selecting part 206, the user selects the three-dimensional model that best fits the object in the target image from the model group stored in the three-dimensional model storing part 205.
 In the inter-image correspondence obtaining part 207, the user specifies portions corresponding to each other between a plurality of images, and in the image-three-dimensional model correspondence obtaining part 208, the user also specifies portions corresponding to each other between each image and the three-dimensional model. For example, this corresponds to the user specifying the identical position determined based on a pattern, fine-adjusting the positions of vertexes in accordance with the three-dimensional model, and the like.
 When each correspondence is determined, in the three-dimensional information reproducing part 209, the three-dimensional model is deformed so as to satisfy the correspondences. The final shape of the three-dimensional model is thereby established, as is the final correspondence between the three-dimensional model and each image.
 Finally, in the pattern image generating part 210, a pattern image to be attached to each surface of the three-dimensional model is generated based on the final correspondence between the three-dimensional model and each image, and in the generated model output part 211, the result is converted into the data format of the three-dimensional model display system and output.
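The flow through parts 201 to 211 above can be sketched as a toy pipeline. This is an illustrative sketch only: the function names and the stand-in "deformation" below are hypothetical and not part of the patent; a real implementation would solve for shape parameters and render actual texture images.

```python
def deform_model(model, inter_image, image_model):
    # Stand-in for the three-dimensional information reproducing part (209):
    # a real implementation would solve for the shape parameters; here we
    # only record how many correspondences constrained the shape.
    deformed = dict(model)
    deformed["constraints"] = len(inter_image) + len(image_model)
    return deformed

def generate_pattern(model):
    # Stand-in for the pattern image generating part (210): one texture
    # image per face of the final model.
    return {face: f"texture_{face}" for face in model["faces"]}

def reproduce_scene(model, inter_image, image_model):
    # Parts 201-208 are assumed to have already produced the selected
    # frames, the base model, and the two correspondence sets.
    final = deform_model(model, inter_image, image_model)   # part 209
    textures = generate_pattern(final)                      # part 210
    return final, textures                                  # part 211: output

box = {"faces": [0, 1, 2, 3, 4, 5]}
final, textures = reproduce_scene(box, [("f0", "f1")], [("f0", 0)])
```

The point of the sketch is the ordering: both correspondence sets are consumed by the deformation step before any texture is generated, which is what distinguishes this pipeline from the conventional method of FIG. 1.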
 Next, an example of the apparatus for reproducing a shape and a pattern in a three-dimensional scene in Embodiment 1 will be described with reference to the drawings.
 In FIG. 2, in the image input part 201, an image is input by using a video camera, a CCD camera, or the like. In this case, a video image may or may not be decomposed into frames. However, when an image is selected in the image selecting part 203, a target frame must be specified. Therefore, a function capable of specifying a frame in accordance with the user's instruction must be provided. Such a function may be equivalent to the frame-advance display function of a commercially available video deck.
 FIG. 3 illustrates an image selection instructing window. In the case where there are a plurality of video images and the like, such an interface is useful for a user to select a video image while browsing through them.
 Further, as three-dimensional models that express a target object, a rectangular solid, a cone, a cylinder, a sphere, and the like are prepared. The three-dimensional model is not limited thereto; a composite of these shapes may also be used.
 Further, every three-dimensional model is expressed in a polygon format. Thus, as in the data formats of conventional computer graphics, a model is expressed by three-dimensional coordinates in a three-dimensional model coordinate system.
 For selection of a three-dimensional model, a model selection instructing window as shown in FIG. 4 should be prepared. The reason is to enable a user to select the most appropriate model while browsing through the three-dimensional models.
 Further, geometric properties of a three-dimensional model are expressed by using model variation rules. For example, under rules such as the following, deformation can be expressed while maintaining the geometric properties of the three-dimensional model: for a rectangular solid, each plane can be moved only in the direction perpendicular to the opposing plane; for a cylinder, each of the two bottom planes can be moved only in the direction perpendicular to the opposite bottom plane; and for a cone, the unique vertex that does not belong to the bottom plane can be moved only in the direction perpendicular to the bottom plane.
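The rectangular-solid variation rule above can be sketched as follows. The helper names `box_vertices` and `move_top_face` are hypothetical, and the parameterization (unit width, height h, depth d) follows the one introduced later in this description; the rule is enforced by changing only the coordinate along the face normal.

```python
import numpy as np

def box_vertices(h, d):
    # Rectangular solid from the patent's parameterization:
    # x in {0, 1}, y in {0, h}, z in {0, d} -- eight vertexes.
    return np.array([(x, y, z)
                     for z in (0.0, d)
                     for y in (0.0, h)
                     for x in (0.0, 1.0)])

def move_top_face(vertices, h_new):
    # Variation rule: the y = h plane may move only along its normal
    # (the y axis), so only the y coordinate of its four vertexes
    # changes; x and z are left untouched.
    v = vertices.copy()
    top = v[:, 1] == v[:, 1].max()
    v[top, 1] = h_new
    return v

box = box_vertices(h=2.0, d=3.0)
taller = move_top_face(box, h_new=5.0)
```

Because the deformation is restricted to one coordinate per rule, the model stays a rectangular solid no matter how the parameter is adjusted.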
 Next, the correspondence between images will be described with reference to FIG. 5. As shown in FIG. 5, in the case of selecting two images of a building seen at different angles as processing target frames, the correspondence between them can be determined when a user specifies the identical portion of the two images using a mouse or the like. For example, in FIG. 5, a white X mark is put in the vicinity of the center of each of the two images; the correspondence between the two images is established by the X mark. The correspondence can easily be specified by an interface such as a mouse. More specifically, a correspondence is composed of an identifier of one image, the image coordinates of the corresponding portion in that image, an identifier of the other image, and the image coordinates of the corresponding portion in the other image. At least one correspondence should be specified for reproduction of one object; a plurality of correspondences may be specified.
 Further, the correspondence between the image and the three-dimensional model makes clear where the three-dimensional model is positioned in each frame. Considering the case where a three-dimensional model is applied to an image showing a simple rectangular solid as in FIG. 6, the three-dimensional model is expressed as a wire frame, and the model is deformed by dragging with a mouse or the like so that its vertexes correspond to the outline of the object in the target image. Finally, as shown in FIG. 6, the outline of the target object in the image should coincide with the wire frame.
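The inter-image correspondence just described has a simple record structure. The class and field names below are hypothetical illustrations of the four elements listed above (two image identifiers and two coordinate pairs), not a data format defined by the patent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImagePairCorrespondence:
    """One inter-image correspondence: the same physical point
    identified in two frames (hypothetical field names)."""
    image_a: str    # identifier of one image
    point_a: tuple  # image coordinates (row, col) in image_a
    image_b: str    # identifier of the other image
    point_b: tuple  # image coordinates (row, col) in image_b

# The white X mark near the center of the two building images in FIG. 5
# would be recorded as a single correspondence of this form:
x_mark = ImagePairCorrespondence("frame_0", (120, 160), "frame_1", (118, 170))
```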
 Next, a method for determining a three-dimensional model that satisfies both the inter-image correspondence and the correspondence between the image and the three-dimensional model will be described in detail. As the three-dimensional model for illustration, a rectangular solid similar to that in FIG. 6 will be considered.
 In this case, the eight vertexes can be expressed as (0, 0, 0), (1, 0, 0), (0, h, 0), (1, h, 0), (0, 0, d), (1, 0, d), (0, h, d), and (1, h, d) in a three-dimensional coordinate system. Here h and d are variables: h represents the height and d represents the depth. By determining h and d, the shape of the target object can be determined as an entity of the rectangular solid model.
 Similarly, in the case of a regular n-angular prism, the vertexes can be expressed as (cos(2πi/n), sin(2πi/n), 0) and (cos(2πi/n), sin(2πi/n), d) (i an integer, i = 0, …, n−1). In the case of a regular n-pyramid, the vertexes can be expressed as (cos(2πi/n), sin(2πi/n), 0) (i an integer, i = 0, …, n−1) together with (0, 0, d). In either case, once the variable d is determined, the shape of the target object can be determined as an entity of the three-dimensional model. For the other three-dimensional models, parameters similar to d and h are prepared, whereby the shape of the target object can likewise be determined.
 Each of the above-mentioned vertexes has a vertex number for identification, and each plane of a model can be expressed by a list of vertex numbers. In the rectangular solid model, if vertex numbers are assigned as 0: (0, 0, 0), 1: (1, 0, 0), 2: (0, h, 0), 3: (1, h, 0), 4: (0, 0, d), 5: (1, 0, d), 6: (0, h, d), 7: (1, h, d), each plane is composed of four vertexes. Therefore, each plane can be assigned a plane number as follows: 0: 0132, 1: 4576, 2: 0154, 3: 2376, 4: 0264, and 5: 1375.
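The vertex parameterizations above can be written out directly; the function names are hypothetical, but the coordinate formulas are exactly those given in the text (unit box with parameters h and d, unit-radius prism and pyramid with parameter d).

```python
import math

def box_vertices(h, d):
    # Rectangular solid: x in {0, 1}, y in {0, h}, z in {0, d}.
    return [(x, y, z) for z in (0, d) for y in (0, h) for x in (0, 1)]

def prism_vertices(n, d):
    # Regular n-angular prism: (cos(2*pi*i/n), sin(2*pi*i/n), 0) and
    # the same ring shifted to z = d, for i = 0, ..., n-1.
    ring = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
            for i in range(n)]
    return [(x, y, 0.0) for x, y in ring] + [(x, y, d) for x, y in ring]

def pyramid_vertices(n, d):
    # Regular n-pyramid: the base ring plus the apex (0, 0, d).
    base = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n), 0.0)
            for i in range(n)]
    return base + [(0.0, 0.0, d)]
```

In each case the whole shape is pinned down by one or two scalar parameters, which is what lets the later optimization treat the model shape as a small set of unknowns.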
 In order to specify the correspondence between each image and the three-dimensional model, the three-dimensional model must be projected onto the image by perspective projection. Since there are a plurality of base images, a frame number is assigned to each image, and the variables representing the perspective projection process are determined while the above-mentioned variables h and d are adjusted, whereby a solution satisfying the correspondence between each image and the three-dimensional model can be obtained.
 When the perspective projection process is expressed mathematically, a vertex (x, y, z) of the three-dimensional model is projected onto a point (r, c) on the image as given by Equation 1.
$$
\begin{pmatrix} r \\ c \end{pmatrix}
=
\begin{pmatrix} f\,\dfrac{Y}{Z} \\[6pt] f\,\dfrac{X}{Z} \end{pmatrix},
\qquad
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}
= A \begin{pmatrix} x \\ y \\ z \end{pmatrix} + O
\tag{1}
$$
 where r and c are the row number and the column number of the image, respectively, A is a 3×3 rotation matrix, O is an origin vector representing the translation, and f represents the focal length. The rotation matrix is expressed by three rotation angles and the origin vector has three components; together with the focal length f, the projection for each frame is therefore described by seven variables.
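Equation 1 is short enough to implement directly; the sketch below assumes A and O are already known for a frame (in the patent they are among the unknowns to be solved for), and the function name is a hypothetical placeholder.

```python
import numpy as np

def project(vertex, A, O, f):
    """Project a model-space vertex onto image (row, col) per Equation 1.
    A is a 3x3 rotation matrix, O the origin vector, f the focal length."""
    X, Y, Z = A @ np.asarray(vertex, dtype=float) + np.asarray(O, dtype=float)
    return f * Y / Z, f * X / Z  # (r, c): row from Y, column from X

# Identity rotation, with the origin vector pushing the model to Z = 5:
A = np.eye(3)
O = np.array([0.0, 0.0, 5.0])
r, c = project((1.0, 2.0, 0.0), A, O, f=100.0)
```

With these example values, (X, Y, Z) = (1, 2, 5), so r = 100·2/5 = 40 and c = 100·1/5 = 20.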
Further, to express the correspondence between images, the coordinates on the three-dimensional model of the corresponding point are taken as (x, y, z), and the projection position given by Equation 1 must coincide with the specified coordinates in each image.
More specifically, in the case where there are p correspondences, the qth (q=0, . . . , p−1) correspondence is composed of the following elements: the coordinates (x_{q}, y_{q}, z_{q}) of an arbitrary point in the three-dimensional model coordinate system, the plane number S_{q} of the plane on which the point lies, one frame number I_{q,1} at which the point is observed together with its projection position (r_{q,1}, c_{q,1}) in that image, and the other frame number I_{q,2} at which the point is observed together with its projection position (r_{q,2}, c_{q,2}) in that image. Among these, (x_{q}, y_{q}, z_{q}) is unknown, so each correspondence contributes three unknowns.
From the above consideration, the unknowns are as follows: the shape parameters h and d of the model (or the single shape parameter d), the focal lengths f_{i} (i=0, . . . , n−1) of the n frames, the rotation matrices A_{i} (i=0, . . . , n−1), the origin vectors O_{i} (i=0, . . . , n−1), and the coordinates (x_{q}, y_{q}, z_{q}) (q=0, . . . , p−1) of the p correspondences. Thus, the number of unknowns is 2+7n+3p or 1+7n+3p.
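The tally of unknowns can be spelled out programmatically (a trivial sketch; the function name is ours):

```python
def count_unknowns(n_frames, n_correspondences, n_shape_params):
    """Unknowns of the simultaneous equations: the model's shape parameters
    (2 for h and d, or 1 for d alone), plus 7 per frame (focal length f_i,
    three rotation angles of A_i, three components of O_i), plus 3 per
    correspondence (the coordinates x_q, y_q, z_q)."""
    return n_shape_params + 7 * n_frames + 3 * n_correspondences
```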
In contrast, the given conditions are as follows: in the case where the model is composed of m vertexes, the image coordinates (r_{ij}, c_{ij}) (i=0, . . . , n−1, j=0, . . . , m−1) at which each vertex P_{j}=(x_{j}, y_{j}, z_{j}) (j=0, . . . , m−1) of the model is seen in the ith frame; and, for the qth (q=0, . . . , p−1) correspondence, the plane number S_{q} of the plane containing the point (x_{q}, y_{q}, z_{q}), one frame number I_{q,1} with its projection position (r_{q,1}, c_{q,1}) on that frame, and the other frame number I_{q,2} with its projection position (r_{q,2}, c_{q,2}) on that frame.
Thus, assuming that the conversion by which a point (x, y, z) in the three-dimensional model coordinate system is projected onto the ith frame is expressed as F(x, y, z; f_{i}, A_{i}, O_{i}) using the focal length f_{i}, the rotation matrix A_{i}, and the origin vector O_{i}, Simultaneous Equations 2 hold.
$$F(x_j, y_j, z_j;\, f_i, A_i, O_i) = (r_{ij}, c_{ij}) \quad (i=0,\ldots,n-1,\ j=0,\ldots,m-1)$$
$$F(x_q, y_q, z_q;\, f_{I_{q,1}}, A_{I_{q,1}}, O_{I_{q,1}}) = (r_{q,1}, c_{q,1}) \quad (q=0,\ldots,p-1)$$
$$F(x_q, y_q, z_q;\, f_{I_{q,2}}, A_{I_{q,2}}, O_{I_{q,2}}) = (r_{q,2}, c_{q,2}) \quad (q=0,\ldots,p-1) \qquad (2)$$
Further, as a restricting condition, assuming that the equation of the plane with plane number s is expressed as S(x, y, z; s), Equation 3 holds.
$$S(x_q, y_q, z_q;\, S_q) = 0 \quad (q=0,\ldots,p-1) \qquad (3)$$
By solving the above simultaneous equations, the shape parameters of the three-dimensional model can be determined, and, in addition, the projection equation of the three-dimensional model onto each image frame and the coordinates of the corresponding points are obtained.
The above simultaneous equations can be solved when the number of equations is equal to or larger than the number of variables. The solution method is not particularly limited: the general method of least squares may be used, or various methods such as maximum-likelihood estimation, the Levenberg-Marquardt method, and robust estimation may be used.
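As an illustration of the least-squares route (a self-contained toy, not the patent's implementation: the pose is fixed to the identity rotation and a known origin vector, and only the shape parameters h, d and the focal length f are estimated from synthetic vertex observations, using scipy):

```python
import numpy as np
from scipy.optimize import least_squares

def box_vertices(h, d):
    # The eight vertices of the rectangular-solid model from the text.
    return np.array([(x, y * h, z * d)
                     for z in (0, 1) for y in (0, 1) for x in (0, 1)], float)

def project(points, f, O):
    # Equation 1 with the rotation fixed to the identity for this toy.
    P = points + O
    return np.stack([f * P[:, 1] / P[:, 2], f * P[:, 0] / P[:, 2]], axis=1)

# Synthetic "observations" generated from known ground-truth parameters.
O = np.array([0.5, 0.5, 10.0])
observed = project(box_vertices(2.0, 3.0), 1.5, O)

def residuals(params):
    h, d, f = params
    return (project(box_vertices(h, d), f, O) - observed).ravel()

fit = least_squares(residuals, x0=[1.5, 2.5, 1.0])
```

Here sixteen residual pairs over-determine the three unknowns; a maximum-likelihood or robust estimator would simply replace the quadratic loss.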
When the three-dimensional model is determined by the above procedure, a pattern image is obtained based on the correspondence between the final three-dimensional model and the images. FIG. 9 shows an example of a generated pattern image in the case where the final three-dimensional model is determined as represented by the white lines in FIGS. 7 and 8.
The pattern of the three-dimensional model is obtained by converting the given image information into the image that would be observed if each plane of the three-dimensional model were viewed from the front. In the case where a partial image is seen in a plurality of frames as shown in FIGS. 7 and 8, a pattern image as shown in FIG. 9 should be generated by overlapping the front views obtained from each frame. The conversion equation for this image information can be derived from the parameters representing the projection process and the projection geometry.
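The front-view conversion can be sketched as a per-pixel resampling (nearest-neighbor for brevity; the function and argument names are illustrative, and the quad corners are assumed ordered so that corners[1] and corners[3] are adjacent to corners[0]):

```python
import numpy as np

def rectify_plane(image, f, A, O, corners, out_h, out_w):
    """Generate the pattern image of one model plane: walk a regular grid
    over the plane in model coordinates, project each grid point into the
    frame with Equation 1, and copy the nearest pixel."""
    p0 = np.asarray(corners[0], float)
    du = np.asarray(corners[1], float) - p0   # one edge of the quad
    dv = np.asarray(corners[3], float) - p0   # the adjacent edge
    out = np.zeros((out_h, out_w) + image.shape[2:], dtype=image.dtype)
    for i in range(out_h):
        for j in range(out_w):
            p = p0 + (j / (out_w - 1)) * du + (i / (out_h - 1)) * dv
            X, Y, Z = A @ p + O
            r, c = int(round(f * Y / Z)), int(round(f * X / Z))
            if 0 <= r < image.shape[0] and 0 <= c < image.shape[1]:
                out[i, j] = image[r, c]
    return out
```

Averaging the outputs of this function over several frames would give the overlapped pattern image described above.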
The information regarding the three-dimensional model thus obtained is composed of the coordinates of each vertex of the three-dimensional model, a graph of the vertexes forming the sides and planes, and the pattern image attached to each plane. Such information should be output in accordance with an existing format such as VRML (Virtual Reality Modeling Language).
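A minimal sketch of VRML97 output for the vertex/plane graph (texture fields omitted for brevity; the helper name is ours):

```python
def to_vrml(vertices, planes):
    """Emit the model as a VRML97 IndexedFaceSet: a coordinate list plus
    face index cycles, each terminated by -1 as VRML requires."""
    coords = ",\n      ".join("%g %g %g" % v for v in vertices)
    index = ",\n      ".join(" ".join(str(i) for i in p) + " -1" for p in planes)
    return ("#VRML V2.0 utf8\n"
            "Shape {\n"
            "  geometry IndexedFaceSet {\n"
            "    coord Coordinate { point [\n      " + coords + "\n    ] }\n"
            "    coordIndex [\n      " + index + "\n    ]\n"
            "  }\n"
            "}\n")
```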
As described above, in Embodiment 1, the shape of the three-dimensional model can be determined while the correspondence of the pattern is also taken into consideration, so the shape and the like in a three-dimensional scene can be reproduced as a natural three-dimensional model in which no shift of the pattern occurs in the completed model.
 Embodiment 2 of the present invention will be described with reference to the drawings.
FIG. 10 is a block diagram of an apparatus for reproducing a shape and a pattern in a three-dimensional scene in Embodiment 2 of the present invention. In FIG. 10, reference numeral 301 denotes an image input part, 302 denotes an image storing part, 303 denotes an image selecting part, 304 denotes a three-dimensional model input part, 305 denotes a three-dimensional model storing part, 306 denotes a three-dimensional model selecting part, 307 denotes an image—three-dimensional model correspondence obtaining part, 308 denotes a part for simultaneously executing investigation of inter-image correspondence and reproduction of three-dimensional information, 309 denotes a pattern image generating part, and 310 denotes a generated model output part.
FIG. 10 differs from FIG. 2, the block diagram of the apparatus for reproducing a shape and a pattern in a three-dimensional scene in Embodiment 1 of the present invention, mainly in the method for obtaining the correspondence between the images and between the images and the three-dimensional model.
In FIG. 10, first, in the same way as in FIG. 2, in the image input part 301, a plurality of images to be used are captured into the image storing part 302 as a video image or a plurality of still images by using an image input apparatus or the like. Next, in the image selecting part 303, a user selects a plurality of frames to which processing is applied.
On the other hand, in the three-dimensional model input part 304, a three-dimensional model of an object assumed as a target is input and stored in the three-dimensional model storing part 305. Next, in the three-dimensional model selecting part 306, the user selects the three-dimensional model that best fits the object in the target image from the model group stored in the three-dimensional model storing part 305.
In the image—three-dimensional model correspondence obtaining part 307, the user specifies portions corresponding to each other between each selected image and the three-dimensional model. For example, the user specifies identical positions determined from a pattern, fine-adjusts the position of a vertex in accordance with the three-dimensional model, and so on.
When the correspondence between each image and the three-dimensional model is determined, in the part for simultaneously executing investigation of inter-image correspondence and reproduction of three-dimensional information 308, the three-dimensional model is deformed so as to satisfy the correspondence between each image and the three-dimensional model while portions corresponding to each other between the respective images are investigated. The final shape of the three-dimensional model is thereby established, together with the final correspondence between the three-dimensional model and the images.
Finally, in the pattern image generating part 309, a pattern image to be attached to each surface of the three-dimensional model is generated based on the final correspondence between the three-dimensional model and the images, and in the generated model output part 310, the result is converted into the data format of a three-dimensional model display system and output.
As described above, when investigation of the correspondence between the images and reproduction of the three-dimensional information are executed simultaneously, the method for determining the three-dimensional model does not use the correspondences of Embodiment 1; instead, the condition that the pattern information at an identical portion of the three-dimensional model is equal between frames is expressed mathematically and added to the simultaneous equations described in Embodiment 1.
For example, it is assumed that the color of an arbitrary point (x, y, z) on the three-dimensional model in the ith frame can be expressed by a three-dimensional vector function C(x, y, z; a_{i}). Herein, a_{i} is a parameter vector that converts a color of the three-dimensional model into the color seen in the ith frame; it accounts, for example, for brightness changes caused by adjustment of the camera diaphragm when each frame is shot.
In the case where a point (r_{i}, c_{i}) in the ith (0≦i≦n−1) frame lies on a plane s, the coordinates (x, y, z) on the three-dimensional model can be obtained by solving Simultaneous Equations 4.
$$\left\{\begin{aligned} \begin{pmatrix} r_i \\ c_i \end{pmatrix} &= \begin{pmatrix} f_i\,\dfrac{Y}{Z} \\[6pt] f_i\,\dfrac{X}{Z} \end{pmatrix}, \\ \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} &= A_i \begin{pmatrix} x \\ y \\ z \end{pmatrix} + O_i, \\ S(x, y, z; s) &= 0 \end{aligned}\right. \qquad (4)$$

Further, the same point is projected onto the point (r_{j}, c_{j}) that satisfies Simultaneous Equations 5 in the jth (j≠i, 0≦j≦n−1) frame.
$$\left\{\begin{aligned} \begin{pmatrix} r_j \\ c_j \end{pmatrix} &= \begin{pmatrix} f_j\,\dfrac{Y}{Z} \\[6pt] f_j\,\dfrac{X}{Z} \end{pmatrix}, \\ \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} &= A_j \begin{pmatrix} x \\ y \\ z \end{pmatrix} + O_j \end{aligned}\right. \qquad (5)$$

Assuming that the color at a point (r, c) of the ith frame is expressed by a vector I_{i}(r, c), the pattern-matching condition described above can be expressed by I_{i}(r_{i}, c_{i}) = C(x, y, z; a_{i}) and I_{j}(r_{j}, c_{j}) = C(x, y, z; a_{j}). By setting up these equations for all points of the screen within the region covered by the three-dimensional model, simultaneous equations can be formulated. Then, by solving these simultaneous equations together with Simultaneous Equations 2, the following unknowns of the three-dimensional model and the projection can be obtained without determining the correspondence between the respective images.
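Solving Simultaneous Equations 4 amounts to intersecting the viewing ray of a pixel with the model plane; a sketch (assuming numpy, with the plane given as a normal/offset pair n·p = k rather than the patent's S(x, y, z; s) form):

```python
import numpy as np

def backproject_to_plane(r, c, f, A, O, n, k):
    """Solve Simultaneous Equations 4 for (x, y, z): the pixel (r, c) fixes a
    viewing ray X = c*Z/f, Y = r*Z/f through the camera center, and
    intersecting that ray with the plane n . p = k gives the model point."""
    n = np.asarray(n, float)
    dir_model = A.T @ np.array([c / f, r / f, 1.0])  # ray direction, model coords
    origin_model = -A.T @ np.asarray(O, float)       # camera center, model coords
    t = (k - n @ origin_model) / (n @ dir_model)     # ray parameter at the plane
    return origin_model + t * dir_model
```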
Specifically, the unknowns are the shape parameters h and d of the model (or the single shape parameter d), the focal lengths f_{i} (i=0, . . . , n−1) of the n frames, the rotation matrices A_{i} (i=0, . . . , n−1), and the origin vectors O_{i} (i=0, . . . , n−1).
In order to solve these simultaneous equations, the number of equations expressing the pattern-matching condition must be larger than n times the dimension of a_{i}. The solution method is not particularly limited: the general method of least squares may be used, or various methods such as maximum-likelihood estimation, the Levenberg-Marquardt method, and robust estimation may be used.
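The pattern-matching condition can be packed into a residual vector for any of those solvers (a sketch; the sampling structure and names are illustrative):

```python
import numpy as np

def pattern_residuals(frames, samples, C, a):
    """Residuals of I_i(r_i, c_i) = C(x, y, z; a_i): for every sampled model
    point, the observed color in frame i minus the model color under that
    frame's color-adjustment parameters a_i.  `samples` holds triples
    (frame index i, pixel (r, c), model point (x, y, z))."""
    res = [frames[i][r, c] - C(x, y, z, a[i]) for i, (r, c), (x, y, z) in samples]
    return np.ravel(res)
```

A least-squares solver would drive this vector toward zero jointly with the projection residuals of Simultaneous Equations 2.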
As described above, in Embodiment 2, the correspondence of the pattern does not need to be determined beforehand in order to determine the shape of the three-dimensional model; the pattern is taken into account at the time the three-dimensional information is reproduced. Thus, the shape and the like in a three-dimensional scene can be reproduced as a natural three-dimensional model in which no shift of the pattern occurs in the completed model.
Next, the processing operations of a program for realizing an apparatus for reproducing a shape and a pattern in a three-dimensional scene in Embodiment 1 of the present invention will be described. FIG. 11 is a flow chart illustrating the processing of such a program.
Referring to FIG. 11, a plurality of target images are input (Operation 401). Thereafter, the correspondence between the plurality of images is investigated on one hand (Operation 402), and an appropriate three-dimensional model is applied so as to determine the correspondence between each image and the three-dimensional model on the other hand (Operation 403). When each correspondence is determined, the three-dimensional model is deformed while satisfying the correspondences, to reproduce the three-dimensional information (Operation 404). Finally, a pattern image is attached to each surface of the three-dimensional model based on each correspondence (Operation 405), and the three-dimensional model is output (Operation 406).
Examples of a recording medium storing a program for realizing an apparatus for reproducing a shape and a pattern in a three-dimensional scene in Embodiment 1 of the present invention include a storage apparatus 121 provided at the end of a communication line, a recording medium 123 such as a hard disk or a RAM of a computer, and a portable recording medium 122 such as a CD-ROM 1221 or a floppy disk 1222, as illustrated by the examples of recording media in FIG. 12. In execution, the program is loaded into main memory and executed.
Similarly, examples of a recording medium storing the three-dimensional model data generated by an apparatus for reproducing a shape and a pattern in a three-dimensional scene in Embodiment 1 of the present invention include a storage apparatus 121 provided at the end of a communication line, a recording medium 123 such as a hard disk or a RAM of a computer, and a portable recording medium 122 such as a CD-ROM 1221 or a floppy disk 1222, as illustrated in FIG. 12. Such a recording medium is read by a computer when the apparatus of the present invention is used.
As described above, according to the method and apparatus for reproducing a shape and a pattern in a three-dimensional scene of the present invention, the shape of the three-dimensional model can be deformed while the consistency of the pattern on the surface of the three-dimensional model is taken into consideration. Therefore, it becomes possible to reproduce a three-dimensional model that is close to the actual entity of the three-dimensional scene.
Claims (17)
1. A method for reproducing a shape and a pattern in a three-dimensional scene forming a three-dimensional model used for computer graphics based on a plurality of images, comprising:
capturing the plurality of images, storing the captured images, selecting a plurality of frames to which processing is applied from the stored images, and extracting correspondence between the plurality of images;
inputting the three-dimensional model to be a base, storing the input three-dimensional model, selecting the three-dimensional model that defines geometric properties of a target object in the images, and specifying correspondence between the images and the three-dimensional model;
deforming the three-dimensional model while satisfying both the obtained correspondence between the images and the obtained correspondence between the images and the three-dimensional model;
generating a pattern image to be attached to a surface of the three-dimensional model, based on a shape of the final three-dimensional model obtained by the deforming, and the specified correspondence between the final three-dimensional model and the images; and
outputting the completed final three-dimensional model.
2. A method for reproducing a shape and a pattern in a three-dimensional scene according to claim 1, wherein, in the extracting of the correspondence between the plurality of images, a user can specify the correspondence.
3. A method for reproducing a shape and a pattern in a three-dimensional scene according to claim 1, wherein, in the extracting of the correspondence between the plurality of images, the correspondence between the plurality of images is determined based on brightness of the images and information obtained by processing the brightness.
4. A method for reproducing a shape and a pattern in a three-dimensional scene forming a three-dimensional model used for computer graphics based on a plurality of images, comprising:
capturing the plurality of images, storing the captured images, and selecting a plurality of frames to which processing is applied from the stored images;
inputting the three-dimensional model to be a base, storing the input three-dimensional model, selecting the three-dimensional model that defines geometric properties of a target object in the images, and specifying correspondence between the images and the three-dimensional model;
deforming the three-dimensional model while investigating portions, between the images, which satisfy the obtained correspondence between the images and the three-dimensional model;
generating a pattern image to be attached to a surface of the three-dimensional model, based on a shape of the final three-dimensional model obtained by the deforming, and the specified correspondence between the final three-dimensional model and the images; and
outputting the completed final three-dimensional model.
5. A method for reproducing a shape and a pattern in a three-dimensional scene according to claim 1, wherein, in the deforming of the three-dimensional model and in the generating of a pattern image to a surface of the three-dimensional model, a user can specify a partial change in geometric properties of the three-dimensional model.
6. A method for reproducing a shape and a pattern in a three-dimensional scene according to claim 4, wherein, in the deforming of the three-dimensional model and in the generating of a pattern image to a surface of the three-dimensional model, a user can specify a partial change in geometric properties of the three-dimensional model.
7. A method for reproducing a shape and a pattern in a three-dimensional scene according to claim 1, wherein, in the deforming of the three-dimensional model and in the generating of a pattern image to a surface of the three-dimensional model, the three-dimensional model can be replaced.
8. A method for reproducing a shape and a pattern in a three-dimensional scene according to claim 4, wherein, in the deforming of the three-dimensional model and in the generating of a pattern image to a surface of the three-dimensional model, the three-dimensional model can be replaced.
9. An apparatus for reproducing a shape and a pattern in a three-dimensional scene forming a three-dimensional model used for computer graphics based on a plurality of images, comprising:
an image input part for capturing the plurality of images;
an image storing part for storing the images captured from the image input part;
an image selecting part for selecting a plurality of frames to which processing is applied from the images stored in the image storing part;
an inter-image correspondence obtaining part for extracting correspondence between the plurality of images;
a three-dimensional model input part for inputting the three-dimensional model to be a base;
a three-dimensional model storing part for storing the three-dimensional model input from the three-dimensional model input part;
a three-dimensional model selecting part for selecting the three-dimensional model that defines geometric properties of a target object in the images;
an image—three-dimensional model correspondence obtaining part for specifying correspondence between the images and the three-dimensional model;
a three-dimensional information reproducing part for deforming the three-dimensional model while satisfying both the correspondence between the images obtained in the inter-image correspondence obtaining part and the correspondence between the images and the three-dimensional model obtained in the image—three-dimensional model correspondence obtaining part;
a pattern image generating part for generating a pattern image to be attached to a surface of the three-dimensional model, based on a shape of the final three-dimensional model obtained in the three-dimensional information reproducing part, and the specified correspondence between the final three-dimensional model and the images; and
a generated model output part for outputting the completed final three-dimensional model.
10. An apparatus for reproducing a shape and a pattern in a three-dimensional scene according to claim 9, wherein, in the inter-image correspondence obtaining part, a user can specify the correspondence.
11. An apparatus for reproducing a shape and a pattern in a three-dimensional scene according to claim 9, wherein, in the inter-image correspondence obtaining part, the correspondence between the plurality of images is determined based on brightness of the images and information obtained by processing the brightness.
12. An apparatus for reproducing a shape and a pattern in a three-dimensional scene forming a three-dimensional model used for computer graphics based on a plurality of images, comprising:
an image input part for capturing the plurality of images;
an image storing part for storing the images captured from the image input part;
an image selecting part for selecting a plurality of frames to which processing is applied from the images stored in the image storing part;
a three-dimensional model input part for inputting the three-dimensional model to be a base;
a three-dimensional model storing part for storing the three-dimensional model input from the three-dimensional model input part;
a three-dimensional model selecting part for selecting the three-dimensional model that defines geometric properties of a target object in the images;
an image—three-dimensional model correspondence obtaining part for specifying correspondence between the images and the three-dimensional model;
a part for simultaneously executing investigation of inter-image correspondence and reproduction of three-dimensional information, for deforming the three-dimensional model while investigating portions, between the images, which satisfy the correspondence between the images and the three-dimensional model obtained in the image—three-dimensional model correspondence obtaining part;
a pattern image generating part for generating a pattern image to be attached to a surface of the three-dimensional model, based on a shape of the final three-dimensional model obtained in the part for simultaneously executing investigation of inter-image correspondence and reproduction of three-dimensional information, and the specified correspondence between the final three-dimensional model and the images; and
a generated model output part for outputting the completed final three-dimensional model.
13. An apparatus for reproducing a shape and a pattern in a three-dimensional scene according to claim 9, wherein, in the part for simultaneously executing investigation of inter-image correspondence and reproduction of three-dimensional information, and in the pattern image generating part, a user can specify a partial change in geometric properties of the three-dimensional model.
14. An apparatus for reproducing a shape and a pattern in a three-dimensional scene according to claim 12, wherein, in the part for simultaneously executing investigation of inter-image correspondence and reproduction of three-dimensional information, and in the pattern image generating part, a user can specify a partial change in geometric properties of the three-dimensional model.
15. An apparatus for reproducing a shape and a pattern in a three-dimensional scene according to claim 9, wherein, in the part for simultaneously executing investigation of inter-image correspondence and reproduction of three-dimensional information, and in the pattern image generating part, the three-dimensional model can be replaced.
16. An apparatus for reproducing a shape and a pattern in a three-dimensional scene according to claim 12, wherein, in the part for simultaneously executing investigation of inter-image correspondence and reproduction of three-dimensional information, and in the pattern image generating part, the three-dimensional model can be replaced.
17. A computer-readable recording medium storing a program to be executed by a computer for creating a three-dimensional model used for computer graphics based on a plurality of images, the program comprising:
capturing the plurality of images;
storing the captured images;
selecting a plurality of frames to which processing is applied from the stored images;
extracting correspondence between the plurality of images;
inputting the three-dimensional model to be a base;
storing the input three-dimensional model;
selecting the three-dimensional model that defines geometric properties of a target object in the images;
specifying correspondence between the images and the three-dimensional model;
deforming the three-dimensional model while satisfying both the obtained correspondence between the images and the obtained correspondence between the images and the three-dimensional model;
generating a pattern image to be attached to a surface of the three-dimensional model, based on a shape of the final three-dimensional model obtained by the deforming, and the specified correspondence between the final three-dimensional model and the images; and
outputting the completed final three-dimensional model.
Priority Applications (3)
- JP10234759, filed 1998-08-20
- JP23475998A, filed 1998-08-20 (granted as JP3954211B2): Method and apparatus for restoring shape and pattern in 3D scene
- PCT/JP99/03437, filed 1999-06-25

Related Parent Applications (1)
- PCT/JP99/03437 (continuation), filed 1999-06-25
Publications (2)
- US20010005425A1, published 2001-06-28
- US6396491B2, granted 2002-05-28

Family ID: 16975917

Family Applications (1)
- US09/781,328, filed 2001-02-13, granted as US6396491B2 (expired, fee related): Method and apparatus for reproducing a shape and a pattern in a three-dimensional scene

Country Status (2)
- US: US6396491B2
- JP: JP3954211B2
Family Cites Families (8)
Publication number  Priority date  Publication date  Assignee  Title 

EP0726543B1 (en)  1995-02-07  2003-09-10  Canon Kabushiki Kaisha  Image processing method and apparatus therefor 
JP3703178B2 (en)  1995-09-01  2005-10-05  キヤノン株式会社  Method and apparatus for reconstructing shape and surface pattern of 3D scene 
JPH0997344A (en)  1995-09-29  1997-04-08  Fujitsu Ltd  Method and system for texture generation 
JP3596959B2 (en)  1995-11-17  2004-12-02  富士通株式会社  Texture editing system 
US6304263B1 (en) *  1996-06-05  2001-10-16  Hyper3D Corp.  Three-dimensional display system: apparatus and method 
CA2250021C (en) *  1997-05-19  2007-02-06  Matsushita Electric Industrial Co., Ltd.  Graphic display apparatus, synchronous reproduction method, and AV synchronous reproduction apparatus 
JP3813343B2 (en) *  1997-09-09  2006-08-23  三洋電機株式会社  3D modeling equipment 
US6342887B1 (en) *  1998-11-18  2002-01-29  Earl Robert Munroe  Method and apparatus for reproducing lighting effects in computer animated objects 

1998
 1998-08-20 JP JP23475998A patent/JP3954211B2/en not_active Expired - Fee Related

2001
 2001-02-13 US US09/781,328 patent/US6396491B2/en not_active Expired - Fee Related
Cited By (36)
Publication number  Priority date  Publication date  Assignee  Title 

US20060232583A1 (en) *  2000-03-28  2006-10-19  Michael Petrov  System and method of three-dimensional image capture and modeling 
US7474803B2 (en)  2000-03-28  2009-01-06  Enliven Marketing Technologies Corporation  System and method of three-dimensional image capture and modeling 
US7453456B2 (en)  2000-03-28  2008-11-18  Enliven Marketing Technologies Corporation  System and method of three-dimensional image capture and modeling 
US20020050988A1 (en) *  2000-03-28  2002-05-02  Michael Petrov  System and method of three-dimensional image capture and modeling 
US7193633B1 (en) *  2000-04-27  2007-03-20  Adobe Systems Incorporated  Method and apparatus for image assisted modeling of three-dimensional scenes 
US20030179218A1 (en) *  2002-03-22  2003-09-25  Martins Fernando C. M.  Augmented reality system 
US7301547B2 (en) *  2002-03-22  2007-11-27  Intel Corporation  Augmented reality system 
US8577128B2 (en)  2002-03-27  2013-11-05  Sanyo Electric Co., Ltd.  Method and apparatus for processing three-dimensional images 
US8577127B2 (en)  2002-03-27  2013-11-05  Sanyo Electric Co., Ltd.  Method and apparatus for processing three-dimensional images 
US20050089212A1 (en) *  2002-03-27  2005-04-28  Sanyo Electric Co., Ltd.  Method and apparatus for processing three-dimensional images 
US8724886B2 (en)  2002-03-27  2014-05-13  Sanyo Electric Co., Ltd.  Method and apparatus for processing three-dimensional images 
US8879824B2 (en)  2002-03-27  2014-11-04  Sanyo Electric Co., Ltd.  Method and apparatus for processing three-dimensional images 
US8559703B2 (en)  2002-03-27  2013-10-15  Sanyo Electric Co., Ltd.  Method and apparatus for processing three-dimensional images 
US20110103680A1 (en) *  2002-03-27  2011-05-05  Sanyo Electric Co., Ltd.  Method and apparatus for processing three-dimensional images 
US20110157174A1 (en) *  2002-03-27  2011-06-30  Sanyo Electric Co., Ltd.  Method and apparatus for processing three-dimensional images 
US8131064B2 (en)  2002-03-27  2012-03-06  Sanyo Electric Co., Ltd.  Method and apparatus for processing three-dimensional images 
US8254668B2 (en)  2002-03-27  2012-08-28  Sanyo Electric Co., Ltd.  Method and apparatus for processing three-dimensional images 
US8369607B2 (en) *  2002-03-27  2013-02-05  Sanyo Electric Co., Ltd.  Method and apparatus for processing three-dimensional images 
US8472702B2 (en)  2002-03-27  2013-06-25  Sanyo Electric Co., Ltd.  Method and apparatus for processing three-dimensional images 
US8417024B2 (en)  2002-03-27  2013-04-09  Sanyo Electric Co., Ltd.  Method and apparatus for processing three-dimensional images 
US20040105573A1 (en) *  2002-10-15  2004-06-03  Ulrich Neumann  Augmented virtual environments 
US7583275B2 (en) *  2002-10-15  2009-09-01  University Of Southern California  Modeling and video projection for augmented virtual environments 
US7064767B2 (en) *  2002-11-19  2006-06-20  Fujitsu Limited  Image solution processing method, processing apparatus, and program 
US20040155886A1 (en) *  2002-11-19  2004-08-12  Fujitsu Limited  Image simulation processing method, processing apparatus, and program 
US20060023228A1 (en) *  2004-06-10  2006-02-02  Geng Zheng J  Custom fit facial, nasal, and nostril masks 
US10133928B2 (en) *  2006-05-02  2018-11-20  Digitalglobe, Inc.  Advanced semi-automated vector editing in two and three dimensions 
US20160012276A1 (en) *  2006-05-02  2016-01-14  Digitalglobe, Inc.  Advanced semi-automated vector editing in two and three dimensions 
US9171425B2 (en)  2007-07-13  2015-10-27  INGENIO, Filiale de Loto-Québec Inc.  Gaming device with interactive spin action visual effects 
US9805503B2 (en) *  2011-04-07  2017-10-31  Ebay Inc.  Item model based on descriptor and images 
US20130257868A1 (en) *  2011-04-07  2013-10-03  Ebay Inc.  Item model based on descriptor and images 
US10157496B2 (en)  2011-04-07  2018-12-18  Ebay Inc.  Item model based on descriptor and images 
US8963916B2 (en)  2011-08-26  2015-02-24  Reincloud Corporation  Coherent presentation of multiple reality and interaction models 
US9274595B2 (en)  2011-08-26  2016-03-01  Reincloud Corporation  Coherent presentation of multiple reality and interaction models 
US20130235079A1 (en) *  2011-08-26  2013-09-12  Reincloud Corporation  Coherent presentation of multiple reality and interaction models 
US9792479B2 (en) *  2011-11-29  2017-10-17  Lucasfilm Entertainment Company Ltd.  Geometry tracking 
US20140147014A1 (en) *  2011-11-29  2014-05-29  Lucasfilm Entertainment Company Ltd.  Geometry tracking 
Also Published As
Publication number  Publication date 

JP3954211B2 (en)  2007-08-08 
JP2000067267A (en)  2000-03-03 
US6396491B2 (en)  2002-05-28 
Similar Documents
Publication  Publication Date  Title 

EP1424655B1 (en)  A method of creating 3D facial models starting from facial images  
US5969722A (en)  Methods and apparatus for creation of three-dimensional wire frames and for three-dimensional stereo morphing  
US7106348B2 (en)  Texture information assignment method, object extraction method, three-dimensional model generating method, and apparatus thereof  
US6636234B2 (en)  Image processing apparatus for interpolating and generating images from an arbitrary view point  
JP3512992B2 (en)  Image processing apparatus and image processing method  
EP0526881B1 (en)  Three-dimensional model processing method, and apparatus therefor  
US8350850B2 (en)  Using photo collections for three dimensional modeling  
US5692117A (en)  Method and apparatus for producing animated drawings and in-between drawings  
US7365744B2 (en)  Methods and systems for image modification  
US6518963B1 (en)  Method and apparatus for generating patches from a 3D mesh model  
US6016148A (en)  Automated mapping of facial images to animation wireframes topologies  
Alexander et al.  The digital emily project: Achieving a photorealistic digital actor  
KR20150104073A (en)  Methodology for 3d scene reconstruction from 2d image sequences  
DE69924700T2 (en)  Method for displaying graphic objects represented by surface elements  
JP2006249618A (en)  Virtual try-on device  
US20060061569A1 (en)  Pseudo 3D image creation device, pseudo 3D image creation method, and pseudo 3D image display system  
US5790713A (en)  Three-dimensional computer graphics image generator  
US7027050B1 (en)  3D computer graphics processing apparatus and method  
EP1298596B1 (en)  Morphing method for structure shape  
DE69732663T2 (en)  Method for generating and changing 3D models and correlation of such models with 2D pictures  
US7453455B2 (en)  Image-based rendering and editing method and apparatus  
US5864632A (en)  Map editing device for assisting updating of a three-dimensional digital map  
US20110216090A1 (en)  Real-time interactive augmented reality system and method and recording medium storing program for implementing the method  
EP1154379A2 (en)  Object region data describing method and object region data creating apparatus  
JP3184327B2 (en)  3D graphics processing method and apparatus 
Legal Events
Date  Code  Title  Description 

AS  Assignment 
Owner name: FUJITSU LIMITED, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WATANABE, MASAKI;SHIITANI, SHUICHI;OOTA, MASAAKI;AND OTHERS;REEL/FRAME:011555/0929 Effective date: 2001-02-05 

FPAY  Fee payment 
Year of fee payment: 4 

FPAY  Fee payment 
Year of fee payment: 8 

REMI  Maintenance fee reminder mailed  
LAPS  Lapse for failure to pay maintenance fees  
STCH  Information on status: patent discontinuation 
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 

FP  Expired due to failure to pay maintenance fee 
Effective date: 2014-05-28