WO2004114224A1 - Virtual Viewpoint Image Generation Method, Three-Dimensional Image Display Method, and Apparatus - Google Patents
- Publication number
- WO2004114224A1 (PCT/JP2004/008638)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- projection
- subject
- point
- viewpoint
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
- G02B30/50—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images the image being built up from image elements distributed over a 3D volume, e.g. voxels
Definitions
- the present invention relates to a technique for estimating information on a three-dimensional shape of an object from a plurality of images and generating an image using the information.
- the technology of the present invention can be applied to a system that supports visual communication such as a videophone.
- there are methods that display a three-dimensional image of a subject using a plurality of images of the subject taken under different conditions, or that generate an image of the subject as viewed from a virtual viewpoint (hereinafter referred to as a virtual viewpoint image).
- as a method of displaying a three-dimensional image of an object, there is, for example, a method using a display having a plurality of image display surfaces, such as a DFD (Depth-Fused 3-D) display.
- the DFD is a display in which a plurality of image display surfaces are overlapped at a certain interval (for example, see Document 1: Japanese Patent No. 3022558).
- the DFDs are roughly classified into a luminance modulation type and a transmission type.
- when displaying an image of the object on the DFD, for example, a two-dimensional image of the object to be displayed is displayed on each image display surface.
- when the DFD is of the luminance modulation type, the luminances of the pixels that overlap each other on the display surfaces, as viewed from a preset observer's viewpoint (reference viewpoint), are set according to the shape of the object in the depth direction and then displayed.
- at a point on the object close to the observer, the luminance of the pixel on the front image display surface is increased; at a point farther from the observer, the luminance of the pixel on the rear display surface is increased.
- an observer who observes the images displayed on the respective image display surfaces of the DFD thereby perceives a three-dimensional (stereoscopic) image of the object.
- when the DFD is of the transmissive type, the transmittances of the pixels that overlap each other on the image display surfaces, as viewed from the preset observer's viewpoint (reference viewpoint), are set in proportion to the shape of the object in the depth direction and then displayed.
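As an illustration of the luminance-modulation principle described above (a minimal sketch, not taken from the patent text; the linear split and the function name are assumptions), a point lying between two display planes can be shown by dividing its luminance between the front and rear pixels according to its depth, so the observer fuses the two overlapping pixels into a single point at the intermediate depth:

```python
def split_luminance(luminance, depth, z_front, z_rear):
    """Split one point's luminance between two DFD display planes.

    The closer the point is to the front plane, the larger the share of
    luminance assigned to the front plane; the observer fuses the two
    overlapping pixels into one point at the intermediate depth.
    """
    if not (z_front <= depth <= z_rear):
        raise ValueError("depth must lie between the two display planes")
    # Linear interpolation weight: 0 at the front plane, 1 at the rear plane.
    t = (depth - z_front) / (z_rear - z_front)
    front = (1.0 - t) * luminance
    rear = t * luminance
    return front, rear

# A point midway between the planes contributes equally to both surfaces.
print(split_luminance(200.0, depth=1.5, z_front=1.0, z_rear=2.0))  # (100.0, 100.0)
```

The same scheme applies to the transmissive type, with transmittance taking the place of luminance.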
- as a method of displaying a three-dimensional image of the object other than the display method using the DFD, there is also a method of displaying, on a single screen such as a liquid crystal display, two images having a parallax corresponding to the distance between the left and right eyes of the observer.
- if the three-dimensional shape of the object is known, for example because it was generated by computer graphics, the respective images can be generated using that model. On the other hand, when the three-dimensional shape of the object is not known, the three-dimensional shape of the object, that is, a geometric model, must be obtained before generating each of the images.
- the obtained geometric model of the subject is expressed, for example, as a set of basic figures such as polygons or voxels.
- there are various methods for obtaining a geometric model of the subject based on the plurality of images, and they have been studied extensively in the field of computer vision under the name Shape from X.
- a typical model acquisition method among the Shape from X techniques is the stereo method (see, for example, Reference 2: Takeo Kanade et al.: "Virtualized Reality: Constructing Virtual Worlds from Real Scenes").
- in the stereo method, a geometric model of the subject is obtained based on a plurality of images of the subject taken from different viewpoints.
- the distance from the reference viewpoint for acquiring the model to each point on the subject is obtained by, for example, corresponding-point matching, that is, matching of points (pixels) across the images, together with the principle of triangulation.
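The triangulation step can be illustrated for the simplest rectified two-camera case (an illustrative sketch, not the patent's formulation; the patent does not restrict the camera arrangement). Once corresponding-point matching has found how far apart a scene point appears in the two images, depth follows from similar triangles:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth for a rectified stereo pair.

    If a scene point appears `disparity_px` pixels apart in the left and
    right images of two parallel cameras with focal length `focal_px`
    (in pixels) and baseline `baseline_m` (in metres), similar triangles
    give Z = f * B / d.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# f = 800 px, baseline = 0.1 m, disparity = 16 px -> depth = 5 m
print(depth_from_disparity(800.0, 0.1, 16.0))  # 5.0
```

Note that the reliability of this depth hinges entirely on the corresponding-point matching, which is exactly the weakness discussed below.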
- what the stereo method yields directly, however, is not a geometric model of the subject but a point cloud on the subject's surface. To obtain a geometric model of the subject, it is therefore necessary to determine structural information on how the points in the point cloud are connected to one another and what kind of surfaces they form (see, for example, Reference 3: Katsushi Ikeuchi: "Modeling of Real Objects from Images," Journal of the Robotics Society of Japan, Vol.16, No.6, pp.763-766, 1998).
- to obtain this structural information, the device that generates the image must perform complicated processing such as fitting of the shape of the subject or statistical processing, so high computing power is required.
- another typical model acquisition method that, like the stereo method, obtains a geometric model of the subject from images taken from a plurality of viewpoints is the Shape from Silhouette method, which determines the region occupied by the subject in space from the contours (silhouettes) of the subject on the images (see, for example, Reference 4: Potmesil, M.: "Generating Octree Models of 3D Objects from their Silhouettes in a Sequence of Images," CVGIP 40, pp.1-29, 1987).
- the geometric model of the subject obtained by the Shape from Silhouette method is often represented as a collection of small cubes called voxels.
- however, when the geometric model of the subject is represented by voxels, the amount of data required to represent the three-dimensional shape of the subject becomes enormous, so high computing power is required to obtain a geometric model of the subject using the Shape from Silhouette method.
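The Shape from Silhouette principle can be sketched as voxel carving (an illustrative sketch under simplifying assumptions; the `project` callables and the orthographic toy view below are hypothetical, not from the patent). A voxel survives only if its projection lies inside the silhouette in every view, which is also why only a convex-ish over-approximation of the subject, the visual hull, is recovered:

```python
def carve(voxels, views):
    """Keep only the voxels whose projection falls inside every silhouette.

    `views` is a list of (project, silhouette) pairs: `project` maps a
    3-D voxel centre to integer pixel coordinates, and `silhouette` is a
    2-D boolean mask.  A voxel outside any one silhouette is carved away,
    so the result is the visual hull of the subject.
    """
    kept = []
    for v in voxels:
        inside_all = True
        for project, silhouette in views:
            u, w = project(v)
            if not (0 <= w < len(silhouette) and 0 <= u < len(silhouette[0])
                    and silhouette[w][u]):
                inside_all = False
                break
        if inside_all:
            kept.append(v)
    return kept

# Toy example: one orthographic view looking down the z axis.
mask = [[False, True], [False, True]]      # only the x == 1 column is "subject"
view = (lambda v: (v[0], v[1]), mask)      # project (x, y, z) -> (x, y)
voxels = [(0, 0, 0), (1, 0, 0), (1, 1, 0)]
print(carve(voxels, [view]))               # [(1, 0, 0), (1, 1, 0)]
```

Even this toy version makes the cost visible: the voxel list grows cubically with resolution, which is the data-volume problem noted above.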
- in contrast, there is a method in which projection planes of a multilayer structure are set, and each cut-out partial image (texture image) is attached to the projection plane corresponding to the distance of the object shown in that texture image, to obtain a three-dimensional visual effect. This method has the advantage that the graphics hardware mounted on an ordinary personal computer can perform the processing at sufficiently high speed and can handle the data easily.
- however, with the method of obtaining a geometric model of the subject by the stereo method, it is not always possible to obtain reliable distance information at every point on the subject, depending on the shape of the subject and its surface texture, and the result is easily affected by the environment around the subject (see, for example, Reference 7: Masatoshi Okutomi: "Why is stereo difficult?," Journal of the Robotics Society of Japan, Vol.16, No.6, pp.39-43, 1998).
- the Shape from Silhouette method, in principle, obtains a geometric model of the subject on the assumption that the subject is convex. For this reason, if the subject is entirely or partially concave, there is a problem that a correct model of the subject cannot be obtained.
- moreover, accurately separating the background on an image from the contour of the subject remains difficult, and is still a major research challenge in the field of computer vision.
- a geometric model of the subject obtained by the Shape from Silhouette method from such inaccurate contours is therefore not sufficiently reliable, and an image generated from it has the problem that its image quality is not sufficiently satisfactory.
- the method of attaching texture images to multilayer projection planes assumes that the depth value given to each texture image is known, that is, that the shape of the subject has been obtained accurately. Therefore, if the shape of the subject is not known, a geometric model of the subject must first be obtained, and if there is a portion whose shape is estimated with low reliability, the texture image may be pasted on an incorrect projection plane, and the generated image may be significantly degraded.
- in addition, although the process of pasting images on the projection planes of the multilayer structure is fast, obtaining the depth values is difficult and requires high processing capability.
- an object of the present invention is to provide a technique that, when the three-dimensional shape of a subject is obtained from a plurality of images to generate an image of the subject, can reduce the conspicuous deterioration of image quality in portions where the estimate of the subject's shape is unreliable.
- another object of the present invention is to provide a technique that, when the three-dimensional shape of a subject is obtained from a plurality of images to generate an image of the subject, can generate an image with little deterioration of image quality in a short time even on an apparatus with low processing performance.
- to achieve these objects, the present invention provides a virtual viewpoint image generation method comprising a step of acquiring a plurality of images of a subject taken by a plurality of cameras, a step of determining a virtual viewpoint, which is the position from which the subject is viewed, and a step of generating, based on the acquired images of the subject, a virtual viewpoint image, which is the image obtained when the subject is viewed from that virtual viewpoint.
- the step of generating the image includes: step 1 of setting projection planes having a multilayer structure; step 2 of finding, for each projection point on the projection planes, the corresponding points on the images of the subject; step 3 of determining the color information or luminance information of each projection point based on the color information or luminance information of its corresponding points; step 4 of calculating, for the plurality of projection points that overlap when viewed from a certain reference viewpoint in space, the degree of possibility that the subject exists at the distance corresponding to the position of each projection point, based on the degree of correlation of the corresponding points or their neighborhoods; step 5 of determining the color information or luminance information of each pixel in the virtual viewpoint image by mixing the color information or luminance information of the projection points that overlap when viewed from the virtual viewpoint, at a ratio according to the degree of possibility that the subject exists; and step 6 of repeating steps 1 to 5 for all points corresponding to the pixels of the virtual viewpoint image.
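The mixing of step 5 can be sketched as follows (an illustrative Python sketch; the function name and the RGB-tuple representation are assumptions, not the patent's notation). Along one line of sight from the virtual viewpoint, the colors of the overlapping projection points are blended with weights proportional to the probability that the subject's surface exists at each point:

```python
def mix_color(colors, probabilities):
    """Blend the colours of the projection points that overlap on one
    line of sight, each weighted by the probability that the subject's
    surface exists there (step 5 of the generation procedure).

    Because no single depth is committed to, an unreliable shape
    estimate produces a soft blend rather than a conspicuous artefact.
    """
    total = sum(probabilities)
    if total == 0:
        return (0.0, 0.0, 0.0)
    weights = [p / total for p in probabilities]
    # Per-channel weighted sum over the overlapping projection points.
    return tuple(
        sum(w * c[ch] for w, c in zip(weights, colors))
        for ch in range(3)
    )

# Two overlapping projection points; the one twice as likely to contain
# the surface contributes twice as much colour to the pixel.
pixel = mix_color([(255, 0, 0), (0, 0, 255)], [0.6, 0.3])
```

This is the key difference from committing to one depth per pixel: where the estimate is confident the blend degenerates to a single projection point's color, and where it is not, the error is spread gently across depths.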
- the image generation method of the present invention may also comprise a step of acquiring images of the subject from a plurality of different viewpoints, a step of acquiring the three-dimensional shape of the subject from the plurality of images, and a step of generating, based on the three-dimensional shape of the subject, an image of the subject as viewed from the observer's viewpoint, wherein the step of acquiring the three-dimensional shape of the subject includes setting projection planes of a multilayer structure in a virtual three-dimensional space, determining a reference viewpoint for acquiring the three-dimensional shape of the subject, and finding the corresponding points corresponding to each projection point, which is a point on the projection planes.
- the step of calculating the degree of correlation includes preparing a plurality of camera sets, each a combination of several viewpoints selected from the plurality of viewpoints, and calculating a degree of correlation from the corresponding points on the images included in each camera set.
- the step of determining the existence probability includes calculating an existence probability based on the degree of correlation of each projection point obtained for each camera set, and determining the existence probability of each projection point by integrating the existence probabilities obtained for the respective camera sets.
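The camera-set procedure can be sketched as below (an illustrative sketch only: using the variance of corresponding-point grey values as the correlation degree and averaging the per-set distributions are both assumed choices, since the text does not fix a specific correlation measure or integration rule):

```python
def existence_from_sets(corresponding_colors_per_set):
    """Estimate existence probabilities for the projection points along
    one line of sight, using several camera sets and integrating the
    per-set results.

    Each entry of `corresponding_colors_per_set` is one camera set: a
    list (over projection points) of the grey values of that point's
    corresponding points in the set's images.  Small variance means the
    images agree, so the surface is likely at that depth.
    """
    per_set_probs = []
    for per_point_colors in corresponding_colors_per_set:
        variances = []
        for colors in per_point_colors:
            mean = sum(colors) / len(colors)
            variances.append(sum((c - mean) ** 2 for c in colors) / len(colors))
        # Turn agreement into a probability distribution over the depths.
        scores = [1.0 / (1.0 + v) for v in variances]
        total = sum(scores)
        per_set_probs.append([s / total for s in scores])
    # Integrate the camera sets by averaging their distributions, so a
    # single occluded set cannot dominate the final estimate.
    n_sets = len(per_set_probs)
    n_points = len(per_set_probs[0])
    return [sum(ps[i] for ps in per_set_probs) / n_sets for i in range(n_points)]

sets = [
    [[100, 100], [0, 200]],   # camera set 1: point 0 agrees, point 1 does not
    [[101, 99], [10, 190]],   # camera set 2: same tendency
]
probs = existence_from_sets(sets)
```

Using several small camera sets rather than all cameras at once is what lets a projection point occluded in some views still receive a sensible probability from the sets that do see it.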
- the present invention also provides an image generation method comprising a step of acquiring a plurality of images obtained by photographing a subject while changing the focusing distance, a step of setting a virtual viewpoint, which is the viewpoint from which the subject shown in the plurality of images is viewed, a step of acquiring the three-dimensional shape of the subject from the plurality of images, and a step of generating, based on the acquired three-dimensional shape of the subject, an image of the subject viewed from the virtual viewpoint.
- the step of acquiring the three-dimensional shape of the subject includes a step of determining the color information or luminance information of a projection point from the color information or luminance information of the corresponding points on the acquired images, a step of determining the focusing degree of the projection point from the focusing degrees of those corresponding points, and, for the plurality of projection points that overlap when viewed from the reference viewpoint, a step of determining, based on the focusing degree of each projection point, the existence probability, which is the probability that the surface of the subject exists at the distance corresponding to the position of that projection point.
- the step of generating the image of the subject viewed from the virtual viewpoint determines the color information or luminance information of each point on the image to be generated by mixing the color information or luminance information of the projection points that overlap when viewed from the virtual viewpoint, at a ratio according to the existence probability.
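The focusing-degree variant can be sketched as follows (an illustrative sketch: the Laplacian-based local-contrast measure is one common focus measure, not one mandated by the text, and the helper names are assumptions). A projection point whose corresponding image patch is sharp in the image focused near its depth gets a high focusing degree, and the focusing degrees along a line of sight are normalized into existence probabilities:

```python
def focus_degree(patch):
    """Focusing degree of a small grey-level patch, measured as the mean
    absolute Laplacian response: an in-focus surface has sharp detail
    and therefore a strong response.
    """
    h, w = len(patch), len(patch[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (patch[y - 1][x] + patch[y + 1][x]
                   + patch[y][x - 1] + patch[y][x + 1]
                   - 4 * patch[y][x])
            total += abs(lap)
    return total / ((h - 2) * (w - 2))

def existence_from_focus(focus_degrees):
    """Normalize the focusing degrees of the projection points that
    overlap from the reference viewpoint into existence probabilities."""
    total = sum(focus_degrees)
    if total == 0:
        return [1.0 / len(focus_degrees)] * len(focus_degrees)
    return [f / total for f in focus_degrees]

sharp = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]          # strong detail: in focus
flat = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]    # no detail: defocused
probs = existence_from_focus([focus_degree(sharp), focus_degree(flat)])
```

Note that this variant needs only a single camera position with a varying focusing distance, which is what allows the small photographing device mentioned in the effects below.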
- the present invention also includes a step of acquiring a plurality of images obtained by photographing a subject under different conditions, a step of acquiring the three-dimensional shape of the subject from the plurality of images, and a step of generating, based on the three-dimensional shape of the subject, an image of the subject viewed from the observer's viewpoint, wherein the step of acquiring the three-dimensional shape of the subject sets projection planes of a multilayer structure in a virtual three-dimensional space.
- the step of determining, for the plurality of projection points that overlap when viewed from the reference viewpoint, the existence probability, which is the probability that the surface of the subject exists at each projection point, includes calculating an evaluation reference value for each projection point from the image information of its corresponding points, performing statistical processing of the evaluation reference values of the projection points, and calculating the existence probability of each projection point based on the evaluation reference values obtained by the statistical processing.
- the present invention further provides a three-dimensional image display method comprising a step of acquiring a plurality of images obtained by photographing a subject under different conditions, a step of acquiring the three-dimensional shape of the subject from the plurality of images, a step of setting the position of the viewpoint from which an observer views a plurality of image display surfaces located at different depth positions, a step of generating, based on the acquired three-dimensional shape of the subject, the two-dimensional images to be displayed on the respective image display surfaces, and a step of presenting a three-dimensional image of the subject by displaying the generated two-dimensional images on the respective display surfaces.
- the step of presenting the three-dimensional image displays the color information or luminance information of each display point at a luminance according to its existence probability.
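The display step can be sketched as follows (an illustrative sketch; the function name and the scalar-luminance representation are assumptions). Each surface point's luminance is distributed over the display surfaces in proportion to its existence probability on each surface:

```python
def plane_luminances(color_luminance, existence_probs):
    """Distribute one surface point's luminance over the DFD display
    surfaces in proportion to its existence probability on each surface.

    Where the shape estimate is confident, almost all luminance falls on
    one surface and the point appears sharply at that depth; where it is
    uncertain, the luminance is spread out and the point appears at a
    softly fused depth instead of at a single wrong one.
    """
    return [color_luminance * p for p in existence_probs]

# Probability 0.5 on the front surface, 0.25 on each surface behind it.
print(plane_luminances(200.0, [0.5, 0.25, 0.25]))  # [100.0, 50.0, 50.0]
```

This is how the probabilistic shape representation carries through to the display: the low-reliability portions are rendered vaguely in depth rather than incorrectly.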
- according to the present invention, when an image of a subject is generated by acquiring the three-dimensional shape of the subject from a plurality of images, conspicuous deterioration of image quality in portions where the estimate of the subject's shape is unreliable can be reduced. Further, even with a device of low processing performance, an image with little deterioration of image quality can be generated in a short time. In addition, the photographing device used to capture the images for obtaining the geometric model of the subject can be made small, and the device configuration can be simplified.
- FIG. 1 is a diagram for explaining a problem of a conventional virtual viewpoint image.
- FIG. 2 is a schematic diagram for explaining the principle of a virtual viewpoint image generation method according to the first embodiment.
- FIG. 3 is a schematic diagram for explaining the principle of the virtual viewpoint image generation method according to the first embodiment, and is a diagram illustrating an example of a projection plane group, a camera, a reference viewpoint, a projection point, and a corresponding point.
- FIG. 4 is a schematic diagram for explaining the principle of the virtual viewpoint image generation method according to the first embodiment, and is a diagram illustrating an example of a mixing process according to the transparency of a projection point.
- FIG. 5 is a schematic diagram for explaining the principle of the virtual viewpoint image generation method according to the first embodiment, and is a diagram showing an example of a subject, a projection plane group, a reference viewpoint, a virtual viewpoint, and projection points.
- FIG. 6 is a schematic diagram showing a schematic configuration of a virtual viewpoint image generation device of Example 1-1, and is a block diagram showing a configuration inside the image generation device.
- FIG. 7 is a schematic diagram illustrating a schematic configuration of a virtual viewpoint image generation device according to Example 1-1, and is a diagram illustrating a configuration example of a system using the image generation device.
- FIG. 8 is a schematic diagram for explaining a mathematical model of a virtual viewpoint image generation method using the virtual viewpoint image generation device of Embodiment 1-1, and is a diagram illustrating an example of projection conversion.
- FIG. 9 is a schematic diagram for explaining a mathematical model of a virtual viewpoint image generation method using the virtual viewpoint image generation device of Embodiment 1-1, and is a diagram illustrating an example of coordinate transformation.
- FIG. 10 is a schematic diagram for explaining a virtual-viewpoint image generation processing procedure according to the embodiment 1-1.
- FIG. 11 is a schematic diagram for explaining the virtual viewpoint image generation processing procedure of Embodiment 1-1, and is a specific flowchart of the step of generating the virtual viewpoint image.
- FIG. 12 is a schematic diagram for explaining the virtual viewpoint image generation processing procedure of Embodiment 1-1, and is a diagram illustrating an example of a method of setting the projection planes.
- FIG. 13 is a schematic diagram for explaining the virtual viewpoint image generation processing procedure of Embodiment 1-1, and is a diagram showing an example of a projection point, a projection point sequence, and a set of projection point sequences.
- FIG. 14 is a schematic diagram for explaining the virtual viewpoint image generation processing procedure of Embodiment 1-1, and is a diagram illustrating an example of the angle formed by the reference viewpoint, a projection point, and a camera position, for explaining the color information mixing process.
- FIG. 15 is a schematic diagram for explaining the virtual viewpoint image generation processing procedure of Embodiment 1-1, and is a diagram showing an example of the corresponding point matching process.
- FIG. 16 is a schematic diagram for explaining the virtual viewpoint image generation processing procedure of Embodiment 1-1, and is a diagram for explaining the rendering process.
- FIG. 17 is a schematic diagram for explaining the virtual viewpoint image generation processing procedure of Embodiment 1-1, and is a diagram illustrating an example of a generated virtual viewpoint image.
- FIG. 18 is a schematic diagram showing an application example of a system to which the virtual viewpoint image generation device of Embodiment 1-1 is applied.
- FIG. 19 (a) is a flowchart showing the process that characterizes Embodiment 1-2, and FIG. 19 (b) is a flowchart showing an example of the specific processing procedure of the step of determining the transparency information.
- FIG. 20 is a schematic diagram for explaining the virtual viewpoint image generation method of Embodiment 1-3, and is a diagram showing an example of a projection plane group, a reference viewpoint, a virtual viewpoint, and projection points.
- FIG. 21 is a schematic diagram for explaining the principle of the image generation method according to the second embodiment, and is a diagram for explaining the concept of the generation method.
- FIG. 22 is a schematic diagram for explaining the principle of the image generation method according to the second embodiment, and is a diagram expressing FIG. 21 two-dimensionally.
- FIG. 23 is a diagram for explaining how to determine the degree of correlation of corresponding points.
- FIG. 24 is a view for explaining points that are problematic when calculating the degree of correlation of corresponding points.
- FIG. 25 is a schematic diagram for explaining the principle of the image generation method according to the second embodiment, and is a diagram for explaining a method for solving the problem when obtaining the correlation.
- FIG. 26 is a diagram illustrating an example of a method for improving the accuracy of the existence probability.
- FIG. 27 is a schematic diagram for explaining the principle of the image generation method according to the second embodiment.
- FIG. 28 is a schematic diagram for explaining the principle of the image generation method according to the second embodiment.
- FIG. 29 is a schematic diagram for explaining the image generation method of the embodiment 2-1 and is a flowchart showing an example of an overall processing procedure.
- FIG. 30 is a schematic diagram for explaining the image generation method of Embodiment 2-1, and is a flowchart showing an example of the processing procedure of the step in FIG. 29 of determining the color information and the existence probability of a projection point.
- FIG. 31 is a schematic diagram for explaining the image generation method of Embodiment 2-1, and is a flowchart showing an example of the step of determining the existence probability.
- FIG. 32 is a schematic diagram for explaining the image generation method of the embodiment 2-1 and is a diagram showing a setting example of a camera set.
- FIG. 33 is a schematic diagram for explaining the image generation method of the embodiment 2-1 and is a diagram for explaining a method of converting information on a projection surface into information on a display surface.
- FIG. 34 is a diagram for explaining a method of converting information on the projection surface into information on the display surface.
- FIG. 35 is a block diagram showing a configuration example of an image generation device to which the image generation method of Embodiment 2-1 is applied.
- FIG. 36 is a diagram showing a configuration example of an image display system using an image generation device to which the image generation method of Embodiment 2-1 is applied.
- FIG. 37 is a diagram showing another example of the configuration of an image display system using an image generation device to which the image generation method of Embodiment 2-1 is applied.
- FIG. 38 is a schematic diagram for explaining the image generation method of the embodiment 2-2, and is a flowchart showing an example of the entire processing procedure.
- FIG. 39 is a schematic diagram for explaining the image generation method of the embodiment 2-2, and is a diagram for explaining the principle of rendering.
- FIG. 40 is a schematic diagram for explaining the image generation method of Embodiment 2-2, and is a diagram for explaining a problem in the image generation method of the second embodiment.
- FIG. 41 is a schematic diagram for explaining the image generation method of Embodiment 2-2, and is a diagram for explaining a method of solving a problem in the image generation method of the second embodiment.
- FIG. 42 is a schematic diagram for explaining the image generation method of the embodiment 2-2, and is a flowchart showing an example of a processing procedure for converting the existence probability into transparency.
- FIG. 43 is a schematic diagram for explaining the principle of the image generation method according to the third embodiment, and is a diagram illustrating a setting example of a projection plane and a reference viewpoint.
- FIG. 44 is a schematic diagram for explaining the principle of the image generation method according to the third embodiment, and is a diagram illustrating a setting example of a projection plane and a reference viewpoint.
- FIG. 45 is a schematic diagram for explaining the principle of the image generation method according to the third embodiment, and is a diagram for explaining a method of determining the color information and the focusing degree of a projection point.
- FIG. 46 is a schematic diagram for explaining the principle of the image generation method according to the third embodiment, and is a diagram for explaining a method of determining the existence probability of a projection point.
- FIG. 47 is a schematic diagram for explaining the principle of the image generation method according to the third embodiment, and is a diagram for explaining a method for determining the existence probability of a projection point.
- FIG. 48 is a schematic diagram for explaining the principle of the image generation method according to the third embodiment, and is a diagram for explaining a method for determining the existence probability of a projection point.
- FIG. 49 is a schematic diagram for explaining the principle of the image generation method according to the third embodiment, and is a diagram for explaining a method of generating an image viewed from a virtual viewpoint.
- FIG. 50 is a schematic diagram for explaining the principle of the image generation method according to the third embodiment, and is a diagram for explaining a problem in the image generation method of the present invention.
- FIG. 51 is a schematic diagram for explaining the principle of the image generation method according to the third embodiment, and is a diagram for explaining a method for solving a problem in the image generation method of the present invention.
- FIG. 52 is a schematic diagram for explaining a mathematical model of the image generation method according to the third embodiment, and is a diagram showing a relationship among projection points, corresponding points, and points on an image to be generated.
- FIG. 53 is a schematic diagram for explaining a mathematical model of an image generating method according to the third embodiment, and is a diagram for explaining a method of converting points in space and pixels on an image.
- FIG. 54 is a schematic diagram for explaining the image generating method of the embodiment 3-1 and is a flowchart showing a procedure for generating an image.
- FIG. 55 is a schematic diagram for explaining the image generation method of the embodiment 3-1 and is a diagram for explaining a method of setting a projection point sequence.
- FIG. 56 is a schematic diagram for explaining the image generation method of the embodiment 3-1 and is a flowchart showing a specific example of the process of step 10305 in FIG. 54.
- FIG. 57 is a schematic diagram for explaining the image generation method of Embodiment 3-1, and is a diagram for explaining the rendering method.
- FIG. 58 is a schematic diagram showing a schematic configuration of an apparatus for generating an image by the image generation method of Embodiment 3-1 and is a block diagram showing a configuration of the apparatus.
- FIG. 59 is a view for explaining a configuration example of a subject image photographing means in Embodiment 3-1.
- FIG. 60 is a view for explaining a configuration example of a subject image photographing means in Embodiment 3-1.
- FIG. 61 is a diagram for explaining a configuration example of a subject image photographing means according to Embodiment 3-1.
- FIG. 62 is a schematic diagram showing a schematic configuration of an image generation system using the image generation device of Embodiment 3-1 and is a diagram showing a configuration example of the image generation system.
- FIG. 63 is a schematic diagram showing a schematic configuration of an image generation system using the image generation device of Embodiment 3-1 and is a diagram showing another configuration example of the image generation system.
- FIG. 64 is a flowchart showing a process of a virtual viewpoint image generation method according to Embodiment 3-2.
- FIG. 65 is a schematic diagram for explaining another generation method in the image generation method according to the third embodiment.
- FIG. 66 is a schematic diagram for explaining the image generation method of Embodiment 4-1 according to the fourth embodiment, and is a flowchart showing an example of the overall processing procedure.
- FIG. 67 is a schematic diagram for explaining the image generation method of Embodiment 4-1, and is a diagram showing an example of a method of setting the projection planes.
- FIG. 68 is a schematic diagram for explaining the image generation method of Embodiment 4-1, and is a diagram showing an example of a method of setting the projection planes.
- FIG. 69 is a schematic diagram for explaining the image generation method of Embodiment 4-1, and is a diagram for explaining a method of setting a projection point sequence.
- FIG. 70 is a schematic diagram for explaining the image generation method of Embodiment 4-1, and is a flowchart showing an example of the processing procedure of the step of determining the color information and the existence probability of a projection point.
- FIG. 71 is a schematic diagram for explaining the image generation method of Embodiment 4-1, and is a diagram for explaining a method of determining the existence probability.
- FIG. 72 is a schematic diagram for explaining the image generation method of Embodiment 4-1, and is a diagram for explaining a method of determining the existence probability.
- FIG. 73 is a schematic diagram for explaining the image generation method of Embodiment 4-1, and is a diagram for explaining a method of determining the existence probability.
- FIG. 74 is a schematic diagram for explaining the image generation method of the embodiment 4-1 and is a diagram for explaining a method of determining the existence probability.
- FIG. 75 is a schematic diagram for explaining an image generation method according to Example 4-1 and is a diagram for explaining a method for generating a two-dimensional image to be displayed on each image display surface.
- FIG. 76 is a schematic diagram for explaining the image generation method of Embodiment 4-1 and is a diagram for explaining a method of generating a two-dimensional image to be displayed on each image display surface.
- FIG. 77 is a schematic diagram for explaining the image generation method of Embodiment 4-1 and is a diagram for explaining a method of generating a two-dimensional image to be displayed on each image display surface.
- FIG. 78 is a schematic diagram for explaining the image generation method of Example 4-2, and is a diagram showing the relationship between projection points and corresponding points.
- FIG. 79 is a schematic diagram for explaining the image generation method of Example 4-2, and is a flowchart showing an example of steps for determining color information and existence probability of a projection point.
- FIG. 80 is a schematic diagram for explaining the image generation method of Example 4-2, and is a diagram for explaining how to determine the existence probability.
- FIG. 81 is a schematic diagram for explaining the image generation method of Example 4-2, and is a diagram for explaining how to determine the existence probability.
- FIG. 82 is a schematic diagram for explaining the arbitrary viewpoint image generation method of Example 4-3, and is a flowchart showing an example of the overall processing procedure.
- FIG. 83 is a schematic diagram for explaining the arbitrary viewpoint image generation method of Example 4-3, and is a diagram for explaining the principle of rendering.
- FIG. 84 is a flowchart showing an example of a processing procedure for converting the existence probability to transparency in Example 4-3.
- FIG. 85 is a schematic diagram showing a schematic configuration of an image generation device according to Example 4-4.
- FIG. 86 is a schematic diagram showing a schematic configuration of the image generation device of Embodiment 4-4.
- FIG. 87 is a schematic diagram showing a schematic configuration of an image generation device of Embodiment 4-4, and is a diagram showing a configuration example of an image display system using the image generation device.
- FIG. 88 is a schematic diagram showing a schematic configuration of an image generation device of Embodiment 4-4, and is a diagram showing a configuration example of an image display system using the image generation device.
- FIG. 89 is a schematic diagram showing a schematic configuration of an image generation device of Embodiment 4-4, and is a diagram showing a configuration example of an image display system using the image generation device.
- FIG. 90 is a schematic diagram for explaining the three-dimensional image display method of Example 5-1 according to the fifth embodiment, and is a flowchart showing an example of the overall processing procedure.
- FIG. 91 is a schematic diagram for explaining the three-dimensional image display method of Example 5-1 and is a diagram showing an example of a method of setting a projection plane.
- FIG. 92 is a schematic diagram for explaining the three-dimensional image display method of Example 5-1 and is a diagram showing an example of a method of setting a projection plane.
- FIG. 93 is a schematic diagram for explaining the three-dimensional image display method of Example 5-1 and is a diagram for explaining a method of setting projection points.
- FIG. 94 is a schematic diagram for explaining the three-dimensional image display method of the embodiment 5-1 and is a flowchart showing an example of a processing procedure of a step of determining color information and existence probability of a projection point.
- FIG. 95 is a schematic diagram for explaining the three-dimensional image display method of Example 5-1 and is a diagram for explaining a method of determining the existence probability.
- FIG. 96 is a schematic diagram for explaining the three-dimensional image display method of Example 5-1 and is a diagram for explaining a method of determining the existence probability.
- FIG. 97 is a schematic diagram for explaining the three-dimensional image display method of Example 5-1 and is a diagram for explaining a method of determining the existence probability.
- FIG. 98 is a schematic diagram for explaining the three-dimensional image display method of Example 5-1 and is a diagram for explaining a method of generating a two-dimensional image to be displayed on each image display surface.
- FIG. 99 is a schematic diagram for explaining the three-dimensional image display method of Example 5-1 and is a diagram for explaining a method of generating a two-dimensional image to be displayed on each image display surface.
- FIG. 100 is a schematic diagram for explaining the three-dimensional image display method of Example 5-1 and is a diagram for explaining a method of generating a two-dimensional image to be displayed on each image display surface.
- FIG. 101 is a schematic diagram for explaining the three-dimensional image display method of Embodiment 5-2, and is a diagram showing the relationship between projection points and corresponding points.
- FIG. 102 is a schematic diagram for explaining the three-dimensional image display method of Embodiment 5-2, and is a flowchart showing an example of steps for determining color information and existence probability of a projection point.
- FIG. 103 is a schematic diagram for explaining the three-dimensional image display method of Example 5-2, and is a diagram for explaining how to determine the existence probability.
- FIG. 104 is a schematic diagram for explaining the three-dimensional image display method of Example 5-2, and is a diagram for explaining how to determine the existence probability.

Explanation of reference numerals
- 1, 1A, 1B, 1C virtual viewpoint image generation device
- 101 virtual viewpoint determination unit
- 102 subject image acquisition unit
- 103 image generation unit
- 103a projection plane determination unit
- 103b reference viewpoint determination unit
- 103c Texture array securing means
- 103d Corresponding point matching processing means
- 103e Color information determining means
- 103f Existence probability information determining means
- 103g Rendering means
- 104 Generated image output means
- 3 subject photographing means (camera)
- 4 image display means
- 6 virtual viewpoint image
- 7 subject image
- 7A degraded part of the image
- 7B part where the image is missing
- 6, 6A, 6B, 6C image generation device
- 601 subject image acquisition means
- 602 observer viewpoint setting means
- 603 projection plane setting means
- 604 projection plane information storage area securing means
- 605 color information / Existence probability determination means
- 606 Projection plane information-display plane information conversion means
- 607 image output means
- 7, 7A, 7B image display means
- 8, 8A, 8B subject image photographing means
- 9, 9A, 9B reference viewpoint input means
- 2, 2A, 2B, 2C image generation device
- 201 subject image acquisition unit
- 202 observer viewpoint setting unit
- 203 projection plane setting unit
- 204 texture array securing unit
- 205 color information / existence probability determination means
- 206 projection plane information-display plane information conversion means
- 207 image output means
- 208 rendering means
- 3, 3A, 3B image display means
- 4, 4A, 4B subject image photographing means
- 5, 5A, 5B reference viewpoint input means
- The first embodiment mainly corresponds to claims 1 to 11.
- In the present embodiment, an example is shown in which the three primary colors red (R), green (G), and blue (B) are used to represent color information, but a representation using luminance (Y) and color differences (U, V) may also be used.
- FIGS. 2 to 5 are schematic diagrams for explaining the principle of the virtual viewpoint image generation method according to the present invention.
- FIG. 2 shows an example of a projection plane group, a camera, a reference viewpoint, a projection point, and a corresponding point.
- FIGS. 3 (a) and 3 (b) are diagrams showing an example of a graph of the degree of correlation between corresponding points.
- FIG. 4 (a) is a diagram showing an example of a mixing process according to the transparency of projection points.
- FIG. 4 (b) is a diagram expressing the mixing process of color information according to transparency in a color space.
- FIG. 5 is a diagram showing an example of a subject, a projection plane group, a reference viewpoint, a virtual viewpoint, and projection points.
- The virtual viewpoint image generation method includes: step 1 of setting a projection plane group having a multilayer structure; step 2 of obtaining, for each point (projection point) on the projection planes, the corresponding points on the images taken by the plurality of cameras; step 3 of determining the color information of each projection point by mixing the color information of its corresponding points; step 4 of calculating the degree of possibility that the subject exists at each projection point (existence probability information) based on the degree of correlation of the corresponding points or their neighboring areas; step 5 of determining the color information of each pixel of the virtual viewpoint image by mixing, according to the existence probability information, the color information of the projection points that overlap when viewed from the virtual viewpoint; and step 6 of repeating steps 1 to 5 for all points corresponding to the pixels of the virtual viewpoint image.
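As a hedged illustration of step 5 above (a minimal sketch, not the patent's reference implementation), the colors of the projection points that overlap along one ray from the virtual viewpoint can be blended with weights given by the existence probability information:

```python
def blend_along_ray(colors, existence_probs):
    """Blend projection-point colors (step 5) using existence
    probabilities (step 4).  `colors` is a list of (R, G, B) tuples
    ordered along one ray from the virtual viewpoint; the
    probabilities are assumed to sum to 1 along the ray."""
    assert abs(sum(existence_probs) - 1.0) < 1e-9
    r = sum(b * c[0] for b, c in zip(existence_probs, colors))
    g = sum(b * c[1] for b, c in zip(existence_probs, colors))
    bl = sum(b * c[2] for b, c in zip(existence_probs, colors))
    return (r, g, bl)
```

A high-probability projection point dominates the blend; evenly spread probabilities produce the vague drawing described below.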
- Depending on the shooting conditions and on the part of the subject, an estimated value having sufficient reliability may not be obtained in the distance estimation.
- Where the reliability of the estimated value is low, the corresponding position is drawn vaguely so that its contribution to image generation is reduced, which prevents extreme image degradation.
- Conversely, areas where highly reliable distance data are obtained are drawn clearly so as to increase their contribution to image generation.
- In the present invention, the possibility that the subject exists is calculated from the degree of correlation, and a plurality of projection points are drawn with a clarity according to the existence probability information, so that even portions whose estimation reliability is low do not cause extreme degradation.
- Since a plurality of projection points are drawn vaguely in such portions, the noise of the generated image is not conspicuous, and an image that looks better to the observer is obtained.
- Further, the drawing method of the present invention can be implemented simply by using texture mapping, which is a basic technique of computer graphics, and can be processed efficiently by the 3D graphics hardware mounted on a common personal computer, so the computational load is kept as small as possible.
- Furthermore, each projection point on the projection planes may be given a transparency having gradations from transparent to opaque, the transparency at each projection point being calculated from the existence probability information obtained in step 4.
- In this case, the mixing process for obtaining the color information of each pixel at the virtual viewpoint in step 5 is performed sequentially from the projection point far from the virtual viewpoint toward the projection point close to it.
- The color information obtained by the mixing process up to a certain projection point is the internal division, at a ratio according to the transparency, of the color information at that projection point and the color information obtained by the mixing process up to the previous projection point.
- That is, denoting by K_m the color information at the m-th projection point from the far side, by alpha_m its transparency, and by D_m the color information mixed up to the m-th projection point, the mixing follows the recurrence D_m = alpha_m K_m + (1 - alpha_m) D_{m-1}.
- That the result of this mixing is always guaranteed to lie in the color space, as in Equation 5, can be proved by mathematical induction, but a detailed description is omitted.
- Therefore, the color information of the virtual viewpoint always stays within an appropriate color space V.
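This far-to-near internal-division mixing can be sketched as follows (a minimal illustration assuming per-channel scalar colors, with the accumulated color starting at 0 before the farthest point):

```python
def mix_far_to_near(colors, alphas):
    """Sequential mixing from the projection point farthest from the
    virtual viewpoint to the nearest: the result so far is internally
    divided with the current point's color at a ratio given by the
    transparency alpha (D_m = alpha_m * K_m + (1 - alpha_m) * D_{m-1})."""
    d = 0.0
    for k, a in zip(colors, alphas):
        d = a * k + (1.0 - a) * d
    return d
```

Because each step is a convex combination, the result never leaves the range spanned by the input colors, which is the guarantee referred to above.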
- Equation 7 is obtained.
- On the other hand, the color information K_A at a point A on the image plane of the virtual viewpoint P is calculated from the projection points on the straight line PA by adding their color information with weights according to the existence probability information, as in Equation 8 below; with two projection points this reads, for example, K_A = beta_1 K_1 + beta_2 K_2.
- However, such a weighted sum does not by itself guarantee that K_A stays within the effective color space: as in Equation 9, the luminance of each of the (R, G, B) components can fall outside the effective range.
- By contrast, if transparencies are used and the mixing process is performed sequentially from the projection point far from the virtual viewpoint to the projection point close to it, the color information obtained by the mixing up to a certain projection point being the internal division, at a ratio according to the transparency, of the color information at that projection point and the color information obtained by the mixing up to the previous projection point, then K_A is given by Equation 12 below.
- Equation 12 becomes Equation 13 below from Equations 6, 7, 10, and 11, and is a good approximation of the original color information.
- That is, when the color information is calculated by an operation based on the existence probability alone, the result can leave the effective range of color information and a correction process is required; such a correction is not necessary in image generation that converts the existence probability to transparency.
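The conversion from existence probabilities to transparencies can be sketched as follows. The relation assumed here, beta_m = alpha_m * prod_{l>m} (1 - alpha_l) with far-to-near ordering, is consistent with the far-to-near internal-division mixing described above but is not quoted verbatim from the text:

```python
def probability_to_transparency(betas, eps=1e-12):
    """Convert existence probabilities beta_m (ordered far -> near,
    summing to at most 1) into transparencies alpha_m such that
    far-to-near mixing D_m = alpha_m*K_m + (1-alpha_m)*D_{m-1}
    reproduces the weighted sum sum_m beta_m * K_m.
    Solved from the near end: alpha_M = beta_M, then
    alpha_m = beta_m / prod_{l>m} (1 - alpha_l)."""
    M = len(betas)
    alphas = [0.0] * M
    remaining = 1.0  # running product of (1 - alpha_l) for l > m
    for m in range(M - 1, -1, -1):
        alphas[m] = betas[m] / remaining if remaining > eps else 1.0
        remaining *= (1.0 - alphas[m])
    return alphas
```

For example, probabilities (0.25, 0.25, 0.5) from far to near yield transparencies (1.0, 0.5, 0.5); feeding those into the far-to-near mixing recovers the original weights.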
- Alternatively, in step 1 a unique projection plane group may be set for each camera, and in step 3 the color information of a projection point is determined from the camera unique to the projection plane to which that projection point belongs.
- Likewise, the existence probability information in step 4 is calculated, with the viewpoint of the camera unique to the projection plane to which the projection point belongs taken as the reference, using the color information of the corresponding points of the images captured by the cameras, and the mixing of the color information at the virtual viewpoint in step 5 is corrected based on the positional relationship between the virtual viewpoint and each reference viewpoint.
- In this way, a unique projection plane group is set for each camera irrespective of the positional relationship between the cameras; therefore, even if the camera arrangement is complicated or irregular, the image can be generated by a consistent processing method, without the projection plane group setting affecting the image quality.
- Furthermore, a program for executing the virtual viewpoint image generation method according to the first embodiment of the present invention can run on a dedicated device or on a common personal computer, and therefore has a wide application range and high versatility.
- FIGS. 6 and 7 are schematic diagrams showing a schematic configuration of a virtual viewpoint image generation device according to Embodiment 1-1 of the present invention.
- FIG. 6 is a block diagram showing the internal configuration of the image generation device.
- FIG. 7 is a diagram showing a configuration example of a system using the device.
- 1 is a virtual viewpoint image generation device
- 101 is a virtual viewpoint determination unit
- 102 is a subject image acquisition unit
- 103 is an image generation unit
- 103a is a projection plane determination unit
- 103b is a reference viewpoint determination unit.
- 103c is a texture array securing means
- 103d is a corresponding point matching processing means
- 103e is color information determining means
- 103f is existence probability information determining means
- 103g is rendering means
- 104 is generated image output means
- 2 is viewpoint position input means.
- 3 is subject photographing means
- 4 is image display means
- User is a user of the virtual viewpoint image generation device
- Obj is a subject.
- The virtual viewpoint image generating apparatus 1 of Embodiment 1-1 determines the viewpoint (virtual viewpoint) from the parameters input by the user User using the viewpoint position input means 2.
- The virtual viewpoint determination means 101 determines, for example, a position, a direction, and an angle of view as the parameters of the virtual viewpoint.
- The viewpoint position input means 2 is, for example, as shown in FIG. 7, a device such as a mouse which the user User operates to make a selection, or a device such as a keyboard with which the user User directly inputs numerical values; a sensor that detects the position and posture of the user may also be used.
- The parameters can also be provided by another program or via a network.
- The subject image acquiring means 102 may sequentially acquire images of the subject, whose position and posture change from moment to moment, at a fixed interval, for example at 30 Hz; it may acquire a still image of the subject at an arbitrary time; or it may acquire subject images taken in advance by reading them from a recording device. It is desirable that the subject images from the multiple viewpoint positions be captured at the same time by synchronizing all the cameras; however, this is not required if the subject can be regarded as stationary.
- The image generation means 103 includes: projection plane determination means 103a for determining the position and shape of the projection planes used for image generation; reference viewpoint determination means 103b for determining the position of the reference viewpoint; texture array securing means 103c for allocating, in memory, arrays for the texture images to be mapped onto the projection planes; corresponding point matching processing means 103d for associating, among the images of the subject acquired from the plurality of viewpoint positions by the subject image acquiring means 102, the locations where the same region of the subject is photographed; color information determining means 103e for determining the color information in the texture arrays secured by the texture array securing means 103c by mixing the color information of the plurality of subject images; existence probability information determining means 103f for determining, for the texture arrays secured by the texture array securing means 103c, the degree of possibility that the subject exists on the projection plane (existence probability information) based on the result of the corresponding point matching processing means 103d; and rendering means 103g for rendering the projection planes viewed from the virtual viewpoint based on the color information determined by the color information determining means 103e and the existence probability information determined by the existence probability information determining means 103f.
- The arrays secured by the texture array securing means 103c hold color information and existence probability information for each pixel, for example the three primary colors red (R), green (G), and blue (B) and the existence probability information in 8 bits each.
- However, the present invention does not depend on this particular data representation format.
- the image display means 4 is, for example, a CRT (Cathode Ray Tube), an LCD (Liquid Crystal Display), a PDP (Plasma Display Panel) or the like connected to the generated image output means 104 such as a display terminal.
- The image display means 4 may be, for example, a two-dimensional flat display device or a curved display device surrounding the user User. If a display device capable of stereoscopic display is used as the image display means 4, the virtual viewpoint determination means 101 determines two virtual viewpoints corresponding to the left and right eyes of the user User, the image generation means 103 generates the virtual viewpoint images from the two virtual viewpoints, and independent images can then be presented to the left and right eyes of the user. In addition, if a 3D display that can display images with three or more parallaxes is used, images from three or more virtual viewpoints can be generated and stereoscopic images can be presented to one or more users.
- A system using the virtual viewpoint image generation device 1 has, for example, the configuration shown in FIG. 7: upon receiving a virtual viewpoint specified by the user User via the viewpoint position input means 2, the virtual viewpoint image generation apparatus 1 photographs the subject Obj with the subject photographing means (camera) 3 to acquire its images, and then generates an image at the specified viewpoint (virtual viewpoint image) based on the acquired images of the subject.
- the generated virtual viewpoint image is presented to the user User by the image display means 4.
- FIG. 7 shows an example of the implementation of the image generation device according to the present invention, and the claims of the present invention are not necessarily limited to such a configuration.
- the arrangement, form, and implementation are arbitrary without departing from the spirit of the present invention.
- FIGS. 8 and 9 are schematic diagrams for explaining a mathematical model of a virtual viewpoint image generation method using the virtual viewpoint image generation device of Embodiment 1-1, and FIG. 8 is an example of projection transformation.
- FIG. 9 is a diagram showing an example of coordinate conversion.
- Here, C denotes both the camera itself and the center position of the camera, and similarly P denotes both the virtual viewpoint itself and the center position of the virtual viewpoint.
- Although the cameras C are shown arranged in a horizontal line, the present invention is not limited to such an arrangement and is applicable to various arrangements such as a two-dimensional lattice or an arc.
- Similarly, the arrangement of the projection planes L is not necessarily limited to parallel planes.
- However, in Embodiment 1-1 it is assumed that the projection planes L are parallel planes.
- Then, an image at the virtual viewpoint P, where no camera is located, is generated based on the images of the subject Obj obtained at the positions C where the cameras are actually located; basically, parts of the images of the subject are mapped onto the projection planes and rendered from the virtual viewpoint.
- the virtual viewpoint P and the camera C project points in the three-dimensional space onto two-dimensional points on the respective image planes.
- As in Equation 14, the projection from a point (X, Y, Z) in three-dimensional space to a point (x, y) on an image plane can be expressed by a matrix of 3 rows and 4 columns.
- For example, the matrix representing a perspective projection transformation of focal length f about the origin is given by Equation 15, i.e., the 3 x 4 matrix whose rows are (f, 0, 0, 0), (0, f, 0, 0), and (0, 0, 1, 0).
- the image handled by the computer is a so-called digital image, which is represented by a two-dimensional array in the memory.
- the coordinate system (U, V) indicating the position of this array is called the digital image coordinate system.
- For example, one point on a digital image having a size of 640 pixels x 480 pixels is specified by a variable u that takes an integer value from 0 to 639 and a variable v that takes an integer value from 0 to 479.
- The color information at that point is represented by data obtained by quantizing the red (R), green (G), and blue (B) information at that address with, for example, 8 bits each.
- The image coordinates (x, y) as shown in FIG. 9 (a) and the digital image coordinates (u, v) as shown in FIG. 9 (b) have a one-to-one correspondence, for example the relationship shown in Equation 17 below.
- Although u and v take discrete values, continuous values are assumed in the following description unless otherwise specified, and an appropriate discretization process is performed when accessing an array.
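The projection and coordinate conversion above can be sketched as follows. The scale factors ku, kv and the image center (u0, v0) in the second function are illustrative assumptions, not values taken from the text:

```python
def project_point(X, Y, Z, f):
    """Perspective projection of focal length f about the origin:
    the 3x4 matrix [[f,0,0,0],[0,f,0,0],[0,0,1,0]] applied to the
    homogeneous point (X, Y, Z, 1), then divided by the third
    component.  Returns image-plane coordinates (x, y)."""
    return (f * X) / Z, (f * Y) / Z

def image_to_digital(x, y, ku=1.0, kv=1.0, u0=320.0, v0=240.0):
    """One plausible one-to-one affine map between image coordinates
    (x, y) and digital image coordinates (u, v), in the spirit of
    Equation 17; parameter values are assumptions for illustration."""
    return ku * x + u0, kv * y + v0
```

In practice (u, v) would then be rounded to integers when the texture or image array is accessed, as noted above.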
- FIGS. 10 to 17 are schematic diagrams for explaining the virtual viewpoint image generation processing procedure of Embodiment 1-1.
- FIG. 10 is a flowchart of the entire generation processing.
- FIG. 11 is a flowchart of the virtual viewpoint image generation step.
- FIG. 12 is a diagram showing an example of a projection plane setting method.
- FIG. 13 is a diagram showing an example of projection points, a projection point sequence, and a set of projection point sequences.
- FIG. 14 is a diagram showing an example of the angle formed by the reference viewpoint, a projection point, and a camera position, for explaining the color information mixing process.
- FIG. 15 is a diagram illustrating an example of the corresponding point matching process.
- FIG. 16 is a diagram illustrating the rendering process.
- FIG. 17 is a diagram showing an example of the generated virtual viewpoint image.
- First, the parameters of the virtual viewpoint P are determined by the virtual viewpoint determination means 101 based on a request from the user User (step 501).
- In step 501, for example, the position, direction, angle of view, and the like of the virtual viewpoint P are determined.
- an image of the subject Obj taken by the plurality of cameras 3 (C) is acquired by the subject image acquiring means 102 (Step 502).
- Next, an image (virtual viewpoint image) of the subject Obj as viewed from the virtual viewpoint P is generated (step 503).
- step 503 for example, the processing of each step as shown in FIG. 11 is performed to generate a virtual viewpoint image.
- In step 503, first, the position and shape of the projection planes L_j (j ∈ J ≡ {1, 2, ..., M}) of the multilayer structure used for generating the virtual viewpoint image are determined by the projection plane determination means 103a (step 503a).
- As the projection planes L_j in step 503a, for example, projection planes having a planar shape as shown in FIG. 8 are installed in parallel at equal intervals.
- The planar projection planes may also be arranged at distances forming a sequence l_d (d = 1, 2, 3, ...).
- However, this way of setting the projection planes L_j is merely an example; the image generation method of the present invention basically requires only that two or more different projection planes be set, and is not limited to a specific setting method.
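As one illustrative way to place such a projection plane group (the equal-interval spacing and the orientation perpendicular to the z axis are assumptions; any sequence of two or more distinct depths would do):

```python
def set_projection_planes(z_near, z_far, M):
    """Return the depths of M planar projection planes L_j placed in
    parallel at equal intervals between z_near and z_far (one simple
    instance of the projection plane group of step 503a)."""
    if M == 1:
        return [z_near]
    step = (z_far - z_near) / (M - 1)
    return [z_near + j * step for j in range(M)]
```

A non-uniform sequence l_d (for example, denser planes near the expected subject depth) can be substituted without changing the rest of the processing.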
- Next, the reference viewpoint determination means 103b determines the point (reference viewpoint) R that serves as the reference when calculating the degree of possibility that the subject exists at a projection point (existence probability information) used in the subsequent processing (step 503b).
- The position of the reference viewpoint R may be the same as the position of the virtual viewpoint P, or, if there are a plurality of virtual viewpoints, their center of gravity may be used.
- The present invention does not depend on a specific way of taking the reference viewpoint.
- Next, in step 503c, a number of projection points are set on the projection planes.
- The projection points are set so as to lie on a plurality of straight lines passing through the reference viewpoint R, and the projection points on the same straight line are collectively treated as a projection point sequence.
- FIG. 13 shows the projection point sequences obtained by focusing on the straight lines passing through the reference viewpoint R.
- Next, an array for holding the images to be texture-mapped onto the projection planes is secured in the memory of the image generation apparatus by the texture array securing means 103c (step 503d).
- The secured array holds, for each pixel, texture information corresponding to the position of a projection point, namely color information (R, G, B) and existence probability information, for example in 8 bits each.
- In step 503d, the correspondence between the two-dimensional digital coordinates (U, V) of the pixels in the texture array and the three-dimensional coordinates (X, Y, Z) of the projection points is also set.
- For example, the values of (X, Y, Z) may be set as a table for all values of (U, V), or the values of (X_j, Y_j, Z_j) may be set only for representative (U_j, V_j) and the other correspondences obtained by an interpolation process (for example, linear interpolation).
- After the processing of step 503d is completed, the color information and existence probability information of the pixels corresponding to each projection point secured in step 503d are determined based on the images of the subject acquired in step 502. At that time, a double loop is performed in which the projection point sequence S is scanned sequentially and, within it, the projection points T_j are scanned sequentially.
- First, the projection point sequence S to be processed is initialized to its start position (step 503e).
- Then, the projection point T_j to be scanned is initialized to the start position in the projection point sequence S, e.g., j = 1 (step 503f).
- Next, the coordinates (X*, Y*, Z*) of the position of the projection point T_j are obtained, and the positions on the image planes corresponding to the point at (X*, Y*, Z*) when it is photographed by each camera are calculated using the relationships of Equations 14 to 17 (step 503g). At this time, the set of cameras for which the corresponding points are calculated is determined.
- The set of cameras may be all the cameras, or one or more cameras may be selected arbitrarily according to the positions of the virtual viewpoint P, the reference viewpoint R, and the projection point T_j.
- The corresponding point of each camera obtained here is denoted by G_i (i ∈ I), and its digital coordinates by (u_i*, v_i*) (i ∈ I).
- Next, the color information determining means 103e determines the color information at the pixel (U*, V*) of the texture array corresponding to the projection point T_j by mixing the color information at the corresponding points (u_i*, v_i*) (i ∈ I) (step 503h).
- In the mixing process, for example, the average value of the color information of the corresponding points of the cameras is taken.
- Alternatively, let K_T and K_i be the vectors representing the color information (R, G, B) at T_j and at the corresponding point G_i, respectively; if K_T is determined as in Equation 19 below, the contribution of a camera to the mixing process becomes greater the closer its direction is, as seen from the projection point T_j, to the direction of the reference viewpoint R.
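A hedged sketch of such angle-dependent mixing follows. The cosine-based weight is an assumption standing in for Equation 19, which is not reproduced in the text:

```python
import math

def mix_color_by_angle(ref_dir, cam_dirs, colors):
    """Mix corresponding-point colors with weights that grow as the
    angle between the camera direction and the reference-viewpoint
    direction (both seen from the projection point T) shrinks.
    ref_dir / cam_dirs are 3-vectors; colors are (R, G, B) tuples."""
    def norm(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    r = norm(ref_dir)
    ws = []
    for d in cam_dirs:
        dn = norm(d)
        cos = sum(a * b for a, b in zip(r, dn))
        ws.append(max(cos, 0.0))  # clamp cameras facing away to zero
    total = sum(ws) or 1.0
    ws = [w / total for w in ws]
    return tuple(sum(w * c[ch] for w, c in zip(ws, colors)) for ch in range(3))
```

A camera looking along the reference direction receives full weight; a camera at 90 degrees contributes nothing, which reproduces the qualitative behavior described for Equation 19.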
- Next, the corresponding point matching processing means 103d calculates the degree of correlation Q_j of the corresponding points G_i (i ∈ I) of the cameras with respect to the projection point T_j (step 503i).
- The degree of correlation Q_j is computed, for example, as in Equation 20 below, where Q_j takes a positive value.
- In Equation 20, the higher the correlation of the corresponding points, the smaller the value of Q_j.
- In Expression 20, the color information of the projection point and of the corresponding points is compared at only one point each.
- To obtain a more stable degree of correlation, the color information can instead be compared over a plurality of points in the vicinity of the projection point and of each corresponding point.
- In that case, the degree of correlation Q_j over these regions is calculated, for example, by the following Expression 21.
- Here, K*(U, V) denotes the estimated value of the color information at coordinates (U, V) of the texture array, and K_i(u, v) denotes the color information at coordinates (u, v) of the image captured by camera C_i.
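A sum-of-squared-differences form over a 3x3 neighborhood is one simple instance of a region-based degree of correlation of this kind. The sketch below is an assumed Expression-21-style measure, not the patent's exact formula:

```python
import numpy as np

def correlation_degree(tex_patch, cam_patches):
    """Degree of correlation Q over a pixel neighborhood: the sum of
    squared differences between the estimated texture-array colors
    K*(U, V) and each camera's colors K_i(u, v). Smaller Q means the
    corresponding points agree better (an assumed SSD form)."""
    tex = np.asarray(tex_patch, dtype=float)
    q = 0.0
    for patch in cam_patches:
        diff = tex - np.asarray(patch, dtype=float)
        q += float((diff ** 2).sum())
    return q

# 3x3 RGB patches: a uniform gray texture estimate and two camera patches.
flat = np.full((3, 3, 3), 128.0)
good = [np.full((3, 3, 3), 128.0)] * 2           # both cameras agree
bad  = [np.full((3, 3, 3), 128.0), np.full((3, 3, 3), 20.0)]
q_good = correlation_degree(flat, good)          # perfect agreement
q_bad  = correlation_degree(flat, bad)           # one camera disagrees
```

A projection point on the true object surface yields a small Q (here zero for identical patches), while a point off the surface sees inconsistent colors and a large Q.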
- The method of calculating the degree of correlation is not limited to the above; the present invention does not depend on a specific calculation method.
- For the projection point T_j and each corresponding point G_i, for example, the area composed of the corresponding pixel and the eight pixels surrounding it may be taken as the neighboring region Φ and the corresponding region Ψ_i, respectively.
- How the neighboring region Φ and the corresponding regions Ψ_i are determined is likewise not limited to this example.
- After the processing of step 503i is completed, the projection point T_j is updated (step 503j).
- It is then determined whether all projection points on the projection point sequence S have been scanned (step 503k); if scanning is complete, the process proceeds to the next step 503l, and if not, the process returns to step 503g.
- Next, based on the degrees of correlation Q_j calculated in step 503i, the existence probability information determining means 103f determines, for all projection points T_j (j ∈ J) on the straight line passing through the reference viewpoint R, the degree of possibility (existence probability information) β_j that the subject exists at each projection point (step 503l). The existence probability information must satisfy the conditions of the following Expressions 22 and 23.
- The existence probability information β_j (j ∈ J) is obtained by applying, to the degrees of correlation Q_j between the projection points and corresponding points calculated in step 503i, the conversion process represented, for example, by the following Expressions 24 and 25.
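Any conversion that turns small correlation degrees Q_j (good matches) into large probabilities, while keeping each β_j in [0, 1] and making the β_j sum to 1 along the ray, satisfies the stated conditions. The reciprocal form below is one assumed choice; the patent's Expressions 24 and 25 may differ:

```python
def existence_probability(Q):
    """Convert correlation degrees Q_j (smaller = better match) into
    existence probabilities beta_j with 0 <= beta_j <= 1 and
    sum(beta_j) == 1 (the conditions of Expressions 22 and 23).
    The reciprocal weighting here is an assumed conversion."""
    eps = 1e-12                       # guard against division by zero
    inv = [1.0 / (q + eps) for q in Q]
    s = sum(inv)
    return [v / s for v in inv]

# The projection point with the smallest Q gets the largest probability.
beta = existence_probability([10.0, 2.0, 40.0])
```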
- After the processing of step 503l is completed, the projection point sequence S is updated (step 503m), and it is determined whether all projection point sequences have been scanned (step 503n). If all scanning is complete, the process proceeds to the next step 503o; if not, the process returns to step 503f.
- Next, the image viewed from the virtual viewpoint P is rendered according to the existence probability information β_j (step 503o).
- Let the coordinates on the image plane of the virtual viewpoint P be (u, v).
- The color information K* of a pixel p*(u*, v*) on the image plane is then determined by mixing the color information {K_j* | j ∈ J} of the projection point sequence {T_j* | j ∈ J} on the line connecting P and p*, as in Expression 26 below.
- By performing this for every pixel on the image plane, the image at the virtual viewpoint P is obtained.
- If K* is instead calculated as in the following Expression 27, K* is kept within the effective color space even when the positions of the reference viewpoint R and the virtual viewpoint P differ.
- Alternatively, data such as the projection plane configuration, the texture arrays, and the viewpoint P may be passed to a general-purpose graphics library such as OpenGL or DirectX, and the rendering process entrusted to it.
- the virtual viewpoint image generation process (step 503) is completed as described above, and the generated virtual viewpoint image is displayed on the image display means 4 (step 504).
- In the virtual viewpoint image 6 displayed on the image display means 4, as shown for example in FIG. 17, the spots 7A in the subject image 7 where the degree of correlation Q calculated in step 503l is low, that is, where the reliability of the estimate is low, are rendered vaguely and appear blurred.
- Therefore, unlike the conventional virtual viewpoint image 6 shown in FIG. 1, the image does not appear to have missing parts, and the deterioration is of a kind the user hardly notices.
- In step 505, it is determined whether to continue or end the processing; if the processing is to be continued, it is repeated from the first step 501.
- In other words, rather than trying to obtain an accurate geometric model of the subject in every case and at every place as in the conventional means, the present method starts from the premise that, depending on the shooting conditions and the part of the subject, distance estimation cannot always provide a sufficiently reliable estimate.
- Portions for which only a low-reliability estimate is obtained are drawn vaguely to reduce their contribution to image generation,
- while areas for which highly reliable distance data are obtained are drawn clearly to increase their contribution to image generation.
- Moreover, since the virtual viewpoint image generation device 1 of the present embodiment generates the virtual viewpoint image using texture mapping, the load on the device during the image generation processing can be reduced, and the virtual viewpoint image can be generated at high speed.
- The virtual viewpoint image generation device 1 need not be a dedicated device; it can be realized, for example, by a computer and a program.
- In that case, the program can be provided recorded on a recording medium such as a floppy disk or CD-ROM, or provided through a network.
- The configuration of the virtual viewpoint image generation device and the generation method and processing procedure of the virtual viewpoint image described in Embodiment 1-1 are merely examples; the gist of the present invention is that the transparency information of the projection planes forming a multilayer structure is determined according to the reliability of the corresponding regions between images obtained by photographing the subject from a plurality of different viewpoint positions.
- Within a range that does not greatly depart from this gist, the invention is not limited in scope and does not depend on a specific processing method or implementation.
- A system using the virtual viewpoint image generation device 1 is not limited to the one-way system shown in FIG. 7; it can also be applied to a two-way system.
- FIG. 18 is a schematic diagram showing an application example of a system to which the virtual viewpoint image generation device 1 of Embodiment 1-1 is applied.
- As shown in FIG. 18, the virtual viewpoint image generation device 1 of Embodiment 1-1 is suitable for a system such as a videophone or a video conference,
- in which both parties are users, each assumed to be a moving subject, and visual communication is supported by presenting each other's images.
- If the image of UserB seen from the viewpoint desired by UserA is denoted Img[A→B], then Img[A→B] is generated from the image of UserB taken by the subject photographing means (camera) 3B on the UserB side and is presented on the image display means 4A on the UserA side.
- Similarly, Img[B→A] is generated from the image of UserA captured by the subject photographing means (camera) 3A on the UserA side and is presented on the image display means 4B on the UserB side.
- The viewpoint position of each User is input by viewpoint position input means, for example via the transmitting means 201A and 201B and the data receiving units 202A and 202B of a position/posture sensor mounted on the user's head;
- the desired virtual viewpoint is then calculated by automatically following the user's head movement.
- However, the viewpoint input means need not take this form; the same function can also be provided by estimating the position and posture of the head from the images of the user captured by the subject photographing means 3A and 3B.
- Img[A→B] can be generated by either the virtual viewpoint image generation device 1A on the UserA side or the virtual viewpoint image generation device 1B on the UserB side.
- In the former case, the image of UserB captured by the camera 3B is transmitted to the virtual viewpoint image generation device 1A on the UserA side via the network 8; based on that image, the device 1A generates Img[A→B] and presents it on the image display means 4A.
- In the latter case, the virtual viewpoint image Img[A→B] generated on the UserB side is transmitted to the UserA side via the network 8 and presented on the image display means 4A.
- Although the explanation is omitted here, the same applies to Img[B→A].
- Furthermore, the units constituting the image generation unit 103 in FIG. 6 can be divided between the virtual viewpoint image generation device 1A on the UserA side and the virtual viewpoint image generation device 1B on the UserB side.
- For example, the image generation device 1A on the UserA side may implement the projection plane determining means 103a, the reference viewpoint determining means 103b, and the corresponding point matching processing means 103d, while the image generation device 1B on the UserB side implements the texture array securing means 103c, the color information determining means 103e, the existence probability information determining means 103f, and the rendering means 103g.
- Although the explanation is omitted here, the same applies to Img[B→A].
- It is also possible to provide an image generation device 1C, separate from the virtual viewpoint image generation devices 1A and 1B on the UserA and UserB sides, at an arbitrary place on the network 8, and to implement all or part of the image generation means there.
- Although communication between two users, UserA and UserB, has been described, the number of users is not limited to two, and the scheme can be applied to communication among more users.
- In that case, by assuming a virtual space used for communication separately from the real space where the users actually exist, and presenting to each user the images of the other users according to their positional relationship, the users can be given the feeling of sharing the virtual space (cyberspace) on the network.
- FIG. 19 is a schematic diagram for explaining the virtual viewpoint image generation method according to Embodiment 1-2.
- FIG. 19(a) is a flowchart showing the processing that characterizes Embodiment 1-2,
- and FIG. 19(b) is a flowchart illustrating an example of a specific processing procedure of the step of determining the transparency information.
- In Embodiment 1-2, the configuration of the virtual viewpoint image generation device 1 and the overall processing procedure can take the same form as in Embodiment 1-1; only the parts that differ will be described.
- In Embodiment 1-1, a virtual viewpoint image was generated using the existence probability information β_j determined in step 503l as it is; in Embodiment 1-2, as shown in FIG. 19(a), a step 503p of determining the transparency by converting the existence probability information is added after step 503l.
- Also, whereas the texture array secured in step 503d of Embodiment 1-1 stores the color information and the existence probability information, in step 503d of Embodiment 1-2 an array holding the color information and the transparency information is secured.
- The transparency information α_j is calculated from the existence probability information β_j: in step 503l of Embodiment 1-2, the existence probability information is calculated in the same way as in step 503l of Embodiment 1-1, and the transparency information is then calculated from it in the next step 503p.
- Furthermore, in step 503o of Embodiment 1-2, which performs the rendering process, Expressions 28 to 30 below are used instead of Expressions 26 and 27 described in Embodiment 1-1.
- That is, the color information K* of a pixel p*(u*, v*) on the image plane is calculated by successive mixing as in the following Expression 28 (step 5032p),
- updating the value of j to j − 1 at each step (step 5033p).
- Note that if the denominator in this conversion becomes 0 (zero), the value cannot be calculated.
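One way to realize this conversion, sketched below under assumed conventions (probabilities ordered nearest-first and summing to 1), derives transparencies α_j such that standard back-to-front alpha blending reproduces the β-weighted mixing; the division by the remaining probability mass is exactly where a zero denominator can occur, and the sketch pins α to 1 in that case:

```python
def betas_to_alphas(beta):
    """Convert existence probabilities beta_j (nearest projection point
    first, summing to 1) into transparencies alpha_j so that alpha
    blending reproduces the beta-weighted mixing. When the remaining
    probability mass reaches 0 the division cannot be performed (the
    zero-denominator case noted in the text); alpha is then set to 1."""
    alphas, remaining = [], 1.0
    for b in beta:
        alphas.append(b / remaining if remaining > 1e-12 else 1.0)
        remaining -= b
    return alphas

def blend(colors, alphas):
    """Mix back-to-front: D_j = alpha_j * K_j + (1 - alpha_j) * D_(j+1)."""
    D = 0.0
    for c, a in zip(reversed(colors), reversed(alphas)):
        D = a * c + (1.0 - a) * D
    return D

beta = [0.2, 0.5, 0.3]             # nearest to farthest, sums to 1
alphas = betas_to_alphas(beta)     # farthest plane becomes fully opaque
D = blend([100.0, 200.0, 50.0], alphas)
```

The blended result equals the direct probability-weighted sum 0.2·100 + 0.5·200 + 0.3·50, confirming that the two formulations agree when the betas are normalized.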
- As described above, according to the virtual viewpoint image generation method of Embodiment 1-2, as in Embodiment 1-1, a virtual viewpoint image in which partial image deterioration is inconspicuous can be generated easily and at high speed.
- In addition, when the existence probability information is used as it is for image generation, as in Embodiment 1-1, the luminance may increase near occluded regions of the subject if the reference viewpoint and the virtual viewpoint differ; converting the existence probability information into transparency, as in Embodiment 1-2, has the effect of preventing this phenomenon, so a virtual viewpoint image closer to the actual subject, with less image deterioration, can be obtained.
- Furthermore, when the existence probability information is used as it is, correction processing of the color information may be required if the reference viewpoint and the virtual viewpoint differ, whereas in image generation that converts the existence probability information into transparency, as in Embodiment 1-2, such correction is unnecessary, so the image generation processing can be simplified.
- The virtual viewpoint image generation method described in Embodiment 1-2 is only an example; the gist of the present embodiment is to convert the existence probability information into transparency information and generate a virtual viewpoint image, and it does not depend on a specific calculation method or processing procedure within a range that does not greatly depart from this gist.
- The color information described above corresponds to luminance information in the case of a black-and-white image, which can be processed in the same manner.
- FIG. 20 is a schematic diagram for explaining the virtual viewpoint image generation method according to Embodiment 1-3, illustrating an example of projection plane groups, reference viewpoints, a virtual viewpoint, and projection points.
- In Embodiment 1-3, as before, a virtual viewpoint is determined in step 501, and images of the subject are acquired in the next step 502.
- In step 503 of generating the virtual viewpoint image, which is performed next, a projection plane group specific to each camera is set in the process of determining the projection planes (step 503a).
- The projection plane group Λ_i is set, for example, as shown in FIG. 20, for each camera C_i (i ∈ I).
- In addition, in the process of determining the reference viewpoint in step 503b, the reference viewpoint R_i unique to each projection plane group Λ_i is set at the same position as the camera viewpoint C_i.
- After step 503b is completed, the process of step 503c is performed according to the procedure described in Embodiment 1-1. Then, in the next step 503d, each pixel of the digital image photographed by a camera is back-projected onto its projection planes and associated with the pixels of the texture arrays on those planes.
- The conversion of a point (u, v) of the digital image to a point (x, y) on the image plane is expressed, for example, by the above Expression 17,
- and the back projection to a point (X, Y, Z) on the projection plane can be formalized, for example, as follows.
- Here, (X, Y, Z) is the back-projected point on the projection plane.
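Back projection of this kind can be sketched for the simplest case, a camera at the origin looking down the +Z axis with hypothetical intrinsics; the patent uses a general 3-row by 4-column projection matrix, of which this axis-aligned form is a special case:

```python
import numpy as np

def back_project(u, v, z_plane, f=1000.0, cu=320.0, cv=240.0):
    """Back-project digital pixel (u, v) onto the projection plane
    Z = z_plane for a camera at the origin looking down +Z.
    f (focal length in pixels) and (cu, cv) (principal point) are
    assumed illustrative intrinsics, not values from the patent."""
    x = (u - cu) / f          # normalized image-plane coordinates
    y = (v - cv) / f
    # the viewing ray (x*t, y*t, t) meets the plane Z = z_plane at t = z_plane
    return np.array([x * z_plane, y * z_plane, z_plane])

# A pixel 100 columns right of the image center, back-projected to Z = 5.
P = back_project(420.0, 240.0, 5.0)
```

Each texture-array pixel of the plane at Z = z_plane thus inherits the color of exactly one camera pixel, which is why the camera image can later be used as-is as the plane's texture.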
- The processing from step 503e to step 503g is performed according to the procedure described in Embodiment 1-1. Then, in the process of determining the color information in the next step 503h, the color information of the projection points on the projection plane group Λ_i is set using only the color information of the image captured by camera C_i.
- By doing so, the digital image captured by the camera can be used as it is as the color information of the texture arrays on its projection planes.
- The processing from step 503i to step 503n is performed in the same procedure as in Embodiment 1-1.
- Then, the rendering means of step 503o performs the color information mixing processing on all projection points that overlap as viewed from the virtual viewpoint P.
- That is, the color information is mixed for the projection points on the projection plane groups Λ_i that lie on a straight line passing through the virtual viewpoint P.
- If the projection point on the projection plane L_ij is denoted T_ij, the color information at T_ij is denoted K_ij, and the existence possibility information is denoted β_ij, then the color information of the image plane is determined, for example, as follows.
- The color information K* of a pixel p*(u*, v*) on the image plane is determined by mixing the color information of the projection point sequence {T_ij* | i ∈ I, j ∈ J} on the straight line connecting P and p*.
- As a result, as in the preceding embodiments, a virtual viewpoint image in which partial image deterioration is inconspicuous can be generated easily and at high speed.
- In addition, in Embodiment 1-3, since a unique projection plane group is set for each camera irrespective of the positional relationship between the cameras, image generation can be performed by a consistent processing method even when the arrangement of the cameras is complicated or irregular, without this affecting the setting processing of the projection plane groups.
- The virtual viewpoint image generation method described in Embodiment 1-3 is only an example, and it does not depend on a specific calculation method or processing procedure within a range that does not greatly depart from the gist of the present embodiment.
- In the case of a black-and-white image, the color information corresponds to luminance information and can be processed in the same manner.
- As described above, according to the virtual viewpoint image generation method of Embodiment 1-3, no attempt is made to perform distance estimation accurately in every case and at every place, as in the conventional method, in order to obtain an accurate geometric model of the subject.
- Instead, the portions for which only a low-reliability estimate is obtained are drawn vaguely to reduce their contribution to image generation, preventing extreme image deterioration, while the places for which highly reliable distance data are obtained are drawn clearly to increase their contribution to image generation.
- As a result, the deterioration of the image at places where the reliability of the estimation is low becomes inconspicuous.
- In addition, since the projection planes of the group associated with the same camera all share the same color information, the texture memory storing the color information can be shared when the processing is performed by a computer; memory consumption is therefore lower than holding one texture per projection plane, and the load on the apparatus used for image generation can be reduced.
- Further, since the camera corresponding to a given projection plane is uniquely determined, lens calibration such as correction of lens distortion is easy to perform by setting the correspondence between the two coordinate systems in advance, which also speeds up processing.
- As a result, the processing time of a device that generates a virtual viewpoint image from a plurality of images of a subject can be shortened, the load on the device can be reduced, and even a popular personal computer can generate an image with little partial deterioration in a short time.
- The second embodiment mainly corresponds to claims 12 to 21.
- The basic mechanism of the second embodiment is the same as that of the first embodiment.
- Its feature is that a plurality of camera sets are prepared,
- and the existence probability is calculated based on the degree of correlation obtained for each camera set.
- In the drawings for explaining the second embodiment, components having the same function are denoted by the same reference numerals.
- The image generation method of the second embodiment obtains the three-dimensional shape of an object shown in a plurality of images having different viewpoints, and based on it presents a three-dimensional image of the object or generates an image of the object viewed from an arbitrary viewpoint.
- At that time, the three-dimensional shape of the object is obtained by setting projection planes of a multilayer structure using a texture mapping method and estimating the distance from the observer's viewpoint to each point on the surface of the object.
- For each point (hereinafter, projection point) on the projection planes that overlap as viewed from the observer's viewpoint,
- the degree of correlation between the points on the acquired images corresponding to that projection point
- (hereinafter, corresponding points) is obtained. Based on the degrees of correlation of the projection points overlapping as viewed from the observer's viewpoint, it is then estimated near which of the overlapping projection points the surface of the object lies.
- However, the method does not select a single one of the plurality of overlapping projection points and decide that the surface of the object exists near it; instead, the surface of the object is considered to exist near each projection point at a rate corresponding to the degree of correlation of that projection point.
- That is, the probability that the object surface exists at or near each projection point (hereinafter, existence probability) is determined from the degree of correlation of each projection point. Then, when an image is generated based on the three-dimensional shape of the subject, the color information of the projection points is assigned to the color information of each point on the generated image in proportion to the existence probabilities. In this way, the portions where the distance to the object surface is difficult to estimate are rendered ambiguously, and discontinuous noise and the like are made inconspicuous to the observer.
- At that time, the existence probability may be obtained using a parameterized function p(l) reflecting the probability density distribution of the object's presence.
- FIGS. 21 to 28 are schematic diagrams for explaining the principle of the image display method according to the present embodiment.
- FIG. 21 is a diagram for explaining the concept of the method of generating the image to be displayed.
- FIGS. 22, 23(a), and 23(b) show how to calculate the degree of correlation of corresponding points.
- FIGS. 24(a) and 24(b) explain the problem that arises when calculating the degree of correlation of corresponding points.
- FIG. 25 illustrates the method for solving the problem when calculating the degree of correlation.
- FIGS. 26(a) and 26(b) relate to the existence probability,
- and FIGS. 27(a), 27(b), and 28 illustrate an example of a method for improving its accuracy and the features of the present embodiment.
- In the image generation method of the present embodiment, a virtual three-dimensional space is set in an image generation device such as a computer, and the three-dimensional shape of the object is expressed in that space.
- The object is photographed by the cameras installed at the viewpoints C_i.
- For a projection point T_m on the projection plane L_m, the degree of correlation (similarity) of the corresponding points G_{i,m}, G_{i+1,m}, G_{i+2,m} on the respective images with respect to the projection point T_m is determined.
- For this, the degree of correlation Q_j of the corresponding points G_{i,j} with respect to each projection point T_j is used.
- The degree of correlation Q_j is obtained, for example, using the following Expression 40, as in the first embodiment.
- Here, the color information at the projection point T_j is taken to be the average value of the color information K_{i,j} at the corresponding points G_{i,j}.
- The projection plane L_j is placed at the position of the set distance l_j.
- However, when the surface of the object B also appears at the corresponding points G' of another projection point,
- the distribution of the correlation degrees Q becomes as shown in FIG. 24(b), and it is difficult to estimate near which projection point T_j the surface of the object lies.
- If the estimation is then incorrect, the error appears as discontinuous noise on the displayed image.
- Therefore, in the present embodiment, it is assumed that the surface of the object exists at each projection point T_j with a probability corresponding to the ratio of the magnitudes of the correlation degrees Q_j.
- Letting β_j denote the probability (existence probability) that the surface of the object exists at or near the projection point T_j, the existence probabilities of the projection points T_j on a straight line lp drawn from the observer's viewpoint P, that is, of the projection points overlapping as viewed from the observer's viewpoint P, must satisfy the conditions shown in Expressions 41 and 42 below.
- It is sufficient for the existence probability to satisfy the conditions of Expressions 41 and 42; the existence probability may therefore be determined by a method other than the conversion processing represented by Expressions 43 and 44.
- In this way, the probability β_j that the surface of the object exists at or near each projection point T_j is determined; for example, as shown in FIG. 25, the color information K_j and the existence probability β_j are determined for each projection point T_j on the straight line lp drawn from the observer's viewpoint P.
- When the degrees of correlation Q_j of the projection points T_j on the straight line lp make the estimation difficult, as shown for example in FIG. 24(b), a plurality of projection points with comparable existence probabilities appear. The pixels corresponding to the projection points T_j on these projection planes are then displayed with corresponding luminances, and to an observer looking at the projection planes L_j from the viewpoint P the sense of distance appears ambiguous. However, since the image of the object surface is displayed at a plurality of projection points overlapping as viewed from the observer's viewpoint P, the discontinuous noise caused by an erroneous estimate of the distance to the object surface does not occur. It is therefore possible to display a three-dimensional image of the object that looks natural to the observer without obtaining the accurate three-dimensional shape of the object to be displayed.
- To improve the accuracy, statistical processing may further be applied: the existence probability before the statistical processing, that is, the value obtained from Expressions 43 and 44, is taken as the evaluation reference value ν_j, and the value obtained after performing statistical processing on the evaluation reference values ν_j is defined as the existence probability.
- Here, μ is the average value and σ is a parameter representing the variance of the distribution.
- The existence probability β_j is determined using, for example, the following Expression 48.
- Here, l_j^- and l_j^+ are, as shown in FIG. 26(b), the lower limit and upper limit of the distance range within which the surface of the object is considered to exist on the projection plane L_j at distance l_j, and are given, for example, by the following Expressions 49 and 50.
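The statistical processing above can be sketched as follows, under assumed forms: the mean of a normal density p(l) is taken as the ν-weighted average of the plane distances, and each β_j is the probability mass of p(l) between the midpoints around plane L_j (an Expression-45-to-50-style reconstruction, not the patent's exact formulas):

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative distribution of the normal density, via math.erf."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def smoothed_probability(l, nu, sigma=1.0):
    """Fit a normal density p(l) whose mean is the nu-weighted average of
    the projection-plane distances l_j, then take beta_j as the mass of
    p(l) between the interval bounds l_j^- and l_j^+ around each plane
    (here: midpoints to the neighboring planes; assumed choices)."""
    total = sum(nu)
    mu = sum(li * vi for li, vi in zip(l, nu)) / total   # distribution mean
    edges = [-math.inf]
    edges += [(l[j] + l[j + 1]) / 2.0 for j in range(len(l) - 1)]
    edges += [math.inf]
    return [normal_cdf(edges[j + 1], mu, sigma) - normal_cdf(edges[j], mu, sigma)
            for j in range(len(l))]

# Four planes at distances 1..4; the raw evaluation values peak at l = 2.
beta = smoothed_probability([1.0, 2.0, 3.0, 4.0], [0.1, 0.6, 0.2, 0.1])
```

The smoothed probabilities still sum to 1 and still peak at the plane nearest the estimated mean, but the Gaussian spreads some mass to neighboring planes, which is precisely what renders uncertain regions softly.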
- Further, when a plurality of camera sets Ξ_h are prepared, the degree of correlation Q_{h,j} is calculated for each set using the corresponding points G_{i,j} (i ∈ Ξ_h) corresponding to the projection point T_j,
- and the existence probability β_{h,j} can be obtained from the following Expression 51.
- The color information K_j of the projection point T_j can then be determined, for example, by using the following Expression 52, from the color information K_{h,j} obtained for each camera set Ξ_h and the existence probabilities.
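The integration over camera sets can be sketched as follows; averaging the per-set probabilities and taking a probability-weighted color mix are assumed Expression-51/52-style forms chosen for illustration:

```python
def integrate_camera_sets(beta_h, color_h):
    """Integrate per-camera-set results: the overall existence probability
    beta_j averages the beta_h,j obtained with each camera set Xi_h, and
    the color K_j is their beta-weighted mix (assumed forms).
    beta_h[h][j] and color_h[h][j] index camera set h, projection point j."""
    H, J = len(beta_h), len(beta_h[0])
    beta, color = [], []
    for j in range(J):
        bs = [beta_h[h][j] for h in range(H)]
        beta.append(sum(bs) / H)             # stays normalized over j
        w = sum(bs)
        color.append(sum(beta_h[h][j] * color_h[h][j] for h in range(H)) / w
                     if w > 0 else 0.0)
    return beta, color

# Two camera sets, two projection points on one ray (gray-level colors).
beta, color = integrate_camera_sets(
    beta_h=[[0.7, 0.3], [0.5, 0.5]],
    color_h=[[100.0, 40.0], [120.0, 40.0]])
```

A camera set that occludes the point or sees it at a grazing angle contributes a low β_{h,j}, so its color estimate is automatically down-weighted in the integrated result.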
- FIGS. 29 to 34 are schematic diagrams for explaining the image generation method according to Embodiment 2-1 of the present invention.
- FIG. 29 is a flowchart showing an example of the overall processing procedure,
- FIG. 30 is a flowchart showing an example of the processing procedure of the step of determining the color information and existence probability of a projection point,
- FIG. 31 is a flowchart showing an example of the step of determining the existence probability in FIG. 30,
- FIG. 32 is a diagram for explaining the camera sets,
- and FIGS. 33, 34(a), and 34(b) are diagrams illustrating a method of converting the information on the projection planes into the information on the display surfaces.
- The image generation method of Embodiment 2-1 acquires the three-dimensional shape of the object shown in images captured from a plurality of viewpoints, and, based on the acquired three-dimensional shape, generates the two-dimensional images to be displayed on each image display surface of an image display means having a plurality of image display surfaces, such as a DFD.
- As shown for example in FIG. 29, the image generation method comprises: step 1 of acquiring images of the object photographed from the viewpoints C_i; step 2 of setting the viewpoint P of the observer; step 3 of acquiring the three-dimensional shape of the object by obtaining the color information and existence probability of the points (projection points) on the projection planes that represent it;
- a step of generating the two-dimensional images to be displayed on the image display surfaces by converting the acquired three-dimensional shape into those images; and a step of displaying the display points on the image display surfaces with luminance or transparency corresponding to the color information and the existence probability.
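The projection-plane-to-display-surface conversion (the step FIGS. 33 and 34 illustrate) can be sketched under assumed conventions: each projection point's color, scaled by its existence probability, is assigned to the display surface nearest its projection plane, so a luminance-modulation DFD shows brighter points where the surface is more likely to exist. The nearest-surface assignment is an illustrative choice, not necessarily the patent's exact mapping:

```python
def to_display_surfaces(l_proj, beta, color, l_disp):
    """Map projection points (distance l_proj, probability beta, gray
    color) onto DFD display surfaces at distances l_disp: accumulate the
    probability as a luminance weight and the beta-weighted color on the
    nearest surface. Returns (luminance_weight, mixed_color) per surface."""
    out = [[0.0, 0.0] for _ in l_disp]
    for lp, b, c in zip(l_proj, beta, color):
        # index of the display surface nearest this projection plane
        d = min(range(len(l_disp)), key=lambda k: abs(l_disp[k] - lp))
        out[d][0] += b
        out[d][1] += b * c
    return [(w, acc / w if w > 0 else 0.0) for w, acc in out]

# Four projection planes collapsed onto a two-surface DFD.
surfaces = to_display_surfaces(
    l_proj=[1.0, 2.0, 3.0, 4.0], beta=[0.1, 0.6, 0.2, 0.1],
    color=[50.0, 200.0, 100.0, 50.0], l_disp=[1.5, 3.5])
```

The total luminance weight across surfaces stays 1 for each ray, so the displayed brightness is distributed between the front and back surfaces in proportion to where the object surface probably lies.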
- The step 3 includes, as shown in FIG. 30, a step 301 of setting projection planes L_j of a multilayer structure, a step 302 of determining a reference viewpoint for acquiring the three-dimensional shape of the object, a step 303 of setting the projection point sequences, each being a set of projection points T_j on the projection planes L_j that overlap when viewed from the reference viewpoint,
- a step 304 of determining the corresponding points, a step 305 of securing an array for storing the color information and existence probability of the projection points T_j, and a step 306 of determining the color information and existence probability of the projection points T_j.
- The step 306 includes a step 30601 of initializing the projection point sequence, a step 30602 of initializing the camera set Ξ_h and the voting data,
- a step 30603 of initializing the projection point T_j on the projection point sequence, a step 30604 of determining the color information of the projection point, a step 30605 of calculating the degree of correlation Q_j from the points G_i corresponding to the projection point T_j,
- a step 30606 of updating the projection point T_j and repeating the processing of steps 30604 and 30605, a step 30607 of voting each degree of correlation Q_j obtained with the camera set Ξ_h,
- a step 30608 of updating the camera set Ξ_h and repeating the processing from step 30604 to step 30607 for all camera sets, a step 30609 of determining the existence probability using the degrees of correlation Q_j voted in step 30607,
- and a step 30610 of repeating the processing from step 30602 to step 30609 for all projection point sequences.
- The step 30609 includes a step 30609a of initializing the camera set Ξ_h,
- a step 30609b of calculating the evaluation reference values ν_{h,j} from the degrees of correlation Q_{h,j} obtained using the camera set Ξ_h,
- a step 30609d of determining the existence probability β_{h,j} of each projection point T_j, a step 30609e of updating the camera set Ξ_h and repeating the processing from step 30609b to step 30609d, and a step of determining each existence probability β_j by integrating the existence probabilities β_{h,j} determined for the camera sets Ξ_h.
- The viewpoints C_i at which the cameras are installed are arranged one-dimensionally on a straight line, for example as shown in FIG. 21.
- The viewpoints C_i of the cameras need not be on a single straight line; they may be arranged one-dimensionally on a plurality of straight lines or curves, or arranged two-dimensionally in a lattice on a plane or a curved surface.
- The images to be acquired may be color images or black-and-white images;
- in the present embodiment, the description assumes that color images are acquired in which each point (pixel) on the image is represented by color information using the three primary colors red (R), green (G), and blue (B).
- the viewpoint P of the observer who observes the three-dimensional image (image) of the object displayed on the DFD is set in a virtual space on an image generation device such as a computer (Step 2).
- Next, the three-dimensional shape of the subject used to generate the image is obtained (step 3). In step 3, first, projection planes L for estimating the three-dimensional shape (surface shape) of the subject are set in the virtual space (step 301). At this time, the set interval of the projection planes L may or may not match the interval of the image display surfaces of the DFD on which the image is displayed.
- The reference viewpoint may be, for example, the observer's viewpoint, or may be set to any point in the three-dimensional space other than the observer's viewpoint.
- Next, a projection point sequence composed of the set of projection points T on the projection planes L that overlap as seen from the observer's viewpoint P or the reference viewpoint is set, and a corresponding point G on each acquired image is set for each projection point T (step 303).
- Here, the projection point T is represented by a point (X, Y, Z) in the virtual space (three-dimensional space), and the corresponding point G is represented by two-dimensional coordinates (x, y) on the image plane of the image taken from the viewpoint C. The coordinates (x, y) of the corresponding point G are obtained by projecting the projection point (X, Y, Z) onto the image plane of the image taken from the viewpoint C.
- For this projection, the general 3-row by 4-column projective transformation matrix described in the first embodiment may be used.
- The image to be handled is a so-called digital image, represented as a two-dimensional array in the memory of the device. The coordinate system representing a position in this array is referred to as the digital image coordinate system, and a position is represented by (u, v).
- For example, the position of each pixel on the digital image is indicated by a variable u that takes one of the integer values from 0 to 639 and a variable v that takes one of the integer values from 0 to 479, and the color information of that point is given as data obtained by quantizing the red (R), green (G), and blue (B) information at that address into, for example, 8 bits each.
- At this time, the coordinates (x, y) in the virtual space and the digital image coordinate system (u, v) are associated on a one-to-one basis, for example, by Equation 53. In Equation 53, it is assumed, for example, that the u axis of the digital image coordinate system is parallel to the x axis. Here, k_u and k_v are the unit lengths of the u-axis and the v-axis of the digital image coordinate system with respect to the (x, y) coordinate system in the virtual space, respectively, and θ is the angle between the u-axis and the v-axis.
- In step 303, the coordinates (X, Y, Z) of the projection point T are associated with the digital image coordinates (u, v). This association may be provided, for example, by giving the values of (X, Y, Z) as a table for all (u, v), or by setting the values of (X, Y, Z) only for representative (u, v) and obtaining the other points by interpolation processing such as linear interpolation.
- Although (u, v) takes discrete values, in the following description it is assumed to take continuous values unless otherwise specified, and an appropriate discretization process is performed when accessing the two-dimensional array.
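Such a discretization process can be, for example, bilinear interpolation of the pixel array. The following is an illustrative sketch and not part of the patent disclosure; the function name and the use of NumPy are assumptions:

```python
import numpy as np

def sample_bilinear(img, u, v):
    """Read a 2-D pixel array at a continuous coordinate (u, v) by
    bilinear interpolation -- one possible form of the 'appropriate
    discretization process' mentioned in the text."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    u1 = min(u0 + 1, img.shape[1] - 1)   # clamp at the image border
    v1 = min(v0 + 1, img.shape[0] - 1)
    return ((1 - du) * (1 - dv) * img[v0, u0]
            + du * (1 - dv) * img[v0, u1]
            + (1 - du) * dv * img[v1, u0]
            + du * dv * img[v1, u1])

# A 2x2 ramp: sampling at the centre gives the mean of the four corners.
img = np.array([[0.0, 1.0], [2.0, 3.0]])
print(sample_bilinear(img, 0.5, 0.5))  # -> 1.5
```

Nearest-neighbour rounding would be an equally valid, cheaper choice of discretization.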
- Next, a combination (camera set) Ξ of camera viewpoints C to be used for obtaining the degree of correlation Q is determined (step 304).
- For example, camera sets each consisting of four of the camera viewpoints, such as Ξ = {C1, C2, C3, C4}, are determined.
- The method of determining the camera sets Ξ is arbitrary; in the example shown in FIG. 32, the sets Ξ can be prepared in advance according to the installation status of the cameras (viewpoints C).
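As an illustrative sketch of such advance preparation, the camera sets could simply be enumerated as subsets of the available viewpoints; the helper name and the set size of four are assumptions for illustration:

```python
from itertools import combinations

def make_camera_sets(viewpoints, set_size):
    """Enumerate candidate camera sets as all subsets of the available
    camera viewpoints of a given size.  The patent leaves the choice of
    sets arbitrary; fixed groups of neighbouring cameras would work too."""
    return [set(s) for s in combinations(viewpoints, set_size)]

cameras = ["C1", "C2", "C3", "C4", "C5"]
sets = make_camera_sets(cameras, 4)
print(len(sets))  # -> 5, i.e. C(5, 4) subsets of four viewpoints
```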
- After the processing of step 304 is completed, an array for storing the color information K of each projection point T and the information β on the probability that the subject exists is secured in the memory (storage means) of the image generation device (step 305). The array stores, for each projection point T, the color information K as red (R), green (G), and blue (B) components together with the existence probability β of the subject.
- After the processing of step 305 is completed, the color information and the existence probability of each projection point T are determined using the plurality of acquired images (step 306).
- In step 306, for example, for a certain projection point sequence, the color information K and the degree of correlation Q of each projection point T on the projection point sequence are calculated using a designated camera set Ξ, and this calculation is repeated for every camera set.
- In step 306, first, as shown in FIG. 30, the projection point sequence is initialized (step 30601). Next, in step 30602, the camera set Ξ and the voting data of the correlation degrees are initialized.
- Next, the color information K of the projection point T is determined (step 30604). At this time, the color information K of the projection point T is, for example, the average value of the color information κ of the corresponding points G included in the camera set Ξ.
- Next, the degree of correlation Q between the projection point T and the corresponding points G included in the selected camera set Ξ is calculated (step 30605). The correlation degree Q is calculated, for example, from the variance of the color information of the corresponding points G.
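The determination of the color information K and the correlation degree Q for one projection point can be sketched as follows; taking the mean and the summed squared deviation is one plausible reading of the procedure, not the patent's exact formula:

```python
import numpy as np

def color_and_correlation(corresponding_colors):
    """Colour information K of a projection point as the mean of the
    colours kappa of the corresponding points G in one camera set, and
    a correlation degree Q as the summed squared deviation from that
    mean (small Q = the cameras agree = high correlation)."""
    colors = np.asarray(corresponding_colors, dtype=float)
    K = colors.mean(axis=0)
    Q = float(((colors - K) ** 2).sum())
    return K, Q

# Identical corresponding colours agree perfectly, so Q is 0.
K, Q = color_and_correlation([[10, 20, 30], [10, 20, 30]])
print(K, Q)  # -> [10. 20. 30.] 0.0
```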
- Next, the projection point T is updated, and it is determined whether the processing of steps 30604 and 30605 has been performed on all projection points on the projection point sequence being processed (step 30606). If there is an unprocessed projection point, the process returns to step 30604 and the processing is repeated.
- When the processing of steps 30604 and 30605 has been performed on all projection points on the projection point sequence being processed, the result, that is, the color information K and the degree of correlation Q obtained from the corresponding points G included in the selected camera set Ξ, is voted (step 30607).
- After the processing of step 30607 is completed, the camera set Ξ is updated, and it is determined whether there is a camera set for which the processing from step 30604 to step 30607 has not been performed on the projection point sequence being processed (step 30608). If there is such a camera set, the flow returns to step 30603 to repeat the processing.
- After the processing up to step 30607 has been performed for every camera set, the color information K and the existence probability β of each projection point T are determined from the color information K and the correlation degrees Q voted in step 30607 (step 30609).
- In step 30609, for example, as shown in FIG. 31, first, the camera set Ξ is initialized (step 30609a).
- Next, an evaluation reference value ν is calculated from the correlation degrees Q of the projection points T calculated using the camera set Ξ (step 30609b). Then, from the evaluation reference values ν, the distribution function p of the existence probability is obtained (step 30609c). The distribution function p is obtained using, for example, Equations 45, 46, and 47.
- Then, the existence probability β of each projection point T is determined using, for example, Equation 48 and the subsequent expressions (step 30609d). Next, the camera set Ξ is updated (step 30609e); if there is an unprocessed camera set, the process returns to step 30609b and the processing is repeated.
- When the processing has been performed for all camera sets, in step 30609f, the color information K and the existence probability β of each projection point T are determined by integrating the results obtained for the respective camera sets Ξ. The color information K is calculated using, for example, Equation 52, and the existence probability β is calculated using, for example, Equation 51.
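One simple way to turn the voted correlation degrees into existence probabilities, in the spirit of steps 30609b to 30609d, is to normalize a monotone function of Q; the reciprocal used below is an assumption standing in for Equations 45 to 50:

```python
import numpy as np

def existence_probability(correlations, eps=1e-8):
    """Existence probabilities beta_j for the projection points of one
    projection point sequence from their correlation degrees Q_j.
    A small Q (good agreement between cameras) yields a large beta;
    normalising the reciprocal 1/Q keeps the betas non-negative and
    summing to one."""
    Q = np.asarray(correlations, dtype=float)
    nu = 1.0 / (Q + eps)      # evaluation-reference-like value
    return nu / nu.sum()      # normalise into a probability

beta = existence_probability([4.0, 1.0, 4.0])
print(beta.argmax())  # -> 1: the best-matching projection point
```

The patent additionally allows statistical smoothing of the evaluation reference values before normalization, which this sketch omits.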
- When step 30609f ends, the processing of step 30609 ends. Then, the projection point sequence is updated, and it is determined whether there is a projection point sequence that has not been subjected to the processing from step 30602 to step 30609 (step 30610). If there is such a projection point sequence, the flow returns to step 30602 to repeat the processing.
- When the processing from step 30602 to step 30609 has been performed on all projection point sequences, the processing of step 306 (step 3) is completed, and the three-dimensional shape of the subject can be obtained.
- After the three-dimensional shape of the subject is obtained in step 3, a two-dimensional image to be displayed on each image display surface of the DFD is generated based on that three-dimensional shape. In this image generation, the color information KD and the existence probability γ of each display point A on the image generation planes LD are determined from the color information K and the existence probabilities β of the projection points T that coincide with the display point.
- At this time, the set interval of the projection planes L need not match the installation interval of the image generation planes LD, and the number of set projection planes L need not match the number of set image generation planes LD. That is, depending on how the projection planes L are set, a projection plane L may not coincide with any image generation plane LD, as shown in FIG. 33, for example. In such a case, the color information KD and the existence probability γ of the intersection (display point) A of a straight line lr drawn from the observer's viewpoint P with each image generation plane LD are obtained by the following procedure.
- First, the color information KD of each display point A is set to, for example, the average value of the color information K of the projection points T on the straight line lp for which the display point A (image generation plane LD) is the nearest display point (image generation plane). Alternatively, the color information KD of the display point A may be the color information K of the projection point T closest to the display point A instead of the average value.
- Next, the existence probability γ of each display point A is set to the value obtained by adding the existence probabilities β of the projection points T for which the display point A (image generation plane LD) is the nearest display point (image generation plane). At this time, if the set of projection planes L for which a certain image generation plane LD is the nearest image generation plane is written as {L}, the existence probability γ of the display point A on the image generation plane LD is obtained by summing the existence probabilities β of the projection points T on each projection plane L in that set.
- In the example shown in FIG. 33, the projection planes L1, L2, and L3 have the image generation plane LD as their nearest image generation plane. Therefore, the color information KD of the display point A is set to the average value of the color information K1, K2, and K3 of the projection points T1, T2, and T3, and the existence probability γ of the display point A is set to the sum of the existence probabilities β1, β2, and β3 of the projection points T1, T2, and T3.
- When the installation interval of the image generation planes LD and the set interval of the projection planes L are different, so that a projection plane L lies between two consecutive image generation planes LD1 and LD2, the existence probability β of a projection point T on that projection plane may be distributed to the image generation planes LD1 and LD2 according to the ratio of the distances to them. In general, the existence probability γ of the display point A on each image generation plane LD can be given by the following Equation 55 using the existence probabilities β of the projection points T.
- w is a coefficient indicating the degree of contribution of the projection plane L to the image generation plane LD.
- Consider, for example, the case shown in FIG. 34 where a projection plane L is set between the two image generation planes LD1 and LD2. In this case, the degrees of contribution w1 and w2 of the projection plane L to the image generation planes LD1 and LD2 are given by the following Equation 56.
- Here, if the distances from the projection plane L to the image generation planes LD1 and LD2 are B1 and B2, respectively, w1 and w2 are given by Equation 57 below.
- As a result, the existence probability γ of the display point A on each image generation plane LD is as shown in Expression 58 below.
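The distance-ratio distribution of Equations 56 to 58 can be sketched as follows; the function and variable names are assumptions, with B1 and B2 the distances of the projection plane from the two image generation planes:

```python
def distribute_probability(beta, B1, B2):
    """Split the existence probability beta of a projection point lying
    between two image generation planes LD1 and LD2, at distances B1 and
    B2 from them, in inverse proportion to those distances: the nearer
    plane receives the larger share, and the total is preserved."""
    w1 = B2 / (B1 + B2)   # degree of contribution to LD1
    w2 = B1 / (B1 + B2)   # degree of contribution to LD2
    return w1 * beta, w2 * beta

g1, g2 = distribute_probability(0.6, 1.0, 3.0)
print(round(g1, 2), round(g2, 2))  # -> 0.45 0.15: LD1 is nearer, so it gets more
```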
- a two-dimensional image to be displayed on each image display surface of the DFD can be obtained.
- Then, the points (pixels) on each image display surface of the DFD are displayed with the color information KD assigned to the corresponding points on each image generation plane LD (step 5).
- At this time, if the DFD is of a luminance modulation type, the color information KD of each display point A of each image generation plane LD is displayed at a luminance according to the existence probability γ. If the DFD is of a transmission type, the transmittance at each display point A may be set to a transmittance according to the existence probability γ and displayed.
- FIGS. 35 to 37 are schematic diagrams showing a schematic configuration of an apparatus and a system to which the image generation method of the embodiment 2-1 is applied.
- FIG. 35 is a block diagram showing a configuration example of the image generation apparatus.
- FIG. 36 is a diagram showing a configuration example of an image display system using an image generation device, and
- FIG. 37 is a diagram showing another configuration example of an image display system using an image generation device.
- 6 is an image generation device
- 601 is a subject image acquisition unit
- 602 is a reference viewpoint setting unit
- 603 is a projection plane setting unit
- 604 is a projection plane information storage area securing unit
- 605 is color information / existence probability determining unit.
- Reference numeral 606 denotes projection plane information-display plane information conversion means
- 607 denotes image output means
- 7 denotes image display means (DFD)
- 8 denotes subject image photographing means
- 9 denotes observer viewpoint input means.
- The image generation apparatus 6 to which the image generation method of Embodiment 2-1 is applied includes, for example, as shown in FIG. 35: subject image acquisition means 601 for acquiring a plurality of subject images taken under different photographing conditions; observer viewpoint setting means 602 for setting the viewpoint of the observer who views the image to be generated; projection plane etc. setting means 603 for setting the projection planes used to determine the existence probability, the projection point sequences, the corresponding points, the camera sets, and the like; projection plane information storage area securing means 604; color information / existence probability determining means 605 for determining the color information and the existence probability; projection plane information - display plane information conversion means 606 for converting the color information and existence probability information of the projection points into the color information and existence probabilities of the display planes; and image output means 607. The image output from the image output means 607 is displayed on image display means 7 having a plurality of overlapping display surfaces, such as a DFD.
- The subject image acquisition means 601 acquires images of the subject (object) photographed by the subject image photographing means (cameras) 8. The images to be acquired may be obtained directly from the images photographed by the subject image photographing means 8, or may be obtained indirectly from a magnetic, electrical, or optical recording medium on which the images photographed by the subject image photographing means 8 are recorded.
- The observer viewpoint setting means 602 sets, based on information input using the image condition input means 9, the relative positional relationship between the observer's viewpoint and the image display means 7, such as the distance from the observer's viewpoint and the direction of the line of sight. The image condition input means 9 may be a means for detecting the posture and line of sight of the observer and inputting information according to them.
- The projection plane etc. setting means 603 sets the projection planes L, the projection point sequences, and the corresponding points G on each image. The projection plane etc. setting means 603 may also set a camera set Ξ based on conditions input by the image condition input means 9.
- The projection plane information storage area securing means 604 secures an area for storing the color information K and the existence probability β of each projection point T on each projection plane, for example, in a memory provided in the apparatus.
- The color information / existence probability determining means 605 determines, based on the principle described above, the color information K of each projection point T from the color information of the corresponding points G on the images corresponding to that projection point, together with the existence probability β.
- The projection plane information / display plane information conversion means 606 converts the color information and the existence probabilities of the projection planes into the color information and the luminance distribution ratios of the points (display points) on the image generation planes, that is, the surfaces that generate the images to be displayed on the display surfaces of the image display means 7.
- the image generation device 6 performs the processing from step 1 to step 5 described in the embodiment 2-1 to generate an image to be displayed on the DFD. That is, the image generating device 6 does not need to perform the process of obtaining an accurate three-dimensional shape of the object as in the related art. Therefore, an image displayed on the DFD can be generated at high speed and easily even in a device without high processing capability.
- The image generation device 6 can also be realized by, for example, a computer and a program executed by the computer. In that case, a program in which instructions corresponding to the processing procedure described in Embodiment 2-1 are described may be executed by the computer. The program may be provided recorded on a magnetic, electrical, or optical recording medium, or may be provided via a network such as the Internet.
- An image display system using the image generation device 6 may be configured as shown in Fig. 36, for example.
- At this time, the subject image photographing means 8 may be installed in a place close to the space where the observer User observes the image display means (DFD) 7, or may be installed in a geographically remote place. If it is installed in a remote place, the photographed images can be transferred to the image generation device 6 using a network such as the Internet.
- Furthermore, an image display system using the image generation device 6 can be applied not only to the case where a certain observer User observes a certain subject Obj, but also to a two-way communication system such as a videophone or a video conference. In that case, for example, as shown in FIG. 37, image generation devices 6A and 6B, image display means (DFD) 7A and 7B, subject image photographing means 8A and 8B, and reference viewpoint setting means 9A and 9B may each be provided. Then, if the image generation devices 6A and 6B installed in the spaces where the observers UserA and UserB are located are connected via a network 10 such as the Internet, the observer UserA can observe, on the image display means 7A, a three-dimensional image of the observer UserB generated from the images photographed by the subject image photographing means 8B.
- the observer UserB can observe the three-dimensional image of the observer UserA generated from the image photographed by the subject image photographing means 8A on the image display means 7B.
- At this time, each of the image generation devices 6A and 6B need not have the full configuration shown in FIG. 35; either one of them may be a general communication terminal not provided with the configuration means shown in FIG. 35. Further, the respective components shown in FIG. 35 may be distributed between the image generation devices 6A and 6B.
- Further, even if an image generation device is not installed in the spaces where the observers UserA and UserB are located, a three-dimensional image of the subject to be displayed on the image display means (DFD) 7A and 7B can be obtained by using an image generation device 6C on the network 10.
- Although the subject image photographing means 8 is shown as photographing means composed of four cameras, the number of cameras may be two, three, five, or more. The cameras may be arranged one-dimensionally on a straight line or curve, or in a two-dimensional lattice on a plane or curved surface.
- Further, in the present Embodiment 2-1, the camera sets Ξ are set in advance before the processing of step 306 is performed; however, a camera set that meets conditions specified by the observer may instead be set dynamically while the processing for generating the image to be displayed is performed by program processing. In that case, the observer inputs conditions such as the distribution of the correlation degrees Q or a threshold value from the image condition input means, and the process of step 306 is performed while searching for a camera set that meets those conditions; it is then considered that a three-dimensional image close to the image desired by the observer can be displayed.
- In the present Embodiment 2-1, the case where each point (pixel) on the image is expressed by color information using the three primary colors red (R), green (G), and blue (B) has been described as an example. However, the image display method of Embodiment 2-1 is not limited to color images; the three-dimensional shape of the subject can also be obtained by acquiring black-and-white images in which each point (pixel) is expressed using luminance (Y) and color difference (U, V). When the acquired images are black-and-white images, the luminance information (Y) is used as the information corresponding to the color information, the three-dimensional shape is acquired by the procedure described in Embodiment 2-1, and the two-dimensional images are generated.
- FIGS. 38 to 42 are schematic diagrams for explaining the arbitrary viewpoint image generation method according to the embodiment 2-2.
- FIG. 38 is a flowchart illustrating an example of the overall processing procedure
- FIG. 39 is a diagram illustrating the principle of rendering
- FIG. 40 is a diagram illustrating a problem when generating an arbitrary viewpoint image
- FIGS. 41 (a) and 41 (b) are diagrams for explaining a method of solving a problem in generating an arbitrary viewpoint image
- FIG. 42 is a flowchart showing an example of a processing procedure for converting the existence probability into transparency.
- In Embodiment 2-1, the method of using the three-dimensional shape of the subject acquired in step 3 to generate the two-dimensional images to be displayed on the respective image display surfaces of an apparatus having a plurality of image display surfaces, such as the DFD, was described as an example. However, the three-dimensional shape model of the subject is not limited to this use; it can also be used when generating a two-dimensional image of the subject viewed from an arbitrary viewpoint.
- The difference from Embodiment 2-1 is that, as shown in FIG. 38, after step 3 described above, rendering, that is, the processing of step 11 for generating a two-dimensional image of the three-dimensional shape of the subject viewed from the observer's viewpoint, is performed.
- the process of acquiring the three-dimensional shape of the subject in steps 1 to 3 is as described in the embodiment 2-1 and therefore the detailed description is omitted.
- Each point (pixel) on the displayed arbitrary viewpoint image is determined as shown in FIG. 39: the color information K of the projection points T overlapping on the line of sight is weighted by the values of the existence probability β and mixed, and the color information of the point A on the generated image is calculated, for example, by Equation 59 below.
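Equation 59 is a probability-weighted mixture of the projection-point colors along the line of sight; the following is a minimal sketch, assuming colors are RGB triples:

```python
import numpy as np

def mix_by_existence_probability(colors, betas):
    """Colour of a point A on the generated image as the existence-
    probability-weighted mixture of the colours K of the projection
    points on the line of sight (the role of Equation 59)."""
    colors = np.asarray(colors, dtype=float)
    betas = np.asarray(betas, dtype=float)
    return (betas[:, None] * colors).sum(axis=0)

# A surface that is 80% likely red and 20% likely blue mixes to mostly red.
K_A = mix_by_existence_probability([[255, 0, 0], [0, 0, 255]], [0.8, 0.2])
print(K_A)  # approximately [204, 0, 51]
```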
- However, depending on the shape of the subject and the positional relationship between the reference viewpoint R and the virtual viewpoint P, the color information of a point on the generated image may differ significantly from the color information of the surface of the actual subject, or may not fit within the effective color space.
- Consider, for example, the case where two projection planes L1 and L2 are set for an actual subject in the positional relationship shown in FIG. 40, and the color information and existence probabilities of the projection points T1' and T2' overlapping as viewed from the reference viewpoint R are determined as described in Embodiment 2-1. In this case, the color information K of the point A on the generated image becomes a mixture of the color information K1' and K2' of those projection points, which may not be appropriate when viewed from the virtual viewpoint P.
- For this reason, the mixing process for obtaining the color information of each point on the generated image is performed sequentially from the projection point far from the viewpoint of the generated image toward the projection point close to it. The color information obtained at each stage of the mixing process is the internal division, at a ratio according to the transparency, of the color information at the current projection point and the color information obtained by the mixing process up to the previous projection point; the color information at each stage is thus an internal division of the color information at the preceding stage and the next color information.
- Here, suppose that projection planes L_m (m = 1, 2, ..., M) are set, and let K_m be a vector having red (R), green (G), and blue (B) components that represents the color information of the projection point on the m-th projection plane. The color space V is then represented by Equation 60 below.
- The color information D_m obtained at the m-th stage of the mixing process is an internal dividing point between the vector K_m and the color information D_{m-1} in the color space V, as shown in FIG. 41(b). Therefore, if K_m ∈ V and D_{m-1} ∈ V, then D_m ∈ V also holds. That is, with the mixing process of the present Embodiment 2-2, the color information D of the point A of the generated image can always be kept within the appropriate color space V.
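The far-to-near mixing can be sketched as follows, in the manner of Equations 62 and 63; because every step is an internal division, the result remains inside the color space V whenever the inputs are inside V:

```python
import numpy as np

def composite_far_to_near(colors, alphas):
    """Mixing process in the manner of Equations 62-63: starting from
    the projection point farthest from the viewpoint, each step forms
    the internal division D_m = alpha_m * K_m + (1 - alpha_m) * D_{m-1}.
    An internal division of two points of V stays inside V."""
    D = np.zeros(3)
    for K, a in zip(colors, alphas):   # colours ordered far -> near
        D = a * np.asarray(K, dtype=float) + (1.0 - a) * D
    return D

D = composite_far_to_near([[100, 100, 100], [200, 0, 0]], [1.0, 0.5])
print(D.tolist())  # -> [150.0, 50.0, 50.0], an internal division of the two colours
```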
- In the image generation method of the present Embodiment 2-2, a process of converting the existence probability β into the transparency α is therefore performed. In this conversion process, first, the projection point T is initialized, and the transparency α of that projection point is set equal to its existence probability β (step 1102).
- Then, for each subsequent projection point, if the transparency satisfies α ≠ 1, the transparency is obtained from the following Equation 64 (step 1105); otherwise, the transparency is set to 1 (step 1106).
- Note that the transparency α is not limited to Equation 64 and may be obtained using another expression. Also, in step 1106, the value may be set to a value other than 1.
- When the processing has been performed for all projection points, the conversion process ends. After that, the mixing process is performed using Equations 62 and 63 above, and the color information D of the point A on the arbitrary viewpoint image is obtained. If this process is performed for all points (pixels) on the arbitrary viewpoint image, an arbitrary viewpoint image as seen from the observer's viewpoint P can be obtained.
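One way to realize the conversion from existence probability to transparency (the role of Equation 64, with the fallback of step 1106) is to solve β_m = α_m Π(1 − α) starting from the projection point nearest the viewpoint; the exact ordering convention used here is an assumption:

```python
def probability_to_transparency(betas):
    """Convert existence probabilities beta_m (listed far -> near along
    the line of sight) into transparencies alpha_m so that far-to-near
    compositing reproduces the probability-weighted mixture.  Where the
    denominator vanishes, alpha is set to 1, mirroring step 1106."""
    alphas = [0.0] * len(betas)
    remaining = 1.0                # 1 minus the betas nearer than m
    for m in range(len(betas) - 1, -1, -1):   # nearest point first
        if remaining <= 0.0:
            alphas[m] = 1.0
        else:
            alphas[m] = min(betas[m] / remaining, 1.0)
        remaining -= betas[m]
    return alphas

print(probability_to_transparency([0.2, 0.3, 0.5]))  # -> [1.0, 0.6, 0.5]
```

Compositing far to near with these transparencies reproduces each β exactly, e.g. β_1 = α_1 (1 − α_2)(1 − α_3).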
- The basic configuration of an image generation device that generates such an arbitrary viewpoint image is the same as that of the image generation device described in Embodiment 2-1; it suffices to provide, in addition, a means for performing the above-described mixing process. Therefore, description of the device is omitted.
- Also in the image generation method of the present Embodiment 2-2, a camera set that meets conditions specified by the observer may be set dynamically while the processing for generating the image to be displayed is performed by program processing. In that case, the observer inputs conditions such as the distribution of the correlation degrees Q or a threshold value from the image condition input means, and if the process of step 306 is performed while searching for a camera set that meets those conditions, it is considered that a three-dimensional image close to the image desired by the observer can be displayed.
- Further, the image to be acquired may be either a color image or a black-and-white image; when black-and-white images are used, the luminance information (Y) is used as the information corresponding to the color information.
- As described above, the image generation methods of Embodiments 2-1 and 2-2 set a plurality of projection planes when acquiring the three-dimensional shape of the subject, and give, to each of the projection points on the projection planes that overlap when viewed from the reference viewpoint, the probability that the surface of the subject exists there.
- That is, instead of assuming, as in the past, that the surface of the subject lies on exactly one of the projection points overlapping as viewed from the reference viewpoint and obtaining an accurate three-dimensional shape of the subject, the three-dimensional shape of the subject is acquired on the assumption that the surface of the subject exists on each of the projection points with its existence probability.
- At this time, an evaluation reference value may be calculated from the degree of correlation of each projection point, and the existence probability may be determined based on a distribution function of the existence probability obtained by statistically processing the evaluation reference values. When the existence probability is determined through such statistical processing, a decrease in the reliability of the existence probability due to noise on the acquired images can be prevented.
- the third embodiment is an embodiment mainly corresponding to claims 22 to 29.
- In the image generation method of the third embodiment, the three-dimensional shape of the subject shown in the images is acquired based on a plurality of images (multi-focus images) taken from one viewpoint while changing the focusing distance, and an image of the subject viewed from an arbitrary viewpoint (virtual viewpoint) is generated. The feature of the present embodiment is that a plurality of images taken from one viewpoint at different focusing distances are used. As in the preceding embodiments, the three-dimensional shape of the subject is represented by a multi-layer plane using a texture mapping technique.
- components having the same function are denoted by the same reference numerals.
- FIGS. 43 to 51 are schematic diagrams for explaining the principle of the image generation method according to the present embodiment.
- FIGS. 43 and 44 show examples of setting a projection plane and a reference viewpoint.
- FIG. 46 is a diagram for explaining a method of determining the color information and the degree of focus of the projection point, and FIGS. 47 and 48 are diagrams for explaining a method of determining the existence probability of the projection point
- FIGS. 49 and 50 are diagrams illustrating a problem in the image generation method of the present embodiment, and FIG. 51 is a diagram illustrating a method of solving that problem.
- In the image generation method of the present embodiment, as described above, the three-dimensional shape of the subject shown in the images is obtained based on a plurality of images (multi-focus images) taken from one viewpoint at different focusing distances, and an image of the subject viewed from an arbitrary viewpoint (virtual viewpoint) is generated. At this time, the three-dimensional shape of the subject is represented by a multilayer plane using a texture mapping technique.
- In the conventional model acquisition method, it is considered that the surface of the subject exists at exactly one of the projection points T. The projection point at which the surface of the subject is located is determined, for example, from the degree of focus of each projection point T. Therefore, first, the color information K and the degree of focus Q of each of the projection points T overlapping as viewed from the reference viewpoint R are determined.
- At this time, the color information K of the projection point T is, for example, the average value of the color information κ of the corresponding points G, or the color information κ of the spatially corresponding point G.
- the degree of focus of the projection point T is determined by the sharpness or blurring of the image at a point or a minute area on the image.
- Literature 8: A. P. Pentland: "A New Sense for Depth of Field," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. PAMI-9, No. 4, pp. 523-531 (1987).
- the focus degree Q is obtained, for example, by comparing the magnitude of the local spatial frequency of each of the corresponding points G.
- The Depth from Focus theory or Depth from Defocus theory is a method of analyzing a plurality of images having different focusing distances and measuring the surface shape of the subject. For example, the surface of the subject can be estimated to be at the distance corresponding to the focusing distance of the image having the highest local spatial frequency among the images taken with the focusing distance changed. Therefore, the degree of focus Q of the projection point T is calculated using, for example, a local spatial frequency evaluation function such as that represented by the following Equation 65.
- In Equation 65, f is the gray value of each pixel, D is a constant for normalization, and (-Lc, -Lr)-(Lc, Lr) and (xi, yi)-(xf, yf) are the small regions over which variance evaluation and smoothing are performed, respectively.
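The variance-based evaluation that Equation 65 formalizes can be sketched in plain Python; the window sizes and the normalization constant below are illustrative assumptions, not the patent's exact parameters:

```python
def focus_degree(gray, cx, cy, lc=2, lr=2, d=255.0):
    """Degree of focus Q at pixel (cx, cy): the variance of the gray
    values f in the (2*lc+1) x (2*lr+1) window around the pixel,
    normalized by the constant d. A sharply focused region has a
    high local variance (high local spatial frequency)."""
    window = [gray[cy + dy][cx + dx]
              for dy in range(-lr, lr + 1)
              for dx in range(-lc, lc + 1)]
    mean = sum(window) / len(window)
    return sum((f - mean) ** 2 for f in window) / (len(window) * d)

# A sharp edge scores higher than a uniform region.
sharp = [[0, 0, 0, 255, 255, 255]] * 6
flat = [[128] * 6] * 6
assert focus_degree(sharp, 2, 2) > focus_degree(flat, 2, 2)
```

In practice the per-pixel variances would additionally be smoothed over the region (xi, yi)-(xf, yf), which is omitted here for brevity.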
- Such processing is performed for all the projection points T overlapping when viewed from the reference viewpoint R, and, as shown in FIG. 46, the color information K and the degree of focus Q of each projection point T are determined. The distance at which the surface of the subject exists is then estimated from the magnitude of the degree of focus Q of each projection point T.
- At this time, suppose that, as shown for example in FIG. 47 (a), the degrees of focus Q of two of the overlapping projection points T and T* viewed from the reference viewpoint R are only slightly higher than the degrees of focus Q of the other projection points T. In this case, the surface of the subject is estimated to exist at either projection point T or T*, but since the degree of focus Q of neither projection point is a characteristically large value, whichever projection point is selected, the reliability of the selection is low, and an incorrect projection point may be selected. If the estimation (selection) of the projection point at which the surface of the subject exists is incorrect, large noise appears in the generated image.
- Therefore, in the image generation method of the present invention, the distance of the subject surface is not restricted to any one point, that is, to any single one of the overlapping projection points T viewed from the reference viewpoint R. Instead, as shown in FIG. 48, each projection point T is given an existence probability β corresponding to the magnitude of its degree of focus Q.
- The existence probability β must satisfy the conditions of Equations 66 and 67 below over the set of existence probabilities of all the projection points T overlapping when viewed from the reference viewpoint R; that is, each existence probability takes a value between 0 and 1, and the existence probabilities on each line of sight sum to 1.
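One simple assignment consistent with these constraints is to normalize the focus degrees of a projection point sequence; this is a sketch of one plausible mapping, not the patent's specific formula:

```python
def existence_probabilities(focus_degrees):
    """Convert the focus degrees Q of the projection points on one
    line of sight into existence probabilities beta satisfying the
    constraints of Equations 66 and 67: 0 <= beta <= 1, sum == 1."""
    total = sum(focus_degrees)
    if total == 0:  # no focus cue at all: fall back to uniform
        return [1.0 / len(focus_degrees)] * len(focus_degrees)
    return [q / total for q in focus_degrees]

betas = existence_probabilities([0.1, 0.7, 0.2])
assert abs(sum(betas) - 1.0) < 1e-9
assert abs(max(betas) - 0.7) < 1e-9
```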
- By performing this process of determining the existence probability of each projection point T for lines of sight in all directions from the reference viewpoint, the three-dimensional shape of the subject can be obtained.
- When generating the image, the virtual viewpoint P is set in the space in which the projection planes L are set, and the color information is determined for each point on the generated image.
- The color information K of a point A on the generated image is obtained from the color information K and the existence probabilities β of the projection points T overlapping the point A as viewed from the virtual viewpoint P, for example, by mixing the color information of the projection points at a ratio according to their existence probabilities.
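The mixing just described, in which the colors of the projection points overlapping point A are weighted by their existence probabilities, can be sketched as:

```python
def mix_color(colors, betas):
    """Color information of a point A on the generated image: the
    colors K of the projection points T overlapping A as seen from
    the virtual viewpoint P, mixed in proportion to their existence
    probabilities beta."""
    return tuple(sum(b * c[i] for b, c in zip(betas, colors))
                 for i in range(3))

# Two equally probable projection points blend half-and-half.
assert mix_color([(255, 0, 0), (0, 0, 255)], [0.5, 0.5]) == (127.5, 0.0, 127.5)
```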
- The image generation method of the present invention can thus be implemented simply by texture mapping, a basic technique of computer graphics, so the processing can be handled by the general-purpose 3D graphics hardware installed in an ordinary personal computer, which lightens the load on the computer.
- The degree of focus Q is calculated, and the existence probability determined, for the projection points T overlapping when viewed from a particular viewpoint such as the reference viewpoint. Therefore, depending on the shape of the subject and the positional relationship between the reference viewpoint and the virtual viewpoint, two or more projection points having a very high existence probability may be included among the plurality of projection points overlapping when viewed from the virtual viewpoint P. In such a case, if the color information of these projection points is mixed at a ratio according to the existence probabilities, the color information of the point on the generated image may exceed the range of valid color information.
- Suppose that the color information at the projection points T1, T2, T'1, T'2 is K1, K2, K'1, K'2, respectively, and that the existence probabilities of the subject at those points are β1, β2, β'1, β'2.
- The existence probabilities β1, β2 and β'1, β'2 of the subject are each determined on a straight line passing through the reference viewpoint R.
- Suppose that the surface of the subject Obj is located near the projection points T'1 and T2.
- Then the existence probabilities at the projection points T'1 and T2 are higher than those at the projection points T1 and T'2.
- The color information KA at the point A on the image plane of the virtual viewpoint P is calculated by Equation 69 from the projection points T'1 and T2 overlapping the point A on the image plane as viewed from the virtual viewpoint P:
- KA = β'1 K'1 + β2 K2
- Using Equations 70 and 71, Equation 72 can be approximated as Equation 73 below.
- In that case, a clipping process is required so that the color information falls within the valid range.
- Therefore, in the image generation method of the present invention, a transparency having a plurality of gradations from transparent to opaque is set for each projection point based on the existence probability of that projection point.
- The mixing process for obtaining the color information of each point on the generated image is performed sequentially, from the projection point farthest from the viewpoint of the generated image toward the projection point closest to it.
- The color information obtained by the mixing process up to a certain projection point is the interior division, at a ratio according to the transparency, of the color information at that projection point and the color information obtained by the mixing process up to the preceding projection point.
- In other words, the color information obtained at each stage of the mixing process is an interior division point between the color information accumulated up to that stage and the next color information.
- Consider projection planes Lj (j = 1, 2, ...) set in the space.
- For each projection point T, a vector K whose components are red (R), green (G), and blue (B) is set to represent the color information of that projection point.
- Suppose that the color space V is represented by Expression 74 below.
- The transparency α of each projection point T is set so as to satisfy the condition of Expression 75 below.
- From the relationship between Expressions 75 and 76, the color information D is an interior division point in the color space V between the color vector K and the previously accumulated color information D. Therefore, the color information D is as shown in FIG.
- Consequently, if the color information K and the transparency α of each projection point T are set so as to satisfy Expressions 74 and 75, the color information D of the point A of the generated image is always contained within the valid color space V.
- Suppose that the transparency of each of the projection points T1, T2, T'1, T'2 is set as given by Equations 79 and 80 below.
- The mixing process is performed sequentially from the projection point farthest from the virtual viewpoint toward the closest one, and the color information obtained by the mixing process up to a certain projection point is the interior division, at a ratio according to the transparency, of the color information at that projection point and the color information obtained by the mixing process up to the preceding projection points.
- Then, the color information D of the point A of the image viewed from the virtual viewpoint P is given by Equation 81 below.
- From Expressions 70, 71, 79, and 80, Expression 81 becomes Expression 82 below, which is a good approximation of the original color information.
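A sketch of this far-to-near mixing in Python: the conversion from existence probabilities β to transparencies α below is one assignment that makes the effective weight of each layer equal its β (the exact forms of Equations 75-82 are not reproduced here), and the compositing loop performs the interior division described above.

```python
def betas_to_alphas(betas_far_to_near):
    """Set a transparency alpha for each projection point so that,
    after far-to-near compositing, the effective weight of each
    layer equals its existence probability beta. The nearest layer
    keeps alpha == beta; farther layers are boosted to compensate
    for attenuation by the nearer ones."""
    alphas = []
    for i, b in enumerate(betas_far_to_near):
        visible = 1.0 - sum(betas_far_to_near[i + 1:])  # weight not taken by nearer layers
        alphas.append(b / visible if visible > 1e-12 else 1.0)
    return alphas

def composite(colors_far_to_near, alphas):
    """Sequential mixing from the farthest projection point to the
    nearest: each step is the interior division of the accumulated
    color and the current color at a ratio given by the transparency."""
    d = (0.0, 0.0, 0.0)
    for k, a in zip(colors_far_to_near, alphas):
        d = tuple(a * k[i] + (1.0 - a) * d[i] for i in range(3))
    return d

betas = [0.2, 0.5, 0.3]
d = composite([(255, 0, 0), (0, 255, 0), (0, 0, 255)], betas_to_alphas(betas))
# Each layer contributes exactly beta * color, and values stay in range.
assert all(abs(v - w) < 1e-6 for v, w in zip(d, (51.0, 127.5, 76.5)))
```

Because each intermediate result is an interior division of two valid colors, no clipping is needed, which is the advantage the text describes.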
- FIGS. 52 and 53 are schematic diagrams for explaining a mathematical model of the image generation method of the present invention.
- FIG. 52 is a diagram showing a relationship between projection points, corresponding points, and points on an image to be generated.
- FIG. 53 is a diagram for explaining a method of converting points in space and pixels on an image.
- The color information or luminance information of a point on the image viewed from the virtual viewpoint is obtained by perspective projection transformation.
- The matrix that projects a point in space to the point (x, y) on the generated image is given as a 3-by-4 matrix.
- The projection matrix and the matrix representing the perspective projection transformation of focal length f centered at the origin are as described in the first embodiment and the like.
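As a sketch of the perspective projection by a 3-by-4 matrix (the focal length and identity camera pose below are illustrative assumptions, not the embodiment's actual calibration):

```python
def project(point3d, f=1.0):
    """Project a 3D point to image coordinates (x, y) with the 3-by-4
    perspective projection matrix [diag(f, f, 1) | 0] of focal
    length f about the origin, using homogeneous coordinates."""
    p = [[f, 0.0, 0.0, 0.0],
         [0.0, f, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0]]
    xh = (point3d[0], point3d[1], point3d[2], 1.0)  # homogeneous point
    u, v, w = (sum(p[r][c] * xh[c] for c in range(4)) for r in range(3))
    return (u / w, v / w)  # dehomogenize

# A point at depth 2 with f = 1 projects to half its lateral offset.
assert project((2.0, 4.0, 2.0)) == (1.0, 2.0)
```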
- FIGS. 54 to 57 are schematic diagrams for explaining an image generation method according to the embodiment 3-1 according to the present invention.
- FIG. 54 is a flowchart showing the image generation procedure;
- FIG. 55 is a flowchart for explaining the method of step 103;
- FIG. 56 is a flowchart showing a specific example of the process of step 10305 in FIG. 54;
- FIG. 57 is a diagram for explaining the rendering method.
- The image generation method according to the embodiment 3-1 generates an image using the principle described above. As shown in FIG. 54, it comprises step 101 of acquiring a plurality of images having different focusing distances, step 102 of setting the viewpoint (virtual viewpoint) of the observer, step 103 of acquiring the three-dimensional shape of the subject based on the acquired images, and step 104 of generating an image of the acquired three-dimensional shape viewed from the virtual viewpoint.
- The step 103 includes step 10301 of setting projection planes of a multilayer structure, step 10302 of determining the reference viewpoint for obtaining the three-dimensional shape of the subject, step 10303 of setting projection point sequences, corresponding points, and the like, step 10304 of securing the area for storing the texture arrays, that is, the color information and existence probabilities of the projection points, and step 10305 of determining the color information and existence probability of each projection point.
- In the image generation method of the embodiment 3-1, as shown for example in FIG. 54, first a plurality of images obtained by photographing the subject while changing the focusing distance are acquired (step 101). The acquired images may be color images or black-and-white images. In the embodiment 3-1, the description assumes that color images are acquired in which each point (pixel) on the image is represented by color information using the three primary colors of red (R), green (G), and blue (B).
- Next, the position (virtual viewpoint) from which the observer views the subject is set (step 102).
- Next, the three-dimensional shape of the subject is obtained using the acquired images of the subject (step 103). When the three-dimensional shape of the subject has been acquired, an image of the subject viewed from the virtual viewpoint is generated (step 104).
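Steps 101-104 can be condensed into a toy sketch for a single line of sight; the sharpness measure and data layout here are illustrative stand-ins, not the patent's actual processing:

```python
def render_pixel(multifocus_windows, layer_colors):
    """Toy sketch of steps 101-104 for one line of sight: measure a
    crude sharpness per focal layer (step 103), normalize into
    existence probabilities, and mix the layer colors at that ratio
    to obtain the pixel of the virtual-viewpoint image (step 104)."""
    qs = [max(w) - min(w) for w in multifocus_windows]  # crude contrast
    total = sum(qs) or 1.0
    betas = [q / total for q in qs]
    return tuple(sum(b * c[i] for b, c in zip(betas, layer_colors))
                 for i in range(3))

# The sharply focused first layer (blue) dominates the blurred second.
px = render_pixel([[0, 255, 0], [100, 101, 100]], [(0, 0, 255), (255, 0, 0)])
assert px[2] > px[0]
```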
- In the processing of step 103, first, projection planes Lj (j ∈ J ≡ {1, 2, ..., M}) of a multilayer structure are set (step 10301).
- As the projection planes L, for example, projection planes having a planar shape are set in parallel, as shown in FIG.
- It is desirable that the projection planes be installed so that their positions coincide with the focusing distances of the images acquired in step 101, as shown for example in FIG. 44.
- Next, a viewpoint (reference viewpoint) R serving as the reference when obtaining the probability that the surface of the subject exists at each projection point is determined (step 10302).
- the reference viewpoint R may be the same point as the virtual viewpoint P, or may be a different point.
- the position of the center of gravity may be used.
- Next, projection point sequences, each consisting of the set of projection points on a straight line passing through the reference viewpoint R, the points on the images corresponding to each projection point (corresponding points), and the like are set (step 10303).
- A projection point sequence is defined, for example as shown in FIG. 13, as the collection of intersections (projection points) between a straight line passing through the reference viewpoint R and the projection planes.
- an array for holding an image to be texture-mapped on each projection plane is secured, for example, on a memory of the device that generates the image (step 10304).
- The secured array holds, as texture information corresponding to the positions of the projection points, for example 8 bits each of color information (R, G, B) and existence probability information per pixel.
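The texture array secured in step 10304 (per-pixel R, G, B plus an existence probability channel, 8 bits each) could be allocated as follows; the array sizes are illustrative:

```python
def make_texture_array(num_planes, width, height):
    """Secure one texture per projection plane: each texel holds
    8-bit color (R, G, B) plus an 8-bit existence probability
    channel, all initialized to zero."""
    return [[[[0, 0, 0, 0] for _ in range(width)]
             for _ in range(height)]
            for _ in range(num_planes)]

tex = make_texture_array(4, 8, 8)  # 4 projection planes, 8x8 texels
assert len(tex) == 4 and len(tex[0][0][0]) == 4
```

On actual 3D graphics hardware this would simply be an RGBA texture per projection plane, with the alpha channel reused for the existence probability.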
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2005505046A JP4052331B2 (ja) | 2003-06-20 | 2004-06-18 | 仮想視点画像生成方法及び3次元画像表示方法並びに装置 |
US10/561,344 US7538774B2 (en) | 2003-06-20 | 2004-06-18 | Virtual visual point image generating method and 3-d image display method and device |
EP04746140.5A EP1646012B1 (en) | 2003-06-20 | 2004-06-18 | Virtual visual point image generating method and 3-d image display method and device |
Applications Claiming Priority (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003176778 | 2003-06-20 | ||
JP2003-176778 | 2003-06-20 | ||
JP2004-016559 | 2004-01-26 | ||
JP2004-016831 | 2004-01-26 | ||
JP2004016832 | 2004-01-26 | ||
JP2004016559 | 2004-01-26 | ||
JP2004016551 | 2004-01-26 | ||
JP2004-016832 | 2004-01-26 | ||
JP2004016831 | 2004-01-26 | ||
JP2004-016551 | 2004-01-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004114224A1 true WO2004114224A1 (ja) | 2004-12-29 |
Family
ID=33545604
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/008638 WO2004114224A1 (ja) | 2003-06-20 | 2004-06-18 | 仮想視点画像生成方法及び3次元画像表示方法並びに装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US7538774B2 (ja) |
EP (1) | EP1646012B1 (ja) |
JP (1) | JP4052331B2 (ja) |
KR (1) | KR100779634B1 (ja) |
WO (1) | WO2004114224A1 (ja) |
Families Citing this family (86)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7375854B2 (en) * | 2004-03-12 | 2008-05-20 | Vastview Technology, Inc. | Method for color correction |
EP1607064B1 (en) | 2004-06-17 | 2008-09-03 | Cadent Ltd. | Method and apparatus for colour imaging a three-dimensional structure |
JP4328311B2 (ja) * | 2005-04-14 | 2009-09-09 | 株式会社東芝 | 三次元画像表示用多視点画像の作成方法およびプログラム |
US8194168B2 (en) * | 2005-06-03 | 2012-06-05 | Mediapod Llc | Multi-dimensional imaging system and method |
JP4488996B2 (ja) * | 2005-09-29 | 2010-06-23 | 株式会社東芝 | 多視点画像作成装置、多視点画像作成方法および多視点画像作成プログラム |
KR101195942B1 (ko) * | 2006-03-20 | 2012-10-29 | 삼성전자주식회사 | 카메라 보정 방법 및 이를 이용한 3차원 물체 재구성 방법 |
US7768527B2 (en) * | 2006-05-31 | 2010-08-03 | Beihang University | Hardware-in-the-loop simulation system and method for computer vision |
KR100811954B1 (ko) * | 2006-07-13 | 2008-03-10 | 현대자동차주식회사 | 물체상 표시방법 |
US7893945B2 (en) * | 2006-08-21 | 2011-02-22 | Texas Instruments Incorporated | Color mapping techniques for color imaging devices |
JP4926826B2 (ja) * | 2007-05-25 | 2012-05-09 | キヤノン株式会社 | 情報処理方法および情報処理装置 |
EP2076055B1 (en) * | 2007-12-27 | 2012-10-24 | Saab AB | Method for displaying a virtual image |
JP4694581B2 (ja) * | 2008-01-25 | 2011-06-08 | 富士フイルム株式会社 | ファイル生成装置および方法、3次元形状再生装置および方法並びにプログラム |
DE102008020579B4 (de) * | 2008-04-24 | 2014-07-31 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Verfahren zur automatischen Objektlageerkennung und Bewegung einer Vorrichtung relativ zu einem Objekt |
KR100957129B1 (ko) * | 2008-06-12 | 2010-05-11 | 성영석 | 영상 변환 방법 및 장치 |
KR100943218B1 (ko) * | 2008-06-30 | 2010-02-18 | 한국 한의학 연구원 | 컬러 교정을 이용한 3차원 모델 생성 방법 |
WO2010038409A1 (ja) * | 2008-09-30 | 2010-04-08 | パナソニック株式会社 | 再生装置、記録媒体、及び集積回路 |
US8479015B2 (en) * | 2008-10-17 | 2013-07-02 | Oracle International Corporation | Virtual image management |
KR20100051359A (ko) * | 2008-11-07 | 2010-05-17 | 삼성전자주식회사 | 영상 데이터 생성 방법 및 장치 |
JP2010122879A (ja) * | 2008-11-19 | 2010-06-03 | Sony Ericsson Mobile Communications Ab | 端末装置、表示制御方法および表示制御プログラム |
US8698799B2 (en) * | 2009-01-20 | 2014-04-15 | Adobe Systems Incorporated | Method and apparatus for rendering graphics using soft occlusion |
JP5249114B2 (ja) * | 2009-04-03 | 2013-07-31 | Kddi株式会社 | 画像生成装置、方法及びプログラム |
FR2945491A1 (fr) * | 2009-05-18 | 2010-11-19 | Peugeot Citroen Automobiles Sa | Procede et dispositif pour etendre une zone de visibilite |
WO2011105337A1 (ja) * | 2010-02-24 | 2011-09-01 | 日本電信電話株式会社 | 多視点映像符号化方法、多視点映像復号方法、多視点映像符号化装置、多視点映像復号装置、及びプログラム |
JP4712898B1 (ja) * | 2010-03-05 | 2011-06-29 | 健治 吉田 | 中間画像生成方法、中間画像ファイル、中間画像生成装置、立体画像生成方法、立体画像生成装置、裸眼立体画像表示装置、立体画像生成システム |
US9224240B2 (en) * | 2010-11-23 | 2015-12-29 | Siemens Medical Solutions Usa, Inc. | Depth-based information layering in medical diagnostic ultrasound |
JP5858381B2 (ja) * | 2010-12-03 | 2016-02-10 | 国立大学法人名古屋大学 | 多視点画像合成方法及び多視点画像合成システム |
JP2012134885A (ja) * | 2010-12-22 | 2012-07-12 | Sony Corp | 画像処理装置及び画像処理方法 |
KR101852811B1 (ko) * | 2011-01-05 | 2018-04-27 | 엘지전자 주식회사 | 영상표시 장치 및 그 제어방법 |
KR101789071B1 (ko) * | 2011-01-13 | 2017-10-24 | 삼성전자주식회사 | 깊이 영상의 특징 추출 방법 및 장치 |
JPWO2012147363A1 (ja) * | 2011-04-28 | 2014-07-28 | パナソニック株式会社 | 画像生成装置 |
CN102271268B (zh) * | 2011-08-09 | 2013-04-10 | 清华大学 | 多视点立体视频的深度序列生成方法和装置 |
JP6021489B2 (ja) * | 2011-10-03 | 2016-11-09 | キヤノン株式会社 | 撮像装置、画像処理装置およびその方法 |
US9007373B2 (en) | 2011-10-12 | 2015-04-14 | Yale University | Systems and methods for creating texture exemplars |
US9799136B2 (en) | 2011-12-21 | 2017-10-24 | Twentieth Century Fox Film Corporation | System, method and apparatus for rapid film pre-visualization |
AU2012359062A1 (en) * | 2011-12-21 | 2014-07-10 | Twentieth Century Fox Film Corporation | System, method and apparatus for rapid film pre-visualization |
CN104010560A (zh) * | 2011-12-21 | 2014-08-27 | 皇家飞利浦有限公司 | 来自体积模态的结构到未校准内窥镜的视频上的叠加与运动补偿 |
US11232626B2 (en) | 2011-12-21 | 2022-01-25 | Twenieth Century Fox Film Corporation | System, method and apparatus for media pre-visualization |
JP5777507B2 (ja) * | 2011-12-27 | 2015-09-09 | キヤノン株式会社 | 情報処理装置、情報処理方法、及びそのプログラム |
US8692840B2 (en) * | 2012-02-05 | 2014-04-08 | Mitsubishi Electric Research Laboratories, Inc. | Method for modeling and estimating rendering errors in virtual images |
JP6017795B2 (ja) * | 2012-02-10 | 2016-11-02 | 任天堂株式会社 | ゲームプログラム、ゲーム装置、ゲームシステム、およびゲーム画像生成方法 |
US9354748B2 (en) | 2012-02-13 | 2016-05-31 | Microsoft Technology Licensing, Llc | Optical stylus interaction |
US9153062B2 (en) * | 2012-02-29 | 2015-10-06 | Yale University | Systems and methods for sketching and imaging |
US9460029B2 (en) | 2012-03-02 | 2016-10-04 | Microsoft Technology Licensing, Llc | Pressure sensitive keys |
US9075566B2 (en) | 2012-03-02 | 2015-07-07 | Microsoft Technoogy Licensing, LLC | Flexible hinge spine |
US9149309B2 (en) | 2012-03-23 | 2015-10-06 | Yale University | Systems and methods for sketching designs in context |
JP2015511520A (ja) | 2012-03-26 | 2015-04-20 | コーニンクレッカ フィリップス エヌ ヴェ | X線焦点スポット運動の直接制御 |
US20130300590A1 (en) | 2012-05-14 | 2013-11-14 | Paul Henry Dietz | Audio Feedback |
US9256089B2 (en) | 2012-06-15 | 2016-02-09 | Microsoft Technology Licensing, Llc | Object-detecting backlight unit |
WO2014014928A2 (en) * | 2012-07-18 | 2014-01-23 | Yale University | Systems and methods for three-dimensional sketching and imaging |
US20140063198A1 (en) * | 2012-08-30 | 2014-03-06 | Microsoft Corporation | Changing perspectives of a microscopic-image device based on a viewer' s perspective |
JP6201379B2 (ja) * | 2013-04-02 | 2017-09-27 | 富士通株式会社 | 位置算出システム、位置算出プログラム、および、位置算出方法 |
JP2014230251A (ja) * | 2013-05-27 | 2014-12-08 | ソニー株式会社 | 画像処理装置、および画像処理方法 |
KR101980275B1 (ko) | 2014-04-14 | 2019-05-20 | 삼성전자주식회사 | 다시점 영상 디스플레이 장치 및 디스플레이 방법 |
CN103974049B (zh) * | 2014-04-28 | 2015-12-02 | 京东方科技集团股份有限公司 | 一种穿戴式投影装置及投影方法 |
EP3101892A4 (en) | 2014-05-21 | 2017-07-26 | Sony Corporation | Image processing apparatus and method |
JP2016019194A (ja) * | 2014-07-09 | 2016-02-01 | 株式会社東芝 | 画像処理装置、画像処理方法、および画像投影装置 |
US9675430B2 (en) | 2014-08-15 | 2017-06-13 | Align Technology, Inc. | Confocal imaging apparatus with curved focal surface |
US10033992B1 (en) * | 2014-09-09 | 2018-07-24 | Google Llc | Generating a 3D video of an event using crowd sourced data |
JP6528540B2 (ja) * | 2015-05-28 | 2019-06-12 | カシオ計算機株式会社 | 画像処理装置、画像処理方法及びプログラム |
JP6597041B2 (ja) * | 2015-08-18 | 2019-10-30 | 富士ゼロックス株式会社 | サーバー装置及び情報処理システム |
KR102459850B1 (ko) * | 2015-12-03 | 2022-10-27 | 삼성전자주식회사 | 3d 이미지 처리 방법 및 장치, 및 그래픽 처리 장치 |
US11222461B2 (en) | 2016-03-25 | 2022-01-11 | Outward, Inc. | Arbitrary view generation |
US11232627B2 (en) | 2016-03-25 | 2022-01-25 | Outward, Inc. | Arbitrary view generation |
US10163249B2 (en) | 2016-03-25 | 2018-12-25 | Outward, Inc. | Arbitrary view generation |
US9996914B2 (en) | 2016-03-25 | 2018-06-12 | Outward, Inc. | Arbitrary view generation |
US10163251B2 (en) | 2016-03-25 | 2018-12-25 | Outward, Inc. | Arbitrary view generation |
US10163250B2 (en) | 2016-03-25 | 2018-12-25 | Outward, Inc. | Arbitrary view generation |
US10559095B2 (en) * | 2016-08-31 | 2020-02-11 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and medium |
JP2018036898A (ja) * | 2016-08-31 | 2018-03-08 | キヤノン株式会社 | 画像処理装置及びその制御方法 |
JP6434947B2 (ja) * | 2016-09-30 | 2018-12-05 | キヤノン株式会社 | 撮像システム、画像処理装置、画像処理方法、及びプログラム |
JP6419128B2 (ja) | 2016-10-28 | 2018-11-07 | キヤノン株式会社 | 画像処理装置、画像処理システム、画像処理方法及びプログラム |
KR102534875B1 (ko) * | 2016-12-08 | 2023-05-22 | 한국전자통신연구원 | 카메라 어레이와 다중 초점 영상을 이용하여 임의 시점의 영상을 생성하는 방법 및 장치 |
KR102139106B1 (ko) * | 2016-12-16 | 2020-07-29 | 주식회사 케이티 | 타임 슬라이스 영상을 제공하는 장치 및 사용자 단말 |
EP3565259A1 (en) * | 2016-12-28 | 2019-11-06 | Panasonic Intellectual Property Corporation of America | Three-dimensional model distribution method, three-dimensional model receiving method, three-dimensional model distribution device, and three-dimensional model receiving device |
JP6871801B2 (ja) * | 2017-05-11 | 2021-05-12 | キヤノン株式会社 | 画像処理装置、画像処理方法、情報処理装置、撮像装置および画像処理システム |
JP2019135617A (ja) * | 2018-02-05 | 2019-08-15 | キヤノン株式会社 | 情報処理装置およびその制御方法、画像処理システム |
JP7140517B2 (ja) * | 2018-03-09 | 2022-09-21 | キヤノン株式会社 | 生成装置、生成装置が行う生成方法、及びプログラム |
JP7062506B2 (ja) * | 2018-05-02 | 2022-05-06 | キヤノン株式会社 | 画像処理装置、画像処理方法、およびプログラム |
JP7123736B2 (ja) * | 2018-10-23 | 2022-08-23 | キヤノン株式会社 | 画像処理装置、画像処理方法、およびプログラム |
JP2020134973A (ja) * | 2019-02-12 | 2020-08-31 | キヤノン株式会社 | 素材生成装置、画像生成装置および画像処理装置 |
CN111311731B (zh) * | 2020-01-23 | 2023-04-07 | 深圳市易尚展示股份有限公司 | 基于数字投影的随机灰度图生成方法、装置和计算机设备 |
JPWO2021149509A1 (ja) * | 2020-01-23 | 2021-07-29 | ||
JP2022102698A (ja) * | 2020-12-25 | 2022-07-07 | 武漢天馬微電子有限公司 | 立体画像表示装置 |
KR102558294B1 (ko) | 2020-12-31 | 2023-07-24 | 한국과학기술연구원 | 임의 시점 영상 생성 기술을 이용한 다이나믹 영상 촬영 장치 및 방법 |
CN113096254B (zh) * | 2021-04-23 | 2023-09-22 | 北京百度网讯科技有限公司 | 目标物渲染方法及装置、计算机设备和介质 |
KR102589623B1 (ko) * | 2022-01-27 | 2023-10-16 | 계명대학교 산학협력단 | 360 캠을 이용한 스테레오 영상 생성 장치 및 방법 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2923063B2 (ja) | 1991-02-19 | 1999-07-26 | 日本電信電話株式会社 | 多視点ステレオ画像計測方法 |
US6014163A (en) | 1997-06-09 | 2000-01-11 | Evans & Sutherland Computer Corporation | Multi-camera virtual set system employing still store frame buffers for each camera |
JPH11351843A (ja) | 1998-06-10 | 1999-12-24 | Takeomi Suzuki | 3次元画像生成の装置および方法 |
JP3593466B2 (ja) | 1999-01-21 | 2004-11-24 | 日本電信電話株式会社 | 仮想視点画像生成方法およびその装置 |
JP3561446B2 (ja) | 1999-08-25 | 2004-09-02 | 日本電信電話株式会社 | 画像生成方法及びその装置 |
GB2354389A (en) | 1999-09-15 | 2001-03-21 | Sharp Kk | Stereo images with comfortable perceived depth |
US7065242B2 (en) * | 2000-03-28 | 2006-06-20 | Viewpoint Corporation | System and method of three-dimensional image capture and modeling |
US7085409B2 (en) * | 2000-10-18 | 2006-08-01 | Sarnoff Corporation | Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery |
US7006085B1 (en) * | 2000-10-30 | 2006-02-28 | Magic Earth, Inc. | System and method for analyzing and imaging three-dimensional volume data sets |
JP2003099800A (ja) | 2001-09-25 | 2003-04-04 | Nippon Telegr & Teleph Corp <Ntt> | 3次元画像情報生成方法および装置、ならびに3次元画像情報生成プログラムとこのプログラムを記録した記録媒体 |
KR100439756B1 (ko) * | 2002-01-09 | 2004-07-12 | 주식회사 인피니트테크놀로지 | 3차원 가상내시경 화면 표시장치 및 그 방법 |
JP2003163929A (ja) | 2002-08-23 | 2003-06-06 | Mitsubishi Electric Corp | 映像監視装置 |
US7800628B2 (en) * | 2006-06-16 | 2010-09-21 | Hewlett-Packard Development Company, L.P. | System and method for generating scale maps |
2004
- 2004-06-18 JP JP2005505046A patent/JP4052331B2/ja not_active Expired - Fee Related
- 2004-06-18 KR KR1020057024444A patent/KR100779634B1/ko active IP Right Grant
- 2004-06-18 EP EP04746140.5A patent/EP1646012B1/en active Active
- 2004-06-18 WO PCT/JP2004/008638 patent/WO2004114224A1/ja active Application Filing
- 2004-06-18 US US10/561,344 patent/US7538774B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6466207B1 (en) | 1998-03-18 | 2002-10-15 | Microsoft Corporation | Real-time image rendering with layered depth images |
JP3022558B1 (ja) | 1998-05-21 | 2000-03-21 | 日本電信電話株式会社 | 三次元表示方法及び装置 |
JP2000350237A (ja) * | 1999-06-08 | 2000-12-15 | Nippon Telegr & Teleph Corp <Ntt> | 三次元表示方法及び装置 |
JP2001291116A (ja) * | 2000-04-11 | 2001-10-19 | Sony Corp | 三次元画像生成装置および三次元画像生成方法、並びにプログラム提供媒体 |
Non-Patent Citations (1)
Title |
---|
See also references of EP1646012A4 * |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006170993A (ja) * | 2004-12-10 | 2006-06-29 | Microsoft Corp | 非同期画像部分をマッチングすること |
JP2006352539A (ja) * | 2005-06-16 | 2006-12-28 | Sharp Corp | 広視野映像システム |
JP2009025225A (ja) * | 2007-07-23 | 2009-02-05 | Fujifilm Corp | 立体撮像装置および立体撮像装置の制御方法並びにプログラム |
US8259160B2 (en) | 2008-07-31 | 2012-09-04 | Kddi Corporation | Method for generating free viewpoint video image in three-dimensional movement and recording medium |
JP2010039501A (ja) * | 2008-07-31 | 2010-02-18 | Kddi Corp | 3次元移動の自由視点映像生成方法および記録媒体 |
JP2010140097A (ja) * | 2008-12-09 | 2010-06-24 | Nippon Telegr & Teleph Corp <Ntt> | 画像生成方法、画像認証方法、画像生成装置、画像認証装置、プログラム、および記録媒体 |
JP2010177741A (ja) * | 2009-01-27 | 2010-08-12 | Olympus Corp | 撮像装置 |
JP2010225092A (ja) * | 2009-03-25 | 2010-10-07 | Fujitsu Ltd | 画像処理装置、及び画像処理システム |
WO2011105297A1 (ja) * | 2010-02-23 | 2011-09-01 | 日本電信電話株式会社 | 動きベクトル推定方法、多視点映像符号化方法、多視点映像復号方法、動きベクトル推定装置、多視点映像符号化装置、多視点映像復号装置、動きベクトル推定プログラム、多視点映像符号化プログラム、及び多視点映像復号プログラム |
JP5237500B2 (ja) * | 2010-02-23 | 2013-07-17 | 日本電信電話株式会社 | 動きベクトル推定方法、多視点映像符号化方法、多視点映像復号方法、動きベクトル推定装置、多視点映像符号化装置、多視点映像復号装置、動きベクトル推定プログラム、多視点映像符号化プログラム、及び多視点映像復号プログラム |
JP2011210246A (ja) * | 2010-03-12 | 2011-10-20 | Mitsubishi Electric Research Laboratories Inc | ステレオ画像においてオクルージョンをハンドリングするための方法 |
JP2012173858A (ja) * | 2011-02-18 | 2012-09-10 | Nippon Telegr & Teleph Corp <Ntt> | 全方位画像生成方法、画像生成装置およびプログラム |
JP2012253706A (ja) * | 2011-06-07 | 2012-12-20 | Konica Minolta Holdings Inc | 画像処理装置、そのプログラム、画像処理システム、および画像処理方法 |
CN102271271A (zh) * | 2011-08-17 | 2011-12-07 | 清华大学 | 生成多视点视频的装置及方法 |
JP2014155069A (ja) * | 2013-02-08 | 2014-08-25 | Nippon Telegr & Teleph Corp <Ntt> | 画像生成装置、画像生成方法及びプログラム |
JP2015087851A (ja) * | 2013-10-29 | 2015-05-07 | 日本電信電話株式会社 | 画像処理装置及び画像処理プログラム |
JP2016126425A (ja) * | 2014-12-26 | 2016-07-11 | Kddi株式会社 | 自由視点画像生成装置、方法およびプログラム |
JP2018514031A (ja) * | 2015-05-13 | 2018-05-31 | グーグル エルエルシー | DeepStereo:実世界の画像から新たなビューを予測するための学習 |
JP2018042237A (ja) * | 2016-08-31 | 2018-03-15 | キヤノン株式会社 | 画像処理装置、画像処理方法、及びプログラム |
WO2018186279A1 (ja) * | 2017-04-04 | 2018-10-11 | シャープ株式会社 | 画像処理装置、画像処理プログラム、及び記録媒体 |
KR20190140396A (ko) * | 2018-06-11 | 2019-12-19 | 캐논 가부시끼가이샤 | 화상 처리장치, 방법 및 기억매체 |
US11227429B2 (en) | 2018-06-11 | 2022-01-18 | Canon Kabushiki Kaisha | Image processing apparatus, method and storage medium for generating a virtual viewpoint with reduced image data |
KR102412242B1 (ko) * | 2018-06-11 | 2022-06-23 | 캐논 가부시끼가이샤 | 화상 처리장치, 방법 및 기억매체 |
Also Published As
Publication number | Publication date |
---|---|
JP4052331B2 (ja) | 2008-02-27 |
KR20060029140A (ko) | 2006-04-04 |
JPWO2004114224A1 (ja) | 2006-07-20 |
KR100779634B1 (ko) | 2007-11-26 |
EP1646012A1 (en) | 2006-04-12 |
US7538774B2 (en) | 2009-05-26 |
EP1646012B1 (en) | 2016-04-13 |
US20070122027A1 (en) | 2007-05-31 |
EP1646012A4 (en) | 2012-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2004114224A1 (ja) | 仮想視点画像生成方法及び3次元画像表示方法並びに装置 | |
JP4828506B2 (ja) | 仮想視点画像生成装置、プログラムおよび記録媒体 | |
US11189043B2 (en) | Image reconstruction for virtual 3D | |
US20130335535A1 (en) | Digital 3d camera using periodic illumination | |
US20170374344A1 (en) | Discontinuity-aware reprojection | |
WO2008029345A1 (en) | Method for determining a depth map from images, device for determining a depth map | |
CN104395931A (zh) | 图像的深度图的生成 | |
CN101631257A (zh) | 一种实现二维视频码流立体播放的方法及装置 | |
Wei | Converting 2d to 3d: A survey | |
JP6585938B2 (ja) | 立体像奥行き変換装置およびそのプログラム | |
CN113763301B (zh) | 一种减小错切概率的三维图像合成方法和装置 | |
CN107534731B (zh) | 图像处理装置和图像处理方法 | |
CN100573595C (zh) | 虚拟视点图像生成方法和三维图像显示方法及装置 | |
US20220222842A1 (en) | Image reconstruction for virtual 3d | |
WO2022063260A1 (zh) | 一种渲染方法、装置及设备 | |
AU2004306226A1 (en) | Stereoscopic imaging | |
JP3629243B2 (ja) | モデリング時の距離成分を用いてレンダリング陰影処理を行う画像処理装置とその方法 | |
JP2001118074A (ja) | 3次元画像作成方法、3次元画像作成装置及びプログラム記録媒体 | |
KR100927234B1 (ko) | 깊이 정보 생성 방법, 그 장치 및 그 방법을 실행하는프로그램이 기록된 기록매체 | |
JP2018129026A (ja) | 決定装置、画像処理装置、決定方法及び決定プログラム | |
Galabov | A real time 2D to 3D image conversion techniques | |
Lechlek et al. | Interactive hdr image-based rendering from unstructured ldr photographs | |
Jeong et al. | Real‐Time Defocus Rendering With Level of Detail and Sub‐Sample Blur | |
US11615574B2 (en) | System and method for rendering 6 degree-of-freedom virtual reality | |
WO2023109582A1 (zh) | 处理光线数据的方法、装置、设备和存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 2005505046 Country of ref document: JP |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2004746140 Country of ref document: EP Ref document number: 2007122027 Country of ref document: US Ref document number: 2004817333X Country of ref document: CN Ref document number: 1020057024444 Country of ref document: KR Ref document number: 10561344 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 1020057024444 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 2004746140 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 10561344 Country of ref document: US |