WO2002069277A1 - Image display system and associated method - Google Patents

Image display system and associated method

Info

Publication number
WO2002069277A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image display
server
images
viewpoint
Prior art date
Application number
PCT/JP2001/008677
Other languages
English (en)
Japanese (ja)
Inventor
Mikio Terasawa
Original Assignee
Nabla Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nabla Inc. filed Critical Nabla Inc.
Publication of WO2002069277A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation

Definitions

  • the present invention relates to an image display system and method capable of displaying an image of an object three-dimensionally from an arbitrary viewpoint on a computer. Background technology
  • photographing a moving image of an object in advance and displaying the moving image data, and the like, exist as its main methods.
  • the surroundings, the whole view, and the like are photographed in advance using a video camera or the like, and the image (moving image data) is compressed by a known compression method such as MPEG (Moving Picture Experts Group).
  • the user's terminal (hereinafter referred to as a user terminal). That is, the user cannot browse images from viewpoints that have not been captured in advance.
  • the method in (2) has been devised.
  • a point in the 3D space where the object exists is set as the origin, and 3D shape data of the object is created by inputting the distance from the origin or the 3D coordinates (x-coordinate, y-coordinate, z-coordinate).
  • by calculating the distance, coordinates, and the like from the viewpoint specified by the user, projecting this onto a two-dimensional plane, and displaying it, it is possible to browse the image of the object from the viewpoint specified by the user.
  • using two-dimensional plane data of an object from two viewpoints, that is, plane data of a certain object from two directions (for example, photographs),
  • a method has been devised that automatically creates an interpolated image between the two viewpoints (an image created automatically to interpolate the viewpoint from the original images) from the two-dimensional plane data and displays it.
  • two or more sets of corresponding points in the two-dimensional plane data (hereinafter referred to as images)
  • feature lines representing the features of the image
  • the viewpoint can be moved two-dimensionally. That is, if the set of three viewpoints is set in all directions, the user can browse the image of the object from any desired viewpoint.
  • the invention according to claim 1 is an image display system that automatically creates an image of an object from a viewpoint desired by a user, wherein the image capturing means captures a two-dimensional image of the object from multiple viewpoints.
  • the invention of claim 33 is an image display method for automatically creating an interpolated image of an object from a viewpoint desired by a user in a user terminal capable of transmitting and receiving data with a server via a network,
  • wherein the user terminal automatically creates the interpolated image of the object from the viewpoint desired by the user by receiving the two-dimensional image from the server.
  • the invention according to claim 65 is an image display system that automatically creates the interpolated image on a server that automatically creates and displays an interpolated image of the object from a viewpoint desired by the user,
  • An image display comprising: an image capturing unit that captures a two-dimensional image of the object from a plurality of viewpoints; and an interpolated image creating unit that automatically creates an interpolated image of the object from the desired viewpoint.
  • the invention according to claim 94 is an image display method for automatically creating the interpolated image on a server that automatically creates and displays an interpolated image of a target object from a viewpoint desired by a user,
  • the server is an image display method for acquiring a two-dimensional image of the object from a plurality of viewpoints and automatically creating an interpolated image of the object from the desired viewpoint.
  • using a two-dimensional image captured in advance on a server, it is possible to automatically create an interpolated image of a desired viewpoint on the same server (here, "the same server" means logically the same terminal, not necessarily physically the same terminal; the same applies hereinafter).
  • the invention according to claim 2 is an image display system that automatically creates an image of an object from a viewpoint desired by a user, and automatically creates an interpolated image of the object from a viewpoint desired by the user.
  • an image display system having a server capable of transmitting and receiving data via a network with a user terminal having a means for creating an interpolated image
  • wherein the server comprises image capturing means for capturing a two-dimensional image of the object from a plurality of viewpoints, and
  • transmits the two-dimensional image to the user terminal; this is an image display system.
  • the invention according to claim 34 is a server capable of transmitting and receiving data to and from a user terminal via a network, wherein an interpolated image of an object from a viewpoint desired by the user is automatically processed at the user terminal.
  • An image display method to be created wherein the server captures a two-dimensional image of the object from a plurality of viewpoints, and transmits the two-dimensional image to the user terminal.
  • a two-dimensional image captured in advance on the server is transmitted to the user terminal, and an interpolated image of a viewpoint desired by the user can be automatically created on the user terminal.
  • the image capturing means is an image display system which defines common vertices and surfaces connecting the vertices with lines with respect to the set of captured two-dimensional images.
  • the invention according to claim 35, wherein the server captures a plurality of the two-dimensional images before transmitting the two-dimensional images to the user terminal,
  • is an image display method that defines common vertices and the surfaces connecting the vertices with lines for the set of captured two-dimensional images.
  • the invention according to claim 95 is the image display method, wherein the server defines a common vertex and a plane connecting the vertex with a line for the set of the captured two-dimensional images.
  • the invention according to claims 4 and 67 is characterized in that the image capturing means defines a common vertex and a plane connecting the vertex with a line for the captured set of two-dimensional images.
  • edge end points of the object are extracted, Delaunay triangle designated points are specified in the set of captured two-dimensional images, a Delaunay triangulation is generated in one two-dimensional image of the set, a deformation that satisfies the epipolar constraint is applied from the designated Delaunay triangles, and a linear search is performed on the deformed triangles;
  • this is an image display system that thereby automatically defines the common vertices and the surfaces connecting them with lines.
  • the server defines common vertices and surfaces connecting the vertices with lines for the set of captured two-dimensional images,
  • in which edge end points of the object are extracted, Delaunay triangle designated points are designated in the set of captured two-dimensional images, a Delaunay triangulation is generated in one of the two-dimensional images of the set, a deformation that satisfies the epipolar constraint is applied from the designated Delaunay triangles, and a linear search is performed on the deformed triangles;
  • this is an image display method that thereby automatically defines, for the set of two-dimensional images, the common vertices and the surfaces connected by lines.
  • the image capturing means assumes a virtual three-dimensional space coordinate system on the server, and is an image display system for setting the viewpoint of the two-dimensional image by associating the polar coordinates of a sphere centered on the origin of the virtual three-dimensional space coordinate system with the movement amount of an input device on the server.
  • when the server captures the two-dimensional image, a virtual three-dimensional space coordinate system is assumed on the server, and the viewpoint of the two-dimensional image is set by associating the polar coordinates of a sphere centered on its origin with the movement amount of an input device on the server.
  • the three-dimensional viewpoint setting when capturing two-dimensional images can thus be performed simply by moving a two-dimensional input device such as a mouse, which reduces the burden of capturing two-dimensional images.
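As an illustration of this viewpoint setting, the following minimal sketch maps an input-device movement amount onto the polar coordinates of a sphere centered on the origin of a virtual 3D coordinate system. The function name and the radius and sensitivity parameters are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def viewpoint_from_mouse(dx, dy, radius=5.0, sensitivity=0.01):
    """Map an input-device movement amount (dx, dy) onto the polar coordinates
    of a sphere centered on the origin of a virtual 3D coordinate system and
    return the corresponding viewpoint position. radius and sensitivity are
    illustrative parameters, not taken from the patent."""
    azimuth = dx * sensitivity                                    # horizontal motion -> azimuth
    elevation = np.clip(dy * sensitivity, -np.pi / 2, np.pi / 2)  # vertical motion -> elevation
    x = radius * np.cos(elevation) * np.cos(azimuth)
    y = radius * np.cos(elevation) * np.sin(azimuth)
    z = radius * np.sin(elevation)
    return np.array([x, y, z])

print(viewpoint_from_mouse(120, -40))
```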
  • the interpolated image creating means receives the two-dimensional image from the server, and performs a predetermined operation on the received two-dimensional image;
  • An image display system comprising: a parallelized image creating unit that automatically creates a parallelized image based on the calculation result; and a texture mapping unit that performs texture mapping on the parallelized image.
  • the invention according to claim 38 wherein the user terminal receives the two-dimensional image from the server and automatically receives the two-dimensional image when automatically creating an interpolated image of the object from the viewpoint desired by the user.
  • An image display method in which a predetermined operation is performed on an image, a parallelized image is automatically created based on a result of the operation, and texture mapping is performed on the parallelized image.
  • An image display system comprising: a parallelized image creating unit; and a texture mapping unit that performs texture mapping on the parallelized image.
  • a predetermined operation is performed on the two-dimensional image, a parallelized image is automatically created based on the calculation result, and
  • this is an image display method that performs texture mapping on the parallelized image.
  • the invention according to claims 7 and 70 is an image display system characterized in that the initial information calculation means performs, as the predetermined operation, calculation of a basic matrix and an epipole for the set of two-dimensional images.
  • the invention according to claims 39 and 99 is an image display method including, as the predetermined operation, an operation of a basic matrix and an operation of an epipole for the set of two-dimensional images.
  • the invention according to claim 8 is characterized in that the texture mapping means interpolates the vertices of the surfaces defined in the server into the automatically created parallelized image, performs linear interpolation, and,
  • within the vertices of the linearly interpolated surface, texture-maps the pixels inside the surface; this is an image display system.
  • the vertices of the surfaces defined in the server are interpolated into the automatically created parallelized image, linear interpolation is performed, and
  • this is an image display method in which the pixels inside the surface are texture-mapped within the vertices of the linearly interpolated surface.
  • the texture mapping means interpolates the vertices of the surfaces defined above into the automatically created parallelized image, performs linear interpolation, and texture-maps the pixels inside the surface within the vertices of the linearly interpolated surface.
  • in the invention according to claim 100, when performing the texture mapping, the vertices of the defined surfaces are likewise interpolated into the automatically created parallelized image, linear interpolation is performed, and the pixels inside the surface are texture-mapped within the vertices of the linearly interpolated surface; this is an image display method.
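The following sketch illustrates the idea of interpolating only the vertices of a defined surface between two parallelized images and texture-mapping the pixels inside that surface. It assumes triangular surfaces and uses a per-triangle affine warp (via OpenCV) as a simplified stand-in for the homography Hs described later; all names are illustrative.

```python
import numpy as np
import cv2  # OpenCV is used only as a convenient warping backend

def interpolate_triangle(img0, tri0, tri1, s, out_shape):
    """Linearly interpolate a surface's (triangle's) vertices between two
    parallelized images and texture-map the pixels inside that surface.
    tri0/tri1 are 3x2 vertex arrays in parallelized images 0 and 1; s in [0, 1]."""
    tri_s = (1.0 - s) * tri0 + s * tri1                       # linear interpolation of the vertices
    M = cv2.getAffineTransform(np.float32(tri0), np.float32(tri_s))
    warped = cv2.warpAffine(img0, M, (out_shape[1], out_shape[0]))
    mask = np.zeros(out_shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(tri_s), 255)            # pixels inside the surface
    out = np.zeros_like(warped)
    out[mask > 0] = warped[mask > 0]                          # texture-map only inside the surface
    return out
```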
  • the invention according to claim 9 is the image display system, wherein the texture mapping means further includes a face selection means for selecting a texture image for all or some of the defined faces.
  • the invention according to claim 41 is an image display method that further performs a process of selecting a texture image for all or a part of the defined surfaces when performing the texture mapping.
  • the invention according to claim 72 is the image display system, wherein the texture mapping means further includes a face selection means for selecting a texture image for all or some of the faces defined above.
  • the invention according to claim 101 is an image display method that further performs a process of selecting a texture image for all or a part of the defined surfaces when performing the texture mapping.
  • the invention of claims 42 and 102 is characterized in that, in the process of selecting the texture image, when only one of the surfaces faces the viewpoint specified by the user,
  • that surface is selected as the texture image; when two or more surfaces face the viewpoint, the area of each facing surface is calculated and the surface having the larger area is selected as the texture image; and/or
  • the brightness of image A is denoted A,
  • the brightness of image B is denoted B, and
  • the ratio of the distance from image B to the specified viewpoint, between image A and image B, is denoted d (0 ≤ d ≤ 1), and the texture is determined by combining the two images using A, B, and d.
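A minimal sketch of the brightness-based combination hinted at above. The claim only states that A, B, and the viewpoint ratio d are used; the particular weighting (d toward image A, 1 − d toward image B) is an assumption.

```python
import numpy as np

def blend_textures(tex_a, tex_b, d):
    """Combine candidate texture images A and B using the viewpoint ratio d
    (0 <= d <= 1). The weighting below (d toward image A, 1 - d toward image B)
    is an assumption; the claim only states that A, B and d are used."""
    d = float(np.clip(d, 0.0, 1.0))
    return d * np.asarray(tex_a, dtype=np.float32) + (1.0 - d) * np.asarray(tex_b, dtype=np.float32)
```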
  • the inventions of Claims 10, 42, 73, and 102 can improve the quality of a texture image, so that a more natural interpolated image can be created.
  • the invention according to claims 11 and 74 is an image display system in which the surface selection means draws images in order from a surface farther from the epipole.
  • the invention according to claims 43 and 103 is an image display method in which, in the process of selecting the texture image, drawing is performed in order from a surface far from an epipole.
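A small sketch of the claimed drawing order, assuming each surface's distance from the epipole is measured at its centroid (the claim does not specify the distance measure).

```python
import numpy as np

def draw_order(faces, vertices_2d, epipole):
    """Return face indices sorted so that faces farther from the epipole are drawn
    first. faces: list of vertex-index tuples; vertices_2d: Nx2 array; epipole: (x, y).
    Measuring the distance at the face centroid is an assumption."""
    e = np.asarray(epipole, dtype=float)
    def distance(face):
        return np.linalg.norm(vertices_2d[list(face)].mean(axis=0) - e)
    return sorted(range(len(faces)), key=lambda i: distance(faces[i]), reverse=True)
```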
  • the invention according to claims 12 and 75 is the image display system, wherein the texture mapping means further includes contour processing means for performing contour processing on the surface.
  • the invention of claims 44 and 104 is an image display method that further performs contour processing on the surface when performing the texture mapping.
  • the contour processing means sets the point obtained by averaging the vertex coordinates of a surface as the center of gravity of the surface; for each ridge line, which is a connection between surface vertices,
  • a ridge offset vector, which is a unit vector perpendicular to the ridge line and directed outward from the center of gravity of the surface, is calculated for each face; the calculated ridge offset vectors of the boundary ridges that include a vertex are averaged to obtain
  • the vertex offset vector, which is the averaged unit vector of those boundary ridge offset vectors; an offset plane consisting of the vertices at both ends of a boundary ridge line and those vertices shifted in the offset vector direction is then created from the created interpolated image.
  • This is an image display system that performs contour processing by displaying the offset plane and then displaying the normal plane.
  • when performing the contour processing, the point obtained by averaging the vertex coordinates of a surface is set as the center of gravity of the surface; for each ridge line, which is a connection between the vertices of the surface,
  • a ridge offset vector, which is a unit vector perpendicular to the ridge line and directed outward from the center of gravity of the surface, is calculated for each face; the calculated ridge offset vectors are averaged to obtain
  • the vertex offset vector, which is the averaged unit vector of the boundary ridge offset vectors that include a vertex; the offset plane consisting of the vertices at both ends of a boundary ridge line and those vertices shifted in the direction of the offset vector is created from the created interpolated image and displayed before the normal plane.
  • the outline of the texture image can thereby be sharpened; for example, the outline of a stuffed toy or other object that is difficult to delimit with straight lines can be made sharper.
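A sketch of the offset-plane construction described above, in 2D image coordinates: outward unit vectors perpendicular to each boundary ridge are averaged at the vertices, and an offset quad is built per boundary ridge to be drawn before the normal face. The offset length and all names are illustrative assumptions.

```python
import numpy as np

def boundary_offset_quads(polygon, offset=2.0):
    """For each boundary ridge (edge) of a 2D face, build an offset quad made of
    the edge's endpoints and the same endpoints shifted along averaged outward
    edge-offset vectors; these quads are drawn before the normal face.
    'offset' (in pixels) is an illustrative parameter."""
    pts = np.asarray(polygon, dtype=float)
    centroid = pts.mean(axis=0)                       # averaged vertex coordinates
    n = len(pts)
    normals = []
    for i in range(n):                                # outward unit normal of each edge
        a, b = pts[i], pts[(i + 1) % n]
        e = b - a
        nrm = np.array([e[1], -e[0]])
        nrm /= np.linalg.norm(nrm) + 1e-12
        if np.dot(nrm, (a + b) / 2 - centroid) < 0:   # make it point away from the centroid
            nrm = -nrm
        normals.append(nrm)
    quads = []
    for i in range(n):
        a, b = pts[i], pts[(i + 1) % n]
        va = normals[i - 1] + normals[i]              # vertex offset vector: averaged edge offsets
        vb = normals[i] + normals[(i + 1) % n]
        va /= np.linalg.norm(va) + 1e-12
        vb /= np.linalg.norm(vb) + 1e-12
        quads.append(np.array([a, b, b + offset * vb, a + offset * va]))
    return quads
```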
  • the invention according to claims 14 and 77 is an image display system in which the texture mapping means further includes a color correction means for performing a color correction process on the surface.
  • the invention according to claims 46 and 106 is an image display method for further performing a color correction process on the surface when performing the above-mentioned texture mapping.
  • the color correction means sets the colors of the texture images 0 and 1 corresponding to a pixel point p to C0(p) and C1(p) and sets the weight of the ridge line to w; this is an image display system that calculates the corrected color C(p) by Equation 1 and corrects the color.
  • when performing the color correction processing, the colors of the texture images 0 and 1 corresponding to a pixel point p are set to C0(p) and
  • C1(p), and the ridge-line weight is set to w;
  • this is an image display method in which the corrected color C(p) is calculated by Equation 1 and the color is corrected.
  • a color difference that may occur when different texture images are selected for adjacent faces can thereby be eliminated.
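A sketch of the color correction step. Equation 1 is not reproduced in this text, so a weighted blend of C0(p) and C1(p) with the ridge-line weight w (which may vary per pixel with distance from the boundary ridge, cf. FIG. 28) is assumed.

```python
import numpy as np

def corrected_color(c0, c1, w):
    """Correct the color at a pixel p from the colors C0(p) and C1(p) of texture
    images 0 and 1 using the ridge-line weight w (a scalar or an array
    broadcastable to the image shape). Equation 1 is not reproduced in this text,
    so a weighted blend is assumed."""
    w = np.clip(np.asarray(w, dtype=float), 0.0, 1.0)
    return w * np.asarray(c0, dtype=float) + (1.0 - w) * np.asarray(c1, dtype=float)
```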
  • the invention according to claims 16 and 79 is characterized in that, when three of the two-dimensional images are set as one set in the image capturing means, the final interpolated image from the viewpoint desired by the user is created as follows:
  • the interpolated image creating means automatically creates a primary interpolated image using a combination of two images in the set of captured two-dimensional images,
  • another primary interpolated image is automatically created using a combination of two images other than the above combination, and the two automatically created primary interpolated images are used
  • to automatically create the final interpolated image from the viewpoint desired by the user; this is an image display system.
  • the invention according to claim 48 is characterized in that, when three of the two-dimensional images are set as one set in the server, the final interpolated image from the viewpoint desired by the user is automatically created as follows: the user
  • terminal automatically creates a primary interpolated image using a combination of two images in the set of captured two-dimensional images, automatically creates another primary interpolated image using a combination of two images other than that combination, and uses the two automatically created primary interpolated
  • images; this is an image display method for automatically creating the final interpolated image from the viewpoint desired by the user.
  • the invention according to claim 108 is characterized in that, when three of the two-dimensional images are set as one set in the server, the final interpolated image from the viewpoint desired by the user is automatically created. A primary interpolated image is automatically created using a combination of two images in the set of captured two-dimensional images, and a combination of two images other than that combination is used
  • to automatically create another primary interpolated image; this is an image display method that automatically creates the final interpolated image from the viewpoint desired by the user using the two automatically created primary interpolated images.
  • the invention according to claims 17 and 80 is characterized in that, when three of the two-dimensional images are set as one set in the image capturing means, an interpolated image from the viewpoint desired by the user is created as follows:
  • the interpolated image creating means calculates a rotation matrix using the three captured two-dimensional images, performs a parallelization rotation transformation using the rotation matrix,
  • performs a horizontal transformation based on the parallelization rotation transformation,
  • performs a scale transformation based on the horizontal transformation, and
  • automatically creates the interpolated image by interpolating a homography matrix based on the parallelization rotation, horizontal, and scale transformations; this is an image display system.
  • the invention according to claims 49 and 109 is characterized in that, when three of the two-dimensional images are set as one set in the server, an interpolated image from the viewpoint desired by the user is obtained as follows:
  • the user terminal calculates a rotation matrix using the three captured two-dimensional images, performs a parallelization rotation transformation using the rotation matrix, performs a horizontal rotation transformation based on the parallelization rotation transformation, performs a scale transformation based on the horizontal rotation transformation, and, based on the parallelization rotation, horizontal
  • rotation, and scale transformations, automatically creates the interpolated image by interpolating a homography matrix; this is an image display method.
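The sketch below illustrates only the final step named above: interpolating a homography between two rectifying transformations for a parameter s. Elementwise linear interpolation is a simplification; the claims build the interpolated homography from the parallelization rotation, horizontal, and scale transformations.

```python
import numpy as np

def interpolated_homography(H0, H1, s):
    """Interpolate a homography between the two rectifying homographies H0 and H1
    for a parameter s in [0, 1]. Elementwise linear interpolation is a
    simplification of the construction described in the claims."""
    Hs = (1.0 - s) * np.asarray(H0, dtype=float) + s * np.asarray(H1, dtype=float)
    return Hs / Hs[2, 2]                 # normalise so the bottom-right entry is 1

def apply_homography(Hs, points):
    """Apply Hs to Nx2 image points, with homogeneous normalisation."""
    pts = np.asarray(points, dtype=float)
    pts = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = (np.asarray(Hs) @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]
```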
  • the invention according to claim 18 is the image display system, wherein the user terminal further includes a shadow creating unit that creates a shadow on the created interpolation image.
  • the invention according to claim 50 is the image display method, wherein the user terminal performs a process of creating a shadow on the created interpolation image after creating the interpolation image.
  • the invention according to claim 81 is the image display system, wherein the server further includes a shadow creating unit that creates a shadow on the created interpolated image.
  • the invention according to claim 110 is an image display method in which the server, after creating the interpolated image, performs a process of creating a shadow on the created interpolated image.
  • the invention according to claims 19 and 82 is an image display system in which the shadow creating means creates a shadow on an object using an affine transformation matrix.
  • the invention of claims 51 and 111 is an image display method in which the processing of creating a shadow creates a shadow on an object using an affine transformation matrix.
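A sketch of a shadow created with an affine transformation matrix, assuming a directional light and a ground plane z = 0 (neither is specified in the claim); the matrix shears x and y by the light direction and flattens z.

```python
import numpy as np

def shadow_matrix(light_dir):
    """4x4 affine matrix that projects points onto the ground plane z = 0 along a
    directional light (lx, ly, lz). The plane and the light model are assumptions."""
    lx, ly, lz = light_dir
    M = np.eye(4)
    M[0, 2] = -lx / lz        # shear x by the light direction
    M[1, 2] = -ly / lz        # shear y by the light direction
    M[2, 2] = 0.0             # flatten z onto the plane
    return M

def project_shadow(points, light_dir):
    """Apply the shadow matrix to Nx3 object points."""
    pts = np.hstack([np.asarray(points, dtype=float), np.ones((len(points), 1))])
    return (shadow_matrix(light_dir) @ pts.T).T[:, :3]
```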
  • the invention according to claim 20 is the image display system, wherein the user terminal further includes reflection creating means for creating a reflection on the created interpolation image.
  • the invention of claim 52 is the image display method, wherein the user terminal further performs a process of creating a reflection on the created interpolated image after creating the interpolated image.
  • an image display system is provided in which the server further comprises reflection creating means for creating a reflection on the created interpolated image.
  • the invention according to claim 112 is the image display method, wherein the server further performs a process of creating a reflection on the created interpolation image after creating the interpolation image.
  • according to the inventions of claims 20 and 52, it is possible to add a reflection to the created interpolated image, and a more realistic interpolated image can be created on the user terminal.
  • according to the inventions of claims 83 and 112, it is possible to add a reflection to the created interpolated image, and a more realistic interpolated image can be created on the same server that acquired the two-dimensional images.
  • the invention of claims 21 and 84 is an image display system in which the reflection creating means creates a reflection for an object using an affine transformation matrix.
  • the invention according to claims 53 and 113 is an image display method in which the processing of creating the reflection creates a reflection on an object using an affine transformation matrix.
  • the invention according to Claim 22 is characterized in that the user terminal restores three-dimensional information from at least one or more interpolated images, shadows, reflections, and/or two-dimensional images, so that at least one or more objects are restored;
  • it is an image display system further comprising composite display means for combining the objects into one or more images from the same viewpoint.
  • the invention according to claim 54, wherein the user terminal restores the three-dimensional information from at least one or more interpolated images, shadows, reflections, and/or two-dimensional images, so that at least one or more objects are restored.
  • This is an image display method that further performs a process of synthesizing one or more images from the same viewpoint.
  • the invention according to claim 85, wherein the server restores three-dimensional information from at least one or more interpolated images, shadows, reflections, and/or two-dimensional images, so that at least one or more objects are restored.
  • An image display system further comprising a combining display means for combining one or more images from the same viewpoint.
  • This is an image display method that further performs a process of combining an object with one or more images from the same viewpoint.
  • the invention of Claims 23 and 86 is an image display system that uses projection restoration or Euclidean restoration as the restoration.
  • the invention of claims 55 and 115 is an image display method using projection restoration or Euclidean restoration as the restoration.
  • the server further includes an automatic continuation unit that sets a route of a viewpoint and a moving speed when performing the automatic continuous display of the interpolation image.
  • the invention according to claims 56 and 116 is an image display method in which the server further performs an automatic continuation process for setting a viewpoint path and a moving speed when performing the automatic continuous display of the interpolated image.
  • a virtual three-dimensional space coordinate system is assumed on the server, and
  • this is an image display method in which a route is set by associating the polar coordinates of a sphere centered on the origin of the virtual three-dimensional space coordinate system with the movement amount of an input device on the server.
  • the viewpoint path can thus be easily set on the server using an input device such as a mouse.
  • the invention according to claim 26 is the image display system, wherein the user terminal further comprises automatic continuation means for setting a viewpoint path and a moving speed when performing the automatic continuous display of the interpolated image.
  • the invention of claim 59 is characterized in that, when performing the automatic continuation process, a virtual three-dimensional space coordinate system is assumed on the user terminal, and the polar coordinates of a sphere centered on the origin of the virtual three-dimensional space coordinate system
  • are associated with the movement amount of the input device on the user terminal; this is an image display method for setting a route in this way.
  • the automatic continuation means is an image display system for setting the moving speed on the previously set route by associating the movement amount of the input device with time.
  • the moving speed on the set route is set by associating the movement amount of the input device with time; this is an image display method.
  • according to claims 28, 60, 89, and 118, the moving speed of the viewpoint can be set simply by moving an input device such as a mouse, without directly inputting the moving speed of the viewpoint.
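A sketch combining the route and speed settings above: input-device samples are mapped to polar coordinates on a sphere to form the route, and the movement amount per unit time is used as the moving speed. All parameter names are illustrative.

```python
import numpy as np

def build_route(mouse_samples, sensitivity=0.01):
    """Turn (dx, dy, dt) input-device samples into a viewpoint route (azimuth and
    elevation waypoints on a sphere) and a per-segment moving speed derived from
    the movement amount per unit time. All parameter names are illustrative."""
    az, el = 0.0, 0.0
    route, speeds = [(az, el)], []
    for dx, dy, dt in mouse_samples:
        az += dx * sensitivity
        el = float(np.clip(el + dy * sensitivity, -np.pi / 2, np.pi / 2))
        route.append((az, el))
        speeds.append(np.hypot(dx, dy) / max(dt, 1e-6))   # movement amount per time
    return route, speeds

route, speeds = build_route([(30, 0, 0.1), (30, 10, 0.1), (0, 20, 0.2)])
```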
  • the invention according to claims 29 and 90 is an image display system in which the automatic continuation means sets information indicating a range of an object in the two-dimensional image.
  • the invention according to Claim 61 and Claim 119 is an image display method for setting information indicating the range of the object in the two-dimensional image when performing the automatic continuation process.
  • the position and size of the object can thereby be corrected.
  • the invention according to claim 30 is characterized in that the user terminal automatically creates and displays interpolated images continuously based on the route and moving speed set on the server or the user terminal;
  • this is an image display system further including an automatic continuous creation unit for this purpose.
  • the invention according to claim 62, wherein the user terminal automatically creates and displays interpolated images continuously based on the route and moving speed set on the server or on the user terminal;
  • this is an image display method that further performs such an automatic continuous reproduction process.
  • the invention according to claim 91 is the image display system further comprising automatic continuous creation means for automatically creating and displaying interpolated images continuously based on the set route and moving speed.
  • the invention according to claim 120 is an image display method in which the server further performs an automatic continuous reproduction process for automatically creating and displaying interpolated images continuously based on the set route and moving speed.
  • automatic continuous display can thus be performed on the user terminal based on the automatic continuous display settings made on the server or on the user terminal.
  • automatic continuous display can be performed on the same server as the server on which automatic continuous display has been set.
  • the image display system further includes a correction processing unit that performs a correction process of a position and a size with respect to the target object.
  • the invention according to claim 63, wherein the user terminal calculates an enlargement/reduction magnification and a translation amount based on the information indicating the range of the object, and corrects the position and size of the object;
  • this is an image display method for performing such correction.
  • the invention according to claim 92, wherein the server calculates an enlargement/reduction magnification and a translation amount based on the information indicating the range of the target object, and corrects the position and size of the target object.
  • the invention according to claim 122, wherein the user terminal calculates an enlargement/reduction magnification and a translation amount based on the information indicating the range of the object, and corrects the position and size of the object;
  • this is an image display method that performs such processing.
  • the correction processing means is an image display system that calculates the range of the interpolated image of the object by Expression 2 and corrects the interpolated image within that range.
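Expression 2 is not reproduced in this text; the sketch below assumes a straightforward fit of the object's bounding box in the interpolated image onto a target frame, yielding the enlargement/reduction magnification and translation amount mentioned in the claims.

```python
import numpy as np

def correction_transform(obj_box, target_box):
    """Compute the enlargement/reduction magnification and translation that map the
    object's range (x_min, y_min, x_max, y_max) in the interpolated image onto the
    target frame, so that p_corrected = scale * p + translation."""
    ox0, oy0, ox1, oy1 = obj_box
    tx0, ty0, tx1, ty1 = target_box
    scale = min((tx1 - tx0) / (ox1 - ox0), (ty1 - ty0) / (oy1 - oy0))  # uniform magnification
    obj_center = np.array([(ox0 + ox1) / 2.0, (oy0 + oy1) / 2.0])
    target_center = np.array([(tx0 + tx1) / 2.0, (ty0 + ty1) / 2.0])
    translation = target_center - scale * obj_center
    return scale, translation
```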
  • FIG. 1 is a system configuration diagram showing an example of the system configuration of the present invention.
  • FIG. 2 is a diagram showing an example of a system configuration of an interpolated image creating means of the present invention.
  • FIG. 3 is a system configuration diagram showing an example of a system configuration provided with a means for performing automatic continuous display.
  • FIG. 4 is a system configuration diagram showing another example of the system configuration provided with a means for performing automatic continuous display.
  • FIG. 5 is a first page of a flowchart showing an example of the process of the present invention.
  • FIG. 6 is the second page of the flowchart showing an example of the process of the present invention.
  • FIG. 7 is a flowchart showing an example of the interpolated image creation process.
  • FIG. 8 is a flowchart showing an example of a process of capturing a basic image.
  • FIG. 9 is a flowchart showing an example of a texture mapping process.
  • FIG. 10 is a flowchart showing an example of a surface selection process.
  • FIG. 11 is a flowchart showing an example of a contour processing process.
  • FIG. 12 is a flowchart showing an example of the process of the color correction process.
  • FIG. 13 is a flowchart illustrating an example of a process of a position and size correction process.
  • FIG. 14 is a flowchart illustrating an example of the automatic continuation process.
  • FIG. 15 is a first page of a flowchart showing an example of the process of automatic continuous display.
  • FIG. 16 is the second page of the flowchart showing an example of the process of automatic continuous display.
  • FIG. 17 is a diagram showing an example of a viewpoint definition screen.
  • FIG. 18 is a conceptual diagram of viewpoint movement.
  • FIG. 19 is a diagram showing another example of the viewpoint definition screen.
  • FIG. 20 is a diagram showing an example of the plane definition screen.
  • FIG. 21 is a conceptual diagram showing the positional relationship of each coordinate system among three viewpoints.
  • FIG. 22 is a conceptual diagram of three-viewpoint interpolation.
  • FIG. 23 is a diagram showing a rotation matrix.
  • Figure 24 is a conceptual diagram of image parallelization.
  • FIG. 25 is a conceptual diagram of plane selection.
  • FIG. 26 is an example of an image with a mask.
  • Fig. 27 is a conceptual diagram of the offset plane.
  • FIG. 28 is a diagram showing the weight of the edge line.
  • FIG. 29 is an example of an image when the composite display is performed.
  • FIG. 30 is a diagram showing a transformation matrix, a rotation matrix, and a shear deformation matrix.
  • FIG. 31 is a diagram showing an affine transformation matrix.
  • FIG. 32 is a conceptual diagram showing a process of calculating a shadow.
  • FIG. 33 is a diagram showing points of reflection.
  • FIG. 34 is a diagram showing an example of a basic image.
  • FIG. 35 is a diagram showing an example of an interpolated image.
  • FIG. 36 is a conceptual diagram showing the positional relationship of each coordinate system between two viewpoints.
  • FIG. 37 shows an example of a basic image in which target frame information is set.
  • FIG. 38 is a conceptual diagram of automatic continuous display.
  • FIG. 39 is an image diagram of the correction processing.
  • FIG. 40 is a conceptual diagram showing the positional relationship of each coordinate system among three viewpoints.
  • FIG. 41 is a diagram showing a rotation matrix.
  • FIG. 42 is a diagram showing intermediate formulas in the parallelization rotation conversion and the horizontal rotation conversion.
  • FIG. 43 is a diagram showing an intermediate expression in the scale conversion.
  • FIG. 44 is a diagram in which edge end points are automatically extracted.
  • FIG. 45 is a diagram when the Delaunay triangle specified point is specified.
  • FIG. 46 is a diagram of a basic image in which a Delaunay triangle is generated.
  • FIG. 47 is a diagram showing simultaneous linear equations for solving scalar parameters.
  • Figure 48 is a conceptual diagram of the epipole constraint line and the search window.
  • FIG. 49 is a diagram showing an algorithm for obtaining a luminance difference.
  • FIG. 50 is a diagram in a case where matching of corresponding points is automatically performed.
  • FIG. 51 is a diagram showing a linear equation for three images.
  • FIG. 52 is a diagram showing an example of a case where the surfaces overlap.
  • FIG. 53 is a conceptual diagram showing the drawing order of surfaces.
  • the image display system 1 comprises a user terminal 15, possessed by a user who wishes to view an interpolated image of an object from an arbitrary viewpoint, and a server 14, which can transmit and receive data via a network 16.
  • the network 16 may be any of an open network such as the Internet, a closed network such as a LAN (Local Area Network), and an intranet which is a combination thereof.
  • the server 14 has image capturing means 2, and the user terminal 15 has interpolated image creating means 3, shadow creating means 4, reflection creating means 5, created image storage means 6, composite display means 7, and correction processing means 8.
  • the image capturing means 2 is a means for capturing a two-dimensional image (basic image) obtained by capturing an object from a plurality of viewpoints, and is a means for creating information for image display. It is also a means for transmitting the fetched information to the user terminal 15 via the network 16.
  • the interpolated image creation means 3 is a means for creating an interpolated image based on the basic images and the set viewpoint, and includes initial information calculation means 9, parallelized image creation means 10, and texture mapping means 19.
  • the initial information calculation means 9 is a means for calculating a basic matrix (fundamental matrix) and an epipole for each set of basic images required to create an interpolated image.
  • the parallelized image creating means 10 is a means that calculates a homography matrix using the epipoles calculated by the initial information calculating means 9 and maps the images between the two viewpoints onto parallel planes such that common points are aligned on the same scan line.
  • the texture mapping means 19 inputs the vertices of the surfaces defined in the server 14 into the parallelized image created by the parallelized image creation means 10, performs linear interpolation, and
  • texture-maps the pixels inside the surface within the vertices of the linearly interpolated surface; it includes surface selection means 11, contour processing means 12, and color correction means 13.
  • the surface selection means 11 is a means that, rather than selecting one of the two images to be used as a texture and using it as it is, combines the two images to create a more natural image and uses it as the texture.
  • the contour processing means 12 is means for reflecting an image of the details of the contour in the interpolated image when defining a surface for the image of the object. By this means, the object is displayed as a clearer image.
  • the color correction means 13 is a means for correcting the color of the image applied to the interpolated image as a texture. By this means, the color difference between the textures of the object is eliminated.
  • the shadow creating means 4 is a means for adding a shadow to the object. By this means, the reality of the image is improved.
  • the reflection creating means 5 is a means for creating a state in which an object is reflected and reflected on another object. By this means, the reality of the image is improved.
  • the created image storage means 6 is a means for temporarily storing information created and calculated by the interpolated image creating means 3, the shadow creating means 4, and the reflection creating means 5.
  • the composite display means 7 is a means for combining the interpolated image, the shadow, the reflection, and the like stored in the created image storage means 6 into one image.
  • the correction processing means 8 is a means for performing linear interpolation of the position and size of the object for each viewpoint. This is because the distance from the camera to the object and the position of the object in the image may differ for each viewpoint in the basic images; if interpolated images are created continuously from such images, apparent movement of the object other than the viewpoint movement occurs, the object appears to be enlarged or reduced in size, or the object moves up and down without rotating around a fixed point. The correction processing means 8 is a means for solving this problem.
  • An example of the process of the present invention will be described in detail with reference to the flowcharts shown in FIGS. 5 to 13. In this embodiment, the process of creating an interpolated image (Is) at a viewpoint desired by the user from two-dimensional images (basic images) (I0, I1) of the object from two viewpoints will be described.
  • the person who creates the image performs predetermined procedures on the server 14 owned by the image creator (for example, starting up the computer, the software, and peripheral devices), and, using the image capturing means 2,
  • captures basic images of the object from multiple viewpoints from the camera (S100). Assuming that the object is located in a virtual three-dimensional space, the position of the viewpoint at which each basic image was captured is set. The process of capturing the basic images will be described later.
  • the common point coordinates are defined as eight or more points (S160).
  • An interpolation image is created for the selected coordinates in a subsequent process.
  • the common point coordinates must be the corresponding points so that even if three arbitrary points are taken, they are not aligned on a straight line. It is preferable that the common point coordinates are defined by an input device (not shown) such as a mouse, but other means may be used.
  • FIG. 20 shows an example of a plane definition screen for defining a plane.
  • the common point coordinates defined in S160 and the plane definition coordinates which are points defining the plane may be the same or different.
  • instead of selecting points by hand as described above, a point may be selected in one basic image and the corresponding
  • point in the other basic image may be selected automatically. This process will be described later.
  • the information in S100 to S170 is transmitted from the image capturing means 2 of the server 14 to the interpolated image creating means 3 of the user terminal 15.
  • the user specifies a viewpoint to be viewed by using a mouse or the like (S176).
  • the interpolated image creating means 3 creates an image from the viewpoint specified by the user.
  • an interpolated image is created based on the common point information defined by the image capturing means 2 and the plane definition coordinates (S180).
  • the initial information (basic matrix and epipoles) required to create the interpolated image is calculated by the initial information calculation means 9. The process is described below.
  • Figure 36 shows the positional relationship of each coordinate system between the two viewpoints. Let C0 and C1 be the two viewpoints, I0 and I1 the basic images from the two viewpoints, respectively, and F0 the 3 × 3 basic matrix between basic images I0 and I1.
  • the basic matrix is calculated from eight or more corresponding points using a well-known method (for example, R. Hartley's eight-point algorithm, "In Defense of the Eight-Point Algorithm") (S190).
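A sketch of this initial information calculation: the basic (fundamental) matrix is estimated with an 8-point implementation (OpenCV is used here as a stand-in), and the epipole is taken as the eigenvector of F^T F for the minimum eigenvalue, as stated later in this description.

```python
import numpy as np
import cv2  # OpenCV's 8-point implementation is used as a stand-in

def basic_matrix_and_epipole(pts0, pts1):
    """Estimate the basic (fundamental) matrix from eight or more corresponding
    points with the 8-point algorithm, then take the epipole of image 0 as the
    eigenvector of F^T F for the minimum eigenvalue (the right null vector of F)."""
    F, _ = cv2.findFundamentalMat(np.float32(pts0), np.float32(pts1), cv2.FM_8POINT)
    eigvals, eigvecs = np.linalg.eigh(F.T @ F)   # eigh returns eigenvalues in ascending order
    e0 = eigvecs[:, 0]                           # eigenvector for the minimum eigenvalue
    return F, e0 / e0[2]                         # normalise to (x, y, 1)
```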
  • After calculating the basic matrix and the epipoles for the basic images I0 and I1, if the user requests viewing of the object from a specified viewpoint S from the user terminal 15, a parallelized image (described later) is created based on the information acquired by the image capturing means 2 and the information calculated by the initial information calculation means 9, in order to create an interpolated image between the two viewpoints (S210).
  • in a conventional interpolated image creating method (for example, the example shown in Japanese Patent Application No. 2000-23430),
  • an interpolated image is created through the following process.
  • here, a surface is defined (surface definition coordinates are defined), only the vertices of the surface are interpolated on the parallelized image, and then the image inside the surface is texture-mapped.
  • that is, the homography matrix Hs (described later) is applied by interpolating the vertices of the surface instead of the end points of the feature lines in (4), and texture mapping of the basic image is performed to obtain an interpolated image.
  • the two basic images must be imaged so that their common points (surface-defined coordinates defined in S170) are aligned on the same scan line.
  • This process is generally called parallelization, and the created image is called a parallelized image.
  • in order to perform this parallelization, the parallelization of the two basic images is performed by the parallelized image creating means 10 of the interpolated image creating means 3 using the epipoles calculated in S200 (that is, a parallelized image is created) (S210). The process of parallelizing two images will be described later.
  • the interpolated image creating means 3 uses the two basic images (I0 and I1) to create the interpolated image;
  • the created interpolated image is transmitted to the created image storage means 6 as the final interpolated image (Is) and is temporarily stored.
  • An example of an interpolated image created from the two basic images shown in Figs. 34 (a) and (b) is shown in Fig. 35.
  • a shadow for the object is created by the shadow creating means 4 (S240), and the created shadow is transmitted to the created image storage means 6 and temporarily stored.
  • the process of creating a shadow will be described later.
  • the reflection for the object is created by the reflection creating means 5 (S250), and the created reflection is transmitted to the created image storage means 6 and temporarily stored. The process of creating reflection will be described later.
  • the process of creating shadows and reflections in S240 and S250 is in no particular order.
  • the final interpolated image, shadow, and reflection created in S180 to S250 and temporarily stored in the created image storage means 6 are transmitted to the composite display means 7 and combined into one image (S260).
  • the synthesis method in S260 is realized by "restoring" the three-dimensional information from a plurality of images of the object, thereby synthesizing and displaying a plurality of objects from the same viewpoint. This is achieved by using the well-known technique of "projection restoration" or "Euclidean restoration" as the method of "restoration". By adding this synthesis process, a plurality of objects can be combined and displayed, for example as shown in Fig. 29.
  • correction processing regarding the position and size is then executed (S270). This is because the distance between the camera and the target object and the position of the target object in the basic image differ from viewpoint to viewpoint in the basic images captured in S100. If interpolated images are created continuously from such differing images, displacement of the object other than the movement of the viewpoint occurs, the object is apparently enlarged or reduced, or the object moves up and down without rotating around a fixed point; the correction processing solves this problem.
  • the position and size correction process will be described later.
  • the created interpolated image is displayed.
  • the shadow, reflection, composite display, and position and size correction in S240 to S270 may be performed entirely, partially, or not at all.
  • Another embodiment of the present invention will be described in detail with reference to the system configuration diagrams of FIG. 1 and FIG. 2. In this embodiment, the process of creating an interpolated image (Is) at the viewpoint desired by the user from two-dimensional images (basic images) (I0, I1, I2) of the object from three viewpoints will be described. The description of the same parts as those of the above embodiment will be omitted.
  • the image creator captures at least three or more basic images of the object from the image capturing means 2 on the server 14 (S100).
  • three basic images are considered as one set (in the present embodiment, one set of three images is taken as an example, but even if there are four or more images, the same can be done by forming a plurality of sets of three images).
  • Eight or more coordinates of positions common to the basic images in the set, that is, common point coordinates are defined (S160). However, the common point coordinates must be such that even if three points are arbitrarily taken, they will not be aligned.
  • it is preferable that the common point coordinates are defined by an input device (not shown)
  • such as a mouse.
  • three basic images are considered as one set.
  • conventionally, the interpolated image was created using two basic images as one set; however, by considering three or more basic images as one set,
  • the problem that the viewpoint for which an interpolated image can be created is conventionally limited to one dimension can be solved.
  • instead of selecting the points by hand as described above, a point may be selected in one of the basic images and the corresponding point in the other basic image may be selected automatically. This process will be described later.
  • the information in S100 to S170 is transmitted from the image capturing means 2 to the interpolated image creating means 3.
  • the viewpoint that the user wishes to browse is specified using a mouse or the like (S176).
  • an interpolated image is created based on the common point information defined by the image capturing means 2 and the plane definition coordinates (S180).
  • the initial information (basic matrices and epipoles) required to create the interpolated image is calculated by the initial information calculation means 9. The process is described below.
  • Fig. 21 shows the positional relationship of each coordinate system among the three viewpoints.
  • Let C0, C1, and C2 be the three viewpoints, and I0, I1, and I2 the basic images from the three viewpoints, respectively.
  • Basic matrices are defined between basic images I0 and I1, between basic images I1 and I2, and between basic images I2 and I0.
  • the epipole of each basic image is calculated by a known method (S200).
  • for each pair of basic images, the epipole of each image in the pair is obtained; for example, the epipole e00 of basic image I0 with respect to basic image I1 is the eigenvector corresponding to the minimum eigenvalue of F0^T F0, and the other epipoles are calculated in the same way.
  • In order to create an interpolated image from two basic images, the two basic images must first be imaged so that their common points (the surface definition coordinates defined in S170) are aligned on the same scan line. This process is generally called parallelization, and the created image is called a parallelized image. In order to perform this parallelization, the parallelization of the two basic images is performed by the parallelized image creating means 10 of the interpolated image creating means 3 using the epipoles calculated in S200 (that is, a parallelized image is created) (S210). The process of parallelizing two images will be described later.
  • the texture mapping means 19 interpolates only the surface definition coordinates, which are the vertices of the defined surfaces (S220), and an interpolated image is created by applying the homography matrix Hs and texture-mapping the basic image (S230). The texture mapping process will be described later.
  • two basic images (I0, I1) are used to create a primary interpolated image (Is0).
  • similarly, another primary interpolated image (Is2) is created from the basic images (I0, I2). That is, S180 is performed twice.
  • then, by the same process as S180, the final interpolated image
  • (Is) is created from the two primary interpolated images (that is, the interpolated image of S180 at this stage is created based on the two primary interpolated images created earlier).
  • the created final interpolated image (Is) is transmitted to the created image storage means 6 and temporarily stored. That is, S180 is performed three times in total.
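A minimal sketch of this two-stage scheme: two primary interpolated images are created from two pairs of basic images, and the same pairwise process (S180) is applied once more to obtain the final interpolated image. interpolate_pair stands in for the S180 process; s and t are illustrative parameters.

```python
def interpolate_three_view(basic_images, interpolate_pair, s, t):
    """Two-stage scheme of this embodiment: two primary interpolated images are
    created from two pairs of basic images in the set, then the same pairwise
    process (standing in for S180) is run once more on the two primary images to
    obtain the final interpolated image. s and t are illustrative parameters."""
    i0, i1, i2 = basic_images
    is0 = interpolate_pair(i0, i1, s)      # primary interpolated image (Is0)
    is2 = interpolate_pair(i0, i2, s)      # another primary interpolated image (Is2)
    return interpolate_pair(is0, is2, t)   # final interpolated image (Is)
```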
  • a shadow for the object is created by the shadow creation means 4 (S240) in order to give the created final interpolated image a sense of reality, and the created shadow is transmitted to the created image storage means 6 and temporarily stored.
  • the process of creating a shadow will be described later.
  • the reflection for the object is created by the reflection creating means 5 (S250), and the created reflection is transmitted to the created image storage means 6 and temporarily stored.
  • the process of creating reflection will be described later.
  • the process of creating shadows and reflections created in S240 and S250 is in no particular order.
  • the final interpolated image, shadow, and reflection created in S180 to S250 and temporarily stored in the created image storage means 6 are transmitted to the composite display means 7 and combined into one image (S260).
  • the synthesis method in S260 can be realized by “restoring” the three-dimensional information from a plurality of images of the object, thereby synthesizing and displaying a plurality of objects from the same viewpoint. This is realized by using the known technique of “projection restoration” or “Euclidean restoration” as the method of “restoration”.
  • correction processing regarding the position and size is executed (S270).
  • this is because, in the basic images captured in S100, the distance from the camera to the object
  • and the position of the object in the basic image differ from viewpoint to viewpoint. If interpolated images are created continuously from such differing images, displacement of the object other than the movement of the viewpoint will occur, the object will appear enlarged or reduced, or the object will move up and down without rotating around a fixed point; the correction processing solves this problem.
  • the position and size correction process will be described later.
  • an interpolated image (Is) at a viewpoint desired by the user is created from two-dimensional images (basic images) (I0, I1, I2) from three viewpoints of the object.
  • in Embodiment 2, the case was explained in which three or more basic images are taken as one set, a primary interpolated image is created from two basic images in the set, and a final interpolated image is created from a plurality of primary interpolated images, that is, a two-step creation process.
  • the image creator takes in three basic images of the object from the image taking means 2 on the server 14 (S100).
  • three basic images are set as one set, and coordinates of positions common to the basic images in the set, that is, eight or more common point coordinates are defined (S160) .
  • the common point coordinates must be such that even if three points are arbitrarily selected, they will not be aligned.
  • Fig. 20 shows an example of the surface definition screen for defining a surface.
  • the common point coordinates defined in S160 may be the same as or different from the surface definition coordinates that define the surface.
  • the information in S100 to S170 is transmitted from the image capturing means 2 to the interpolated image creating means 3.
  • the viewpoint that the user desires to browse is specified using a mouse or the like (S176).
  • an interpolated image is created based on the common point information defined by the image capturing means 2 and the plane definition coordinates (S180).
  • the initial information (basic matrices and epipoles) required to create an interpolated image is calculated by the initial information calculating means 9. The process is described below.
  • FIG. 40 shows the positional relationship of each coordinate system among the three viewpoints. Let C0, C1, and C2 be the three viewpoints, and I0, I1, and I2 the basic images from the three viewpoints, respectively.
  • Let the 3 × 3 basic matrices between the pairs of basic images be F0, F1, and F2, respectively.
  • the basic matrix can be calculated from eight or more corresponding points of each basic image set by using a well-known method such as an eight-point algorithm (S190).
  • the epipole of each basic image is calculated by a known method based on the calculated basic matrix (S200).
  • For the pair of basic images I0 and I1, the epipole in I0 is denoted e00 and the epipole in I1 is denoted e10; for the pair I1 and I2, the epipoles are e11 and e21; and for the pair I2 and I0, the epipoles are e22 and e02.
  • The epipole e00 is the eigenvector corresponding to the minimum eigenvalue of F0^T F0, that is, the null vector of F0. The other epipoles can be calculated in the same way.
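  • As a rough illustration of S190 and S200, the following Python/NumPy sketch estimates a fundamental matrix from eight or more common points with the eight-point algorithm and extracts the epipoles as the null vectors of F and its transpose (equivalently, the eigenvectors of F^T F and F F^T with minimum eigenvalue). The function names and the omission of Hartley normalization are illustrative choices, not part of the patent text.

    import numpy as np

    def fundamental_matrix_8point(x0, x1):
        # x0, x1: (N, 2) arrays (N >= 8) of common-point coordinates in two
        # basic images.  Returns a rank-2 3x3 matrix F with x1h^T F x0h ~ 0.
        x0h = np.column_stack([x0, np.ones(len(x0))])
        x1h = np.column_stack([x1, np.ones(len(x1))])
        # Each correspondence gives one linear constraint on the 9 entries of F.
        A = np.stack([np.outer(p1, p0).ravel() for p0, p1 in zip(x0h, x1h)])
        _, _, Vt = np.linalg.svd(A)
        F = Vt[-1].reshape(3, 3)
        U, S, Vt = np.linalg.svd(F)   # enforce the rank-2 constraint
        S[2] = 0.0
        return U @ np.diag(S) @ Vt

    def epipoles(F):
        # Epipole in image 0: null vector of F; epipole in image 1: null
        # vector of F^T (assumes the epipoles are not at infinity).
        e0 = np.linalg.svd(F)[2][-1]
        e1 = np.linalg.svd(F.T)[2][-1]
        return e0 / e0[2], e1 / e1[2]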
  • After the fundamental matrices and epipoles for the basic images I0, I1, and I2 have been calculated, the interpolated image Is is created by the interpolated image creating means 3 using the epipoles calculated in S200.
  • the parallelized image creating means 10 parallelizes the three basic images to create a parallelized image (S210). The process of parallelizing the three basic images will be described later.
  • After the parallelized images of the three basic images are created in S210, only the surface definition coordinates, which are the vertices of the defined surfaces, are interpolated (S220), and an interpolated image is created by the texture mapping means 19 applying the interpolation homography matrix Hs to the basic images (S230). The interpolated image (Is) created by this texture mapping process is sent by the interpolated image creating means 3 to the created image storage means 6, described later, where it is stored temporarily.
  • In this way, a final interpolated image (Is) is created from three basic images, as in the second embodiment. In the second embodiment, however, the final interpolated image (Is) is obtained by performing S180 three times, each time on a set of two images; in this embodiment the final interpolated image (Is) is obtained from the three basic images at once, so it needs to be created only once.
  • Then, in order to give reality to the created interpolated image, shadows and reflections are created by the shadow creating means 4 and the reflection creating means 5 (S240, S250), transmitted to the created image storage means 6, and temporarily stored.
  • the details of these processes are the same as those in the above-described embodiment, and thus the details are omitted.
  • The interpolated image, shadow, and reflection temporarily stored in the created image storage means 6 are synthesized by the composite display means 7 (S260), and correction processing of the position and size is performed (S270). Since these processes are the same as those already described, their details are omitted. After the correction processing in S270 is completed, the image is displayed. As a matter of course, the shadow, reflection, composite display, and correction processing may each be performed in whole, in part, or not at all. Next, as another embodiment of the present invention, automatic continuous display will be described, in which the viewpoint is moved along a predetermined path at a predetermined moving speed, interpolated images are created automatically, and the images are displayed continuously. Descriptions of the parts that are the same as in the first, second, and third embodiments are omitted.
  • An example will be described in which the automatic continuation means 17, which sets the path and the moving speed of the viewpoint, is provided on the server 14, and the automatic continuation creating means 18, which performs the automatic continuous display based on the set path and moving speed, is provided on the user terminal 15.
  • FIG. 3 shows a system configuration diagram as an example of the system configuration of the image display system 1 at that time.
  • First, an embodiment will be described in which the path and the moving speed are specified in advance on the server 14, the viewpoint is moved based on them, interpolated images are created automatically, and the interpolated images are displayed continuously on the user terminal 15.
  • An example of the process flow is shown in the flowcharts of Figs.
  • a plurality of basic images to be automatically and continuously displayed are captured in the image capturing means 2 (S100).
  • the process of capturing the basic image will be described later.
  • Two or more of the basic images captured in S100 are treated as a set, and the coordinates of positions common to the basic images in the set, that is, eight or more common point coordinates, are defined (S160).
  • The set may be a set of two as in the first embodiment, or a set of three as in the second or third embodiment (a plurality of sets of three may also be provided).
  • the corresponding points are selected from the set of basic images, and the points are connected by straight lines to define a plane (S170) (that is, the plane definition coordinates are defined). This is performed for the entire object.
  • the common point coordinates defined in S160 and the plane definition coordinates which define the plane may be the same or different points.
  • The distance from the camera (viewpoint) to the object and the position of the object in the basic image often differ from viewpoint to viewpoint. If automatic continuous display is performed while these differ, displacements of the object other than the viewpoint movement occur: the object appears to be enlarged or reduced, or moves up and down instead of rotating around a fixed point.
  • To correct this, information indicating the range of the object (the top, left, bottom, and right position coordinates in the basic image, hereinafter referred to as target frame information) is set.
  • the inner square indicates the target frame
  • the outer square indicates the viewpoint frame of the basic image.
  • a viewpoint (start viewpoint) to start automatic continuous display, a viewpoint to stop (stop viewpoint), and a viewpoint to end (end viewpoint) are set (S500).
  • To set these viewpoints, it is preferable to use a mouse or the like as an input device (not shown) provided on the server 14.
  • the specific method can be calculated using Equations 3 and 4 (described later) in the same manner as the process in S150.
  • For the stop viewpoint, it is preferable to measure the time during which the mouse is stopped at that viewpoint using a timer function (not shown) provided in advance in the server 14.
  • a path for automatic continuous display can be set.
  • By setting the moving speed for each path, the moving speed can be changed from path to path.
  • The moving speed may be input using the keyboard, or the moving speed along the path may be set by associating the moving speed of the mouse with the timer.
  • Next, the automatic continuous display process is executed by the automatic continuation means 17 (S505).
  • Fig. 38 shows a conceptual diagram.
  • First, when the current drawing section position is the start or end viewpoint of a viewpoint section, it is determined whether the stop count (stop time) is 0 (S510). Since the first image to be displayed is the start viewpoint, the stop time is decremented (S515), and during that time the basic image of the start viewpoint is displayed.
  • Next, viewpoint movement is started based on the path defined in the automatic continuation means 17 of the server 14; that is, the process for the case where drawing is not stopped in S510 is executed.
  • The next drawing section position is calculated from the current drawing section position (S520). Naturally, the drawing section position is decreased or increased depending on the direction of the continuous display path.
  • Information on the viewpoint section and the drawing section position is transmitted to the interpolated image creating means 3, and an interpolated image is created (S180).
  • The interpolated image creation process of S180 described above may be used here. If necessary, shadow creation (S260), reflection creation (S270), composite display processing (S280), and position and size correction processing (S290) are also performed.
  • If the next drawing section position exceeds the viewpoint section in S530, the next viewpoint section is set to the adjacent viewpoint section, and the next drawing section position is set to its initial value (start section position or end section position) (S540).
  • It is then determined whether the next viewpoint section exceeds the final viewpoint section of the automatic continuous display (S550). If it does not, an interpolated image is created in S180.
  • If the automatic continuous display is a loop display (that is, when the final viewpoint section is reached, the display returns to the start viewpoint section and is repeated) (S555), the viewpoint section and the initial drawing section position set for the loop are set (S560).
  • If the next viewpoint section does not exceed the final viewpoint section in S550, or once the loop settings of S560 have been made, an interpolated image is created (S180); if loop display has not been set in S555, the automatic continuous display ends (a code sketch of this loop is given below).
  • the setting of the loop display may be specified in advance when setting a path or the like in the automatic continuation means 17 of the server 14 or may be set in the user terminal 15.
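  • The following Python sketch illustrates one possible reading of the S505 to S560 loop described above: walk through the viewpoint sections along the path, pause for the stop time at a section boundary, render an interpolated image at every drawing section position, and restart from the first section when loop display is set. The data structure and the render callback are hypothetical stand-ins for the viewpoint sections and the S180 interpolation.

    from dataclasses import dataclass

    @dataclass
    class ViewpointSection:
        n_positions: int   # number of drawing section positions in the section
        stop_time: int     # frames to pause at the section boundary

    def auto_continuous_display(sections, loop=False, max_cycles=1, render=print):
        cycles, sec = 0, 0
        while sec < len(sections):
            section = sections[sec]
            for _ in range(section.stop_time):        # S510/S515: pause at boundary
                render((sec, 0))                      # keep showing the boundary image
            for pos in range(1, section.n_positions): # S520: next drawing position
                render((sec, pos))                    # S180: interpolated image here
            sec += 1                                  # S530/S540: adjacent section
            if sec >= len(sections):                  # S550: final section reached
                cycles += 1
                if loop and cycles < max_cycles:      # S555/S560: loop display
                    sec = 0

    # Example: two sections, looped twice.
    auto_continuous_display([ViewpointSection(10, 3), ViewpointSection(8, 0)],
                            loop=True, max_cycles=2)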
  • Basic images of the object are captured (S100), common points and planes are defined (S160 and S170), and target frame information is set (S175).
  • the information from S100 to S175 is transmitted to the user terminal 15, and the viewpoint setting is performed in the automatic continuation means 17 provided in the user terminal 15 (S500).
  • The user sets the path and the moving speed of the viewpoint using an input device (not shown), such as a mouse, provided in advance on the user terminal 15.
  • To set the viewpoint, it is preferable to move the viewpoint in accordance with the amount of mouse movement, as described in the basic image capturing process of S100.
  • the specific method can be calculated using Equations 4 and 5 (described later) in the same way as the process in S150.
  • For the stop viewpoint, it is preferable to measure the time during which the mouse is stopped at that viewpoint using a timer (not shown) provided in advance in the user terminal 15.
  • a path for automatic continuous display can be set.
  • By setting the moving speed for each path, the moving speed can be changed from path to path.
  • A flowchart showing an example of the flow of the basic image capturing process is shown in FIG. 8.
  • First, the image creator reads a two-dimensional image of the target object that was captured with a scanner or the like or previously stored as digital data on a computer (S110).
  • Next, the image creator activates the image capturing means 2 (that is, brings the viewpoint definition screen for viewpoint definition, shown in Fig. 17(a), to its initial state).
  • On the viewpoint definition screen, a two-dimensional plane representing a viewpoint is created in a three-dimensional virtual space (this three-dimensional virtual space is assumed to be a sphere centered on the coordinate system origin) (S120).
  • the screen image of the viewpoint definition screen in this state is shown in Fig. 17 (b).
  • The image read in S110 is imported and associated, as the basic image, with the viewpoint created in S120.
  • The screen image of the viewpoint definition screen in this state is shown in Fig. 17(c). Position adjustment of the basic image loaded at the viewpoint, such as up, down, left, and right, is then executed.
  • Fig. 17 shows the screen image of the viewpoint definition screen in this state.
  • the viewpoint is moved to input the next viewpoint.
  • The viewpoint position and line of sight (the direction of the image plane) of each image may be converted into numerical values and entered from the keyboard.
  • However, such numerical keyboard input is complicated for the image creator. Therefore, as shown in Fig. 18, on the sphere centered on the coordinate system origin of the assumed virtual three-dimensional space, the viewpoint is expressed in polar coordinates (θ, φ); the polar coordinates (θ, φ) are calculated according to the amount of movement of an input device (not shown) such as a mouse, and the viewpoint is moved to that position (S150).
  • Figure 19 (a) shows the screen image where the viewpoint has moved in the three-dimensional virtual space.
  • The polar coordinates (θ, φ) are calculated as follows. Assuming that the changes in the x and y coordinates of the mouse in unit time are Δx and Δy, Δφ can be calculated by Equation 4.
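  • Since Equation 4 itself is not reproduced in this text, the sketch below simply assumes a linear mapping from the mouse movement (Δx, Δy) to the angular changes (Δθ, Δφ) and converts the resulting polar coordinates back to a position on the sphere; the sensitivity constants are illustrative.

    import math

    SENS_THETA = 0.01   # radians per unit of horizontal mouse movement (assumed)
    SENS_PHI = 0.01     # radians per unit of vertical mouse movement (assumed)

    def move_viewpoint(theta, phi, dx, dy, radius=1.0):
        # Update the polar coordinates of the viewpoint on the sphere from the
        # mouse deltas, then return the Cartesian position in the virtual space.
        theta += SENS_THETA * dx
        phi = max(-math.pi / 2, min(math.pi / 2, phi + SENS_PHI * dy))
        x = radius * math.cos(phi) * math.cos(theta)
        y = radius * math.sin(phi)
        z = radius * math.cos(phi) * math.sin(theta)
        return theta, phi, (x, y, z)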
  • Two or more viewpoints are created by repeating S110 to S140 again.
  • Screen images in which the second and subsequent viewpoints are created are shown in Fig. 19 (b) to Fig. 19 (d).
  • Although eight viewpoints are provided in the screen image used in this embodiment, any number of viewpoints may be used as long as there are two or more.
  • An arbitrary set of points among the extracted edge points is defined as weak calibration points (hereinafter, Delaunay triangle specified points), which the user specifies.
  • FIG. 45 shows a diagram in which the Delaunay triangle specified points have been specified; in FIG. 45, 11 points are specified.
  • Let q00, q01, and q02 be the vertices of the Delaunay triangle containing the point p0; if the point p0 lies outside the Delaunay triangles, q00, q01, and q02 are defined in the same manner. The unknown scalar parameters u and v are then defined as follows.
  • The corresponding point is corrected by examining the difference in brightness along the epipolar constraint line (this is called a linear search).
  • A search window may be defined around the point, and the position along the line where the difference in brightness from the search window is smallest is sought.
  • The brightness may differ even at corresponding points because of the difference in viewpoint, but the point of minimum relative error can still be found relatively stably.
  • For the luminance difference, not only the average over the entire search window is used; the luminance difference is also calculated for parts of the window, so that the effect of spatial frequency is taken into account.
  • Letting 2h be the width of the search window, the luminance differences C0(x0, y0) and C1(x1, y1) at the coordinates (x0, y0) of the basic image I0 and (x1, y1) of the basic image I1 can be calculated recursively as shown in Fig. 49 (a sketch of the linear search is given below).
  • Fig. 50 shows a diagram when the corresponding points are automatically collated.
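  • A minimal sketch of the linear search described above, assuming NumPy, grayscale images stored as 2-D arrays, and candidate positions sampled along the epipolar line in I1 that lie far enough from the image border for the window to fit: the candidate whose surrounding window differs least in brightness from the window around the point in I0 is kept. The window half-width and the plain sum of absolute differences are simplifications.

    import numpy as np

    def refine_along_epipolar_line(img0, img1, p0, candidates, h=4):
        def window(img, p):
            x, y = int(round(p[0])), int(round(p[1]))
            return img[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)

        w0 = window(img0, p0)
        # Brightness difference over the whole search window; per-part
        # differences (as in the text) could be accumulated in the same way.
        costs = [np.abs(window(img1, q) - w0).sum() for q in candidates]
        return candidates[int(np.argmin(costs))]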
  • In the parallelized coordinates, corresponding points of the basic images I0 and I1 have the same y-coordinate, so the point ps1 that internally divides them in a ratio determined by s can be interpolated as in Equation 8; similarly, the point ps2 that internally divides corresponding points of the basic images I0 and I2 is given by Equation 9. In the same way, the homographies Hs1 and Hs2 can be calculated for the sets of basic images I0, I1 and I0, I2.
  • Let R(d1, θ1) and R(d2, θ2) be the rotation matrices that parallelize I1 and I2, where R(di, θi) denotes a rotation by the angle θi about the axis di; the rotation matrix R(di, θi) is shown in Fig. 41(a).
  • The basic images I0, I1, and I2 are each further rotated by a rotation matrix R(φ); the rotation matrix R(φ) about this axis is shown in Fig. 41(b).
  • A 3 x 3 shear deformation matrix is applied to the basic image I1, and similarly a 3 x 3 shear deformation matrix is applied to the basic image I2.
  • This process is called the parallelizing rotation transformation (a transformation that makes the three images parallel); it performs a horizontal rotation so that the images are arranged on a line. In the case of two images, it is sufficient to perform the transformation so that the y-coordinate of the epipole becomes 0. In the case of three images, however, an epipole is defined for each pair of images, so the above method cannot be used. Therefore, parallelization is first performed for the pair of basic images I0 and I1, and then an appropriate transformation of the basic image I2 into the coordinate system of I0 and I1 is sought.
  • The rotation matrix R(φ) is obtained by substituting the expression of Fig. 42(c) into Fig. 41(b).
  • Next, the interpolation homography Hst (that is, the homography matrix to be finally obtained) at the point that internally divides the basic images I0, I1, and I2 in the ratio (1 - s - t) : s : t is considered; the interpolation of the homography matrix is calculated as follows.
  • the texture mapping in S230 will be described below.
  • A flowchart showing an example of the flow of the texture mapping process is shown in Fig. 9.
  • First, surface selection is executed by the surface selecting means 11 (S300).
  • the surface selection process will be described later.
  • the processing for the contours of the faces is executed in the contour processing means 12 (S360).
  • the process of the contour processing will be described later.
  • This contour processing compensates for the coarse polygonal approximation of a curved surface that is made when the surfaces are defined.
  • Next, the color correction means 13 executes color correction processing for the surfaces (S410).
  • When texture images are selected in S300, adjacent surfaces may take their textures from different images; this process eliminates the resulting color difference between adjacent surfaces so that the image becomes closer to a natural image.
  • The color correction process is described later. It is preferable that both the contour processing of S360 and the color correction processing of S410 are provided in the texture mapping process, but only one of them, or neither, may be provided. <Surface selection process>
  • FIG. 10 is a flowchart showing an example of the flow of the surface selection process.
  • The surface selecting means 11 selects which image is to be used as the texture image for each surface.
  • the texture image reference number for each surface is determined once by preprocessing. Which of the two basic images is to be used is determined by the following two criteria.
  • Figure 25 shows a conceptual diagram of the texture selection criteria. (Criterion 1) Surface orientation
  • The area of the surface in each image is calculated (S320), and the image in which the area is larger is used as the texture image of the surface (S330).
  • That is, as shown for surface 2 in FIG. 25, when the surface faces forward in one image and backward in the other, the image in which it faces forward is used as the texture image of that surface (S340).
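  • A minimal sketch of Criterion 1 (S320 to S330), assuming each surface is given as its projected vertex coordinates in the two candidate images; the area is computed with the shoelace formula and the image with the larger projected area supplies the texture.

    def polygon_area(points):
        # Shoelace formula for the area of a 2-D polygon given as (x, y) tuples.
        area = 0.0
        n = len(points)
        for k in range(n):
            x0, y0 = points[k]
            x1, y1 = points[(k + 1) % n]
            area += x0 * y1 - x1 * y0
        return abs(area) / 2.0

    def select_texture_image(surface_in_img0, surface_in_img1):
        # Criterion 1: use the image in which the projected surface has the
        # larger area (S320-S330).  Returns 0 or 1 as the texture reference.
        return 0 if polygon_area(surface_in_img0) >= polygon_area(surface_in_img1) else 1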
  • In the surface selecting means 11, the texture image need not simply be chosen from the two images; the two images may also be combined in accordance with the viewpoint to create a new image, and that image may be selected as the texture image. This is because, in the surface selection method described above, one image is chosen from the two, so the texture image may switch frequently as the viewpoint moves.
  • Combining two images according to the viewpoint means the following: let A be the brightness of image A, B be the brightness of image B, and d (0 ≤ d ≤ 1) be the ratio of the distance from image B to the viewpoint, which is located between image A and image B; the brightness S of the combined image is then calculated as S = dA + (1 - d)B. For example, when the viewpoint is 10% of the way from A and 90% from B, the brightness of the combined image is S = 0.9A + 0.1B; when the viewpoint is midway between the two, S = 0.5A + 0.5B.
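  • A minimal NumPy sketch of this viewpoint-dependent blending, with d taken (as above) as the normalized distance from image B to the viewpoint:

    import numpy as np

    def blend_textures(img_a, img_b, d):
        # S = d*A + (1 - d)*B: equal to image A when the viewpoint reaches A
        # (d = 1) and to image B when it reaches B (d = 0).
        a = img_a.astype(np.float64)
        b = img_b.astype(np.float64)
        return (d * a + (1.0 - d) * b).astype(img_a.dtype)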
  • FIG. 11 is a flowchart showing an example of the flow of the contour processing process.
  • this is realized by the following method capable of creating a more natural image than the conventional method.
  • A mask is created in the original image to erase everything other than the object, a new quadrilateral surface is added outside the contour, and the masked texture is mapped onto it.
  • The mask can be created by manually painting out the background of the original image using image editing software, as shown in Fig. 26; if the background is a single color, such as a blue screen, it can also be extracted automatically.
  • the quadrilateral outside the plane is called the offset plane.
  • Faces that share a ridge line are called adjacent faces.
  • Edges with no adjacent surfaces are called external boundary edges
  • Edges whose adjacent faces include one facing forward and one facing backward are defined as internal boundary edges.
  • the external and internal boundary edges are collectively called boundary edges.
  • the point at which the vertex coordinates of the surface are averaged is taken as the center of gravity of the surface
  • the outward unit vector that passes through the center of gravity and is perpendicular to the ridgeline is the ridgeline offset vector.
  • The unit vector obtained by averaging the offset vectors of the boundary ridge lines that include the vertex is called the vertex offset vector.
  • The point obtained by moving the vertex in the direction of its vertex offset vector is called the offset vertex.
  • The process of displaying the offset surface is as follows. First, the ridge line offset vector is calculated for each image (S370). The calculated ridge line offset vectors are averaged to calculate the vertex offset vectors (S380). For the created interpolated image, an offset surface composed of the vertices at both ends of each boundary ridge line and the vertices moved from them in the offset vector direction is displayed (S390). Thereafter, the normal surfaces are displayed (S400).
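  • The sketch below illustrates S370 and S380 for a single face in 2-D image coordinates, assuming NumPy; the face is given as a vertex array, its boundary ridge lines as index pairs, and the offset length is an illustrative constant.

    import numpy as np

    def offset_vertices(vertices, boundary_edges, centroid, offset=0.05):
        # vertices: (N, 2) NumPy array; boundary_edges: list of (i, j) index
        # pairs; centroid: center of gravity of the face.
        edge_offsets = {}
        for i, j in boundary_edges:
            edge = vertices[j] - vertices[i]
            # S370: outward unit vector perpendicular to the ridge line,
            # chosen so that it points away from the center of gravity.
            normal = np.array([-edge[1], edge[0]], dtype=float)
            normal /= np.linalg.norm(normal)
            midpoint = (vertices[i] + vertices[j]) / 2.0
            if np.dot(normal, midpoint - centroid) < 0:
                normal = -normal
            edge_offsets[(i, j)] = normal

        adjacent = {}
        for (i, j), n in edge_offsets.items():
            adjacent.setdefault(i, []).append(n)
            adjacent.setdefault(j, []).append(n)

        moved = {}
        for v, normals in adjacent.items():
            # S380: vertex offset vector = average of the adjacent ridge offsets.
            avg = np.mean(normals, axis=0)
            avg /= np.linalg.norm(avg)
            moved[v] = vertices[v] + offset * avg
        return moved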
  • FIG. 12 is a flowchart showing an example of the flow of the color correction process.
  • When a texture image is selected by the surface selecting means 11 in S300, different images may be selected for adjacent surfaces, so the color difference at their boundary may be conspicuous and the image may look unnatural.
  • the color difference is corrected by mixing two texture images.
  • the mixing ratio is attenuated according to the distance from the ridgeline.
  • This process is calculated only once for the texture image and is executed as follows. First, for each pixel inside the surface, the distance from the pixel to each ridge line whose adjacent surface has a different texture image reference number is determined (S420). Next, let wi be the weight of ridge line i at the pixel, di the shortest distance from the pixel to ridge line i, and r the distance from the center of gravity of the surface to the ridge line; these are calculated (S430), and the weight wi of the ridge line is calculated from Equation 11 (S440).
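  • Equation 11 itself is not reproduced in this text; as an illustrative assumption, the sketch below lets a ridge line's weight decay linearly with the pixel's distance di from that ridge line, vanish at the distance r of the center of gravity, and then normalizes the weights.

    import numpy as np

    def ridge_weights(distances, r):
        # distances: shortest distances d_i from the pixel to each relevant
        # ridge line; r: distance from the face's center of gravity to the
        # ridge line.  Linear decay is an assumption standing in for Equation 11.
        d = np.asarray(distances, dtype=float)
        w = np.clip(1.0 - d / r, 0.0, None)
        total = w.sum()
        return w / total if total > 0 else w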
  • Let N be a 2D affine transformation matrix with six degrees of freedom, T the transformation matrix that translates m1 to the origin, R the rotation matrix about the origin that makes m1m2 the X axis, S the shear deformation matrix, a the displacement in the x direction, and b the displacement in the y direction, where the two points m1 and m2 move to m'1 and m'2, respectively. The affine transformation matrix N is then represented by the equation shown in the figure. Fig. 32 is a conceptual diagram showing the process of calculating the shadow: using the surface information obtained at projective reconstruction, the transformed vertices are drawn and a semi-transparent surface is displayed to create the shadow. Naturally, the projected object is displayed after the shadow display processing. <Reflection creation process>
  • The reflection image is also created using the affine transformation matrix of the projectively reconstructed object, in the same way as the shadow. Since a two-dimensional affine transformation has six degrees of freedom, the transformation matrix can be determined by specifying two points of the object on the reflecting floor and one point that appears reflected.
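  • Because a 2-D affine transformation has six degrees of freedom, three point correspondences (for the reflection, two points of the object on the floor and one reflected point, each paired with its image under the transformation) determine it. The NumPy sketch below solves for the 3 x 3 homogeneous matrix N; it is a generic solver, not the specific construction via T, R, and S described above.

    import numpy as np

    def affine_from_3_points(src, dst):
        # src, dst: (3, 2) arrays of non-collinear points; returns the 3x3
        # homogeneous matrix N such that dst_h = N @ src_h.
        src = np.asarray(src, dtype=float)
        dst = np.asarray(dst, dtype=float)
        A = np.zeros((6, 6))
        b = np.zeros(6)
        for k, ((x, y), (xp, yp)) in enumerate(zip(src, dst)):
            A[2 * k] = [x, y, 1, 0, 0, 0]
            A[2 * k + 1] = [0, 0, 0, x, y, 1]
            b[2 * k], b[2 * k + 1] = xp, yp
        p = np.linalg.solve(A, b)
        return np.array([[p[0], p[1], p[2]],
                         [p[3], p[4], p[5]],
                         [0.0, 0.0, 1.0]])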
  • Fig. 13 shows an example of the process flow of the position and size correction process.
  • In general, the distance from the camera to the target object and the position of the target object in the basic image differ from viewpoint to viewpoint. If interpolated images are created continuously while these differ, displacements of the object other than the viewpoint movement occur: the object appears to be enlarged or reduced, or moves up and down instead of rotating around a fixed point. This correction solves those problems.
  • Information indicating the range of the target object (the top, left, bottom, and right position coordinates in the basic image, hereinafter referred to as target frame information) is set when the two-dimensional images are imported in S100.
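  • A minimal sketch of how the target frame information could drive the position and size correction, assuming the frame is given as (top, left, bottom, right) coordinates: the scale that matches the frame height of a reference view and the translation that aligns the frame centers are computed and applied to image coordinates. Matching the height and the centers is an illustrative assumption.

    import numpy as np

    def frame_correction(frame, reference_frame):
        # frame, reference_frame: (top, left, bottom, right) target frames.
        top, left, bottom, right = frame
        r_top, r_left, r_bottom, r_right = reference_frame
        scale = (r_bottom - r_top) / (bottom - top)      # match the frame height
        # Translate so that the frame centers coincide after scaling.
        cx, cy = (left + right) / 2.0, (top + bottom) / 2.0
        r_cx, r_cy = (r_left + r_right) / 2.0, (r_top + r_bottom) / 2.0
        tx, ty = r_cx - scale * cx, r_cy - scale * cy
        return scale, (tx, ty)

    def apply_correction(points, scale, t):
        # Apply the correction to an (N, 2) array of image coordinates.
        return scale * np.asarray(points, dtype=float) + np.asarray(t)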
  • The functions of the present embodiment can also be realized by supplying the system with a storage medium storing a software program that implements them, and having a computer of the system read and execute the program stored in the storage medium.
  • the program itself read from the storage medium realizes the functions of the above-described embodiment, and the storage medium storing the program naturally constitutes the present invention.
  • a storage medium for supplying the program code for example, a floppy disk, a hard disk, an optical disk, a magneto-optical disk, a magnetic tape, a nonvolatile memory card, or the like can be used.
  • As described above, the amount of data is small and the problems that have existed until now are solved, thereby enabling a more realistic display.
  • moving images can be created with a smaller data amount than conventional moving image data.
  • The path and the moving speed of the viewpoint can be set arbitrarily, which makes it possible to express more interactive moving images.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to an image display system and method for displaying, on a computer, an image of an object viewed stereoscopically from an arbitrary viewpoint. The image display system for creating an interpolated image of an object directly from a viewpoint desired by the user comprises a server having means for capturing two-dimensional images of the object from a plurality of viewpoints, and a user terminal for transmitting data to and receiving data from the server via a network. The user terminal has means for receiving the two-dimensional images from the server in order to create the interpolated image of the object directly from the viewpoint desired by the user.
PCT/JP2001/008677 2001-02-26 2001-10-02 Systeme d'affichage d'image et procede associe WO2002069277A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2001050353 2001-02-26
JP2001-50353 2001-02-26

Publications (1)

Publication Number Publication Date
WO2002069277A1 true WO2002069277A1 (fr) 2002-09-06

Family

ID=18911320

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2001/008677 WO2002069277A1 (fr) 2001-02-26 2001-10-02 Systeme d'affichage d'image et procede associe

Country Status (1)

Country Link
WO (1) WO2002069277A1 (fr)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10240967A (ja) * 1997-02-21 1998-09-11 Go Jo モデル画像を用いた3次元物体のグラフィクスアニメーション装置及び方法
JP2001067463A (ja) * 1999-06-22 2001-03-16 Nadeisu:Kk 異なる視点からの複数のフェイシャル画像に基づき新たな視点からのフェイシャル画像を生成するフェイシャル画像生成装置及び方法並びにその応用装置及び記録媒体

Similar Documents

Publication Publication Date Title
JP4548840B2 (ja) 画像処理方法、画像処理装置、画像処理方法のプログラムおよびプログラム記録媒体
Sullivan et al. Automatic model construction and pose estimation from photographs using triangular splines
JP5011168B2 (ja) 仮想視点画像生成方法、仮想視点画像生成装置、仮想視点画像生成プログラムおよびそのプログラムを記録したコンピュータ読み取り可能な記録媒体
US7463269B2 (en) Texture data compression and rendering in 3D computer graphics
KR101560508B1 (ko) 3차원 이미지 모델 조정을 위한 방법 및 장치
JP3876142B2 (ja) 画像表示システム
JP3524147B2 (ja) 3次元画像表示装置
JP2000067267A (ja) 三次元シーンにおける形状及び模様の復元方法及び装置
JP3104638B2 (ja) 3次元画像作成装置
KR20190062102A (ko) 비디오 영상기반 2d/3d ar 실감체험 방법 및 장치
EP1443464A2 (fr) Représentation d'objects d'un point de vue prédéterminé utilisant des micro-facettes
JPH09330423A (ja) 三次元形状データ変換装置
JP4370672B2 (ja) 三次元画像生成装置および三次元画像生成方法、並びにプログラム提供媒体
JP6719596B2 (ja) 画像生成装置、及び画像表示制御装置
JP2010152529A (ja) 頂点テクスチャマッピング装置及びプログラム
WO2002069277A1 (fr) Systeme d'affichage d'image et procede associe
JP3309841B2 (ja) 合成動画像生成装置および合成動画像生成方法
CN108429889A (zh) 一种高光谱十亿像素视频生成方法
JP4308367B2 (ja) 3次元画像生成装置および環境マップの生成方法
JP2003337953A (ja) 画像処理装置および画像処理方法、並びにコンピュータ・プログラム
JP4168103B2 (ja) テクスチャ画像の生成・マッピングシステム
Jankó et al. Creating entirely textured 3d models of real objects using surface flattening
JPH06259571A (ja) 画像合成装置
Heng et al. Keyframe-based texture mapping for rgbd human reconstruction
JP2004227095A (ja) テクスチャマップ作成方法、テクスチャマップ作成用プログラムおよびテクスチャマップ作成装置

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase