WO2023084783A1 - Projection program, projection method, projection system, and computer-readable medium - Google Patents
Projection program, projection method, projection system, and computer-readable medium
- Publication number
- WO2023084783A1 (PCT/JP2021/041948)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- view volume
- projection
- view
- volume
- shape
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/30—Clipping
Definitions
- the present disclosure relates to a projection program, and more particularly to projection using a virtual camera in virtual space.
- Patent Document 1 discloses a technique related to a virtual camera for displaying a three-dimensional model drawn in a virtual space to a user.
- as shown in Patent Document 1, CG animation includes a design process in which a three-dimensional model to be imaged is designed, followed by a projection process in which the three-dimensional model is captured by a virtual camera and projected as a two-dimensional image.
- a 3D model is designed, for example, by a designer using CG animation creation software.
- the present disclosure has been made to solve such problems, and its purpose is to project, in CG animation, images with expressions that cannot occur in the real world, without redesigning an already designed three-dimensional model.
- a projection program in the present disclosure is a projection program that projects an object arranged in a three-dimensional virtual space.
- a projection program according to the present disclosure causes a computer to execute the steps of: setting coordinates of a first view volume in the virtual space; setting coordinates of a second view volume, different from the first view volume, in the virtual space; transforming the first view volume into a first shape; transforming the second view volume into the first shape; combining the transformed first view volume and the transformed second view volume to generate a first projection view volume; and projecting an image based on the first projection view volume.
- a projection method is a projection method for projecting an object arranged in a three-dimensional virtual space.
- the projection method includes the steps of: setting coordinates of a first view volume in the virtual space; setting coordinates of a second view volume, different from the first view volume, in the virtual space; transforming the first view volume into a first shape; transforming the second view volume into the first shape; combining the transformed first view volume and the transformed second view volume to generate a first projection view volume; and projecting an image based on the first projection view volume.
- a projection system is a projection system that projects an object arranged in a three-dimensional virtual space.
- a projection system in the present disclosure comprises a memory and a processor.
- when executing computer-executable instructions stored in the memory, the processor performs: setting coordinates of a first view volume in the virtual space; setting coordinates of a second view volume, different from the first view volume, in the virtual space; transforming the first view volume into a first shape; transforming the second view volume into the first shape; combining the transformed first view volume and the transformed second view volume to generate a first projection view volume; and projecting an image based on the first projection view volume.
- a computer-readable medium in the present disclosure includes computer-executable instructions that, when executed by a processor, cause the processor to execute a projection method for projecting an object placed in a three-dimensional virtual space.
- the projection method executed via the computer-readable medium according to the present disclosure includes the steps of: setting coordinates of a first view volume in the virtual space; setting coordinates of a second view volume, different from the first view volume, in the virtual space; transforming the first view volume into a first shape; transforming the second view volume into the first shape; combining the transformed first view volume and the transformed second view volume to generate a first projection view volume; and projecting an image based on the first projection view volume.
- the projection program according to the present disclosure deforms the first view volume and the second view volume in the virtual space into the same shape (the first shape) and combines them into one projection view volume. Since a single space obtained by transforming and combining two different spaces is set as the imaging range, an image can be projected based on a space that cannot be realized in the real world. With such a configuration, therefore, CG animation can project an image that cannot occur in the real world without redesigning a designed three-dimensional model.
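- as a rough illustration of these claimed steps, the following numpy sketch transforms two view volumes into one shared cuboid shape and stacks them into a single projection view volume (the function name, the normalization formula, and all coordinates are illustrative assumptions, not part of the disclosure):

```python
import numpy as np

def to_first_shape(points, near, far):
    """Illustrative "transform into the first shape": divide the lateral
    coordinates by depth and remap depth to [0, 1]. This is a standard
    perspective normalization; the disclosure does not prescribe this
    exact formula."""
    x, y, z = points.T                    # imaging direction is +Y, lateral X/Z
    depth = (y - near) * far / ((far - near) * y)
    return np.stack([x / y, depth, z / y], axis=1)

# The claimed steps: set two view volumes, transform both into one shape,
# combine them into a projection view volume, then project.
v1_pts = np.array([[0.2, 2.0, 0.1]])      # vertices inside the first view volume
v2_pts = np.array([[1.0, 8.0, -0.4]])     # vertices inside the second view volume
c1 = to_first_shape(v1_pts, near=1.0, far=5.0)
c2 = to_first_shape(v2_pts, near=5.0, far=20.0)
pv = np.concatenate([c1, c2 + np.array([0.0, 1.0, 0.0])])  # first projection view volume
image_xy = pv[:, [0, 2]]                  # project: view the volume from the -Y side
```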
- FIG. 1 is a block diagram of an information processing device that executes a projection program according to Embodiment 1.
- FIG. 2 is a diagram for explaining projection by the projection program in Embodiment 1.
- FIG. 3 is a perspective view of two view frustums that are the imaging ranges of a virtual camera according to Embodiment 1.
- FIG. 4 is a plan view of the view frustums from the positive direction side of the Z-axis.
- FIG. 5 is a conceptual diagram for explaining perspective projection transformation based on two view frustums.
- FIG. 6 is a flowchart showing the procedure of projection processing based on the projection program in Embodiment 1.
- FIG. 7 is a diagram for explaining projection using only a virtual camera of a comparative example.
- FIG. 8 is a diagram for explaining projection using only a virtual camera of a comparative example.
- FIG. 9 is a perspective view of three view frustums that are the imaging ranges of the virtual camera in Embodiment 2.
- FIG. 10 is a plan view of the near clip plane and the far clip plane shown in FIG. 9.
- FIG. 11 is a conceptual diagram for explaining perspective projection transformation based on three view frustums.
- FIG. 12 is a diagram for explaining projection using the projection program in Embodiment 2.
- FIG. 13 is a diagram for explaining projection using only a virtual camera of a comparative example.
- FIG. 14 is a perspective view of two view frustums that are the imaging ranges of the virtual camera in Embodiment 3.
- FIG. 15 is a diagram for explaining projection by the projection program according to Embodiment 3.
- FIG. 16 is a diagram showing an example in which a view frustum is rotated with respect to the imaging direction.
- FIG. 17 is a diagram for explaining parallel projection.
- FIG. 18 is a first diagram for explaining objects arranged in a virtual space.
- FIG. 19 is a second diagram for explaining objects arranged in a virtual space.
- FIG. 20 is a third diagram for explaining objects arranged in a virtual space.
- FIG. 21 is an image projected by the view volume indicated by a dashed line.
- FIG. 22 is an image projected by the view volume of a first comparative example.
- FIG. 23 is an image projected by the view volume of a second comparative example.
- FIG. 24 is an image projected by the view volume of a third comparative example.
- FIG. 1 is a block diagram of an information processing device 100 that executes a projection program 10 according to Embodiment 1.
- the information processing device 100 is typically a general-purpose PC (desktop or notebook computer), a smartphone, a tablet terminal, or the like.
- the configuration of the information processing device 100 will be described below.
- the information processing apparatus 100 includes a CPU 101, an input interface (I/F) 102, an output interface (I/F) 103, a storage device 104, a main memory 105, and a GPU 106.
- the CPU 101 comprehensively controls the information processing apparatus 100.
- the CPU 101 corresponds to the “computer” or “processor” of the present disclosure.
- CPU 101 is connected to input device 200 via input interface (I/F) 102.
- CPU 101 is connected to display device 300 via output interface (I/F) 103.
- the input device 200 and the display device 300 are connected to the information processing device 100 by wire or wirelessly.
- Input device 200 is typically a keyboard, mouse, or the like.
- Display device 300 is typically a display. Input device 200 and display device 300 may be provided integrally, for example, as a touch panel.
- the storage device 104 is typically a non-volatile memory such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive).
- CG animation creation software 11 and projection program 10 are stored in storage device 104.
- the CG animation creation software 11 and the projection program 10 may be stored in a server that can communicate with the information processing apparatus 100, or may be stored in an external memory, such as an SD card, that is removable from the information processing apparatus 100.
- the CG animation creation software 11 is software such as Maya (registered trademark) or Unity that can execute the design process and projection process of CG animation.
- the projection program 10 is a program for projecting objects arranged in a virtual space in the CG animation creation software 11 as a two-dimensional moving image. That is, the information processing apparatus 100 can execute the projection program 10 while the CG animation creation software 11 is running. Note that the projection program 10 may be provided as an external program separate from the CG animation creation software 11.
- the main memory 105 is a volatile memory.
- the main memory 105 is used as a work memory or a buffer memory for the CPU 101.
- the main memory 105 also stores image generation data (polygon data, texture data, etc.) necessary for the GPU 106 to execute graphics commands (drawing instructions).
- the main memory 105 stores image data for one frame of the display device 300, for example.
- the GPU 106 rewrites the image data stored in the main memory 105 every frame (for example, 1/60 second). Specifically, the main memory 105 stores image color information for each pixel.
- Main memory 105 may include VRAM, which is memory dedicated to displaying moving images.
- the GPU 106 generates image data according to graphics commands from the CPU 101 .
- the GPU 106 performs calculation processing necessary for displaying 3D graphics according to graphics commands, for example, processing such as coordinate conversion from 3D coordinates to 2D coordinates, which is preprocessing for rendering, and final rendering processing such as texture mapping.
- FIG. 2 is a diagram for explaining projection by the projection program 10 in the first embodiment.
- FIG. 2A is a plan view of the virtual space on the CG animation creation software 11.
- FIG. 2(B) is a diagram showing an image IM1 obtained by projecting the three-dimensional model P1 captured by the virtual camera DC shown in FIG. 2(A).
- the virtual space prepared in the CG animation creation software 11 such as Maya is a three-dimensional space represented by the X, Y, and Z axes.
- a user can design three-dimensional models of various shapes in the virtual space.
- Each independent three-dimensional model placed in the virtual space is called an object.
- a projection program 10 according to Embodiment 1 projects an object placed in a virtual space as a two-dimensional image.
- FIG. 2(A) shows a plan view of the XY plane of the virtual space indicated by the X, Y, and Z axes. Objects forming a floor (ground) are spread out on the XY plane of the virtual space.
- a three-dimensional model P1 is arranged on the XY plane.
- the three-dimensional model P1 is a humanoid object wearing a suit. As shown in FIGS. 2A and 2B, the three-dimensional model P1 has a head H, right hand Rh, left hand Lh, right foot Rf, and left foot Lf.
- a virtual camera DC is arranged in the virtual space for capturing an image of the three-dimensional model P1.
- a virtual camera DC is an object displayed in the virtual space by executing the projection program 10 in the first embodiment.
- the virtual camera DC in Embodiment 1 has an imaging range that is within the two view frustums V1 and V2.
- a viewing frustum means a space in the shape of a truncated quadrangular pyramid for projecting a two-dimensional image with a sense of perspective when projecting an object in a virtual space as an image.
- a region within the viewing frustum means a closed space surrounded by edges (sides) forming the viewing frustum. The appearance of the projected image differs depending on the shape of the viewing frustum.
- a camera that captures a virtual space using a viewing frustum is generally called a perspective camera.
- ordinarily, a virtual camera that captures an image of a virtual space uses a single view frustum to capture an image.
- the view frustum V1 in Embodiment 1 corresponds to the "first view volume” in the present disclosure.
- the view frustum V2 in Embodiment 1 corresponds to the "second view volume” in the present disclosure.
- a view volume is a space indicating an imaging range, and is also called a viewing volume.
- Each of the view frustums V1 and V2 in Embodiment 1 has the shape of a truncated quadrangular pyramid.
- FIG. 2A shows only one of the trapezoidal side surfaces of each of the view frustums V1 and V2.
- FIG. 3 is a perspective view of two view frustums V1 and V2 that are imaging ranges of the virtual camera DC according to the first embodiment.
- the view frustum V1 is arranged at a position closer to the virtual camera DC than the view frustum V2. That is, viewed from the negative direction side of the Y-axis toward the positive direction, the virtual camera DC, the view frustum V1, and the view frustum V2 are arranged in this order.
- the three-dimensional model P1 is arranged in regions within the view frustums V1 and V2.
- the head H, right hand Rh, left hand Lh, and left foot Lf are arranged in a region within the viewing frustum V1, and only the right foot Rf is arranged in a region within the viewing frustum V2.
- FIG. 2(B) shows an image IM1 of the three-dimensional model P1 captured by the virtual camera DC in FIG. 2(A).
- the image IM1 is two-dimensional image data without three-dimensional information such as coordinates.
- the image IM1 is generated by the CPU 101 executing the projection program 10.
- the display device 300 displays the generated image IM1.
- the image IM1 displays the head H, right hand Rh, left hand Lh, right foot Rf, and left foot Lf of the three-dimensional model P1.
- the imaging range of the virtual camera DC in Embodiment 1 is the area within the view frustum V1 and the area within the view frustum V2. In other words, an object that is not placed in either the area within the view frustum V1 or the area within the view frustum V2 is not captured and is not displayed in the image IM1.
- the viewing frustum V1 is a truncated quadrangular pyramid having the near clip plane NC1 as its front surface and the far clip plane FC1 as its back surface.
- the viewing frustum V2 is a truncated quadrangular pyramid having the near clip plane NC2 as its front surface and the far clip plane FC2 as its back surface.
- the near clip plane NC2 of the view frustum V2 is the same plane as the far clip plane FC1 of the view frustum V1.
- the far clip plane FC1 and the near clip plane NC2 may be collectively referred to as "intermediate clip plane IC".
- the near clip surface NC1 and the far clip surface FC1 are surfaces facing each other and parallel to the XZ plane.
- the near clip surface NC1 and the far clip surface FC1 are rectangular and similar.
- the area of the near clip plane NC1 is smaller than the area of the far clip plane FC1.
- the near clip surface NC2 and the far clip surface FC2 are surfaces facing each other and parallel to the XZ plane.
- the near clip surface NC2 and the far clip surface FC2 are rectangular and similar.
- the area of the near clip plane NC2 is smaller than the area of the far clip plane FC2.
- the viewing frustum V1 shown in FIG. 3 is a regular truncated quadrangular pyramid.
- the viewing frustum V2 shown in FIG. 3 is also a regular truncated quadrangular pyramid.
- each of the near clip plane NC1 and near clip plane NC2 may be simply referred to as “near clip plane NC”.
- similarly, each of the far clip plane FC1 and the far clip plane FC2 may be simply referred to as “far clip plane FC”.
- the area of the near clip surface NC1 is the smallest and the area of the far clip surface FC2 is the largest. That is, the area of the intermediate clip plane IC is larger than the area of the near clip plane NC1 and smaller than the area of the far clip plane FC2.
- a clipping plane FCZ indicated by a dashed line indicates a position when the far clipping plane FC1, which is the back surface of the viewing frustum V1, is arranged on the same plane as the far clipping plane FC2 of the viewing frustum V2.
- a direction CD indicates the imaging direction of the virtual camera DC.
- a direction CD is a direction parallel to the positive direction of the Y-axis.
- FIG. 4 is a plan view of the view frustum V1 and the view frustum V2 from the positive direction side of the Z axis.
- in FIG. 2(A), only one virtual camera DC is displayed in the virtual space, but in the internal processing of the projection program 10, one virtual camera is arranged for each of the view frustums V1 and V2. That is, the CPU 101 arranges the virtual camera C1 and the virtual camera C2 in the virtual space based on the projection program 10, but causes the display device 300 to display only the virtual camera C1.
- the projection program 10 can display the two view frustums V1 and V2 as imaging ranges corresponding to one virtual camera DC.
- the virtual camera C1 is a virtual camera corresponding to the view frustum V1.
- the virtual camera C2 is a virtual camera corresponding to the view frustum V2.
- the virtual camera C1 is arranged at the same position as the virtual camera DC shown in FIG. 2(A). That is, the direction CD1, which is the imaging direction of the virtual camera C1, is the same direction as the direction CD, which is the imaging direction of the virtual camera DC. Also, the direction CD2, which is the imaging direction of the virtual camera C2, is the same direction as the direction CD.
- the angle of view Ag1 is the angle of view of the virtual camera C1.
- the angle of view Ag2 is the angle of view of the virtual camera C2. In the example shown in FIG. 4, the angle of view Ag2 is larger than the angle of view Ag1. If a perspective camera with a large angle of view is used, a stereoscopic image with perspective is projected. On the other hand, if a perspective camera with a small angle of view is used, a planar image without perspective is projected.
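- the effect of the angle of view can be sketched numerically as follows (a hedged illustration; the object size, depth, and angle values are arbitrary assumptions, not values from the disclosure):

```python
import numpy as np

def apparent_half_height(object_half_height, depth, fov):
    """Apparent size (in normalized screen units) of an object at the given
    depth, for a perspective camera with vertical angle of view fov."""
    return object_half_height / (depth * np.tan(fov / 2.0))

# The same object projects larger under a narrow angle of view (like Ag1)
# than under a wide one (like Ag2), so the choice of frustum changes the
# perspective of whatever falls inside it.
print(apparent_half_height(1.0, 10.0, np.radians(30)))  # narrow: ~0.37
print(apparent_half_height(1.0, 10.0, np.radians(90)))  # wide:   0.10
```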
- FIG. 5 is a conceptual diagram for explaining perspective projection transformation based on two view frustums V1 and V2.
- Perspective projection transformations are commonly used to project objects placed within the region of a single viewing frustum as a two-dimensional image.
- the CPU 101 synthesizes the view frustum V1 and the view frustum V2 according to the projection program 10, and performs perspective projection conversion.
- a view frustum V1 is shown in FIG. 5(A).
- the CPU 101 normalizes the view frustum V1 as shown in FIG. 5(B).
- the normalization process is, for example, a preparatory process that moves the origin of the XYZ axes to the center of the view frustum V1 in order to reduce the processing load when converting the view frustum V1 into a two-dimensional image.
- the CPU 101 transforms the shape of the view frustum V1 into a rectangular parallelepiped shape NV1.
- when the view frustum V1 is deformed, the shape of an object placed in the region within the viewing frustum V1 is also deformed. Specifically, the degree of deformation of an object placed near the near clip plane NC1 is greater than the degree of deformation of an object placed near the far clip plane FC1.
- as a result, the size of an object near the virtual camera is enlarged, so a stereoscopic image with a sense of perspective can be projected, as if it were captured by a real camera.
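- in the same illustrative spirit as the earlier sketch, the depth dependence of this deformation follows from the 1/depth lateral scaling used in perspective normalization (the depth values below are assumptions):

```python
# The lateral scale applied during normalization is proportional to
# 1 / depth, so geometry near the near clip plane NC1 (depth 1.0 here) is
# magnified five times more than geometry near the far clip plane FC1
# (depth 5.0): the depth-dependent deformation described above.
for depth in (1.0, 2.5, 5.0):      # near plane, midway, far plane of V1
    print(depth, 1.0 / depth)      # lateral scale factor at that depth
```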
- FIG. 5(C) shows the view frustum V2.
- the CPU 101 normalizes the view frustum V2 according to the projection program 10, and transforms the shape of the view frustum V2 into a shape NV1 as shown in FIG. 5(D).
- Shape NV1 is a cuboid shape.
- the CPU 101 combines the view frustum V1 transformed into the shape NV1 and the view frustum V2 transformed into the shape NV1 according to the projection program 10. As shown in FIG. 5(E), the CPU 101 superimposes the two transformed view frustums. That is, the CPU 101 adjusts the arrangement of the view frustum V1 of the shape NV1 and the view frustum V2 of the shape NV1 so that the far clip plane FC1 and the near clip plane NC2 have the same coordinates.
- the view volume obtained by combining the view frustum V1 and the view frustum V2 of the shape NV1 is referred to as the "projection view volume PV". Since the view frustum V1 and the view frustum V2 are deformed into the same shape NV1, when the projection view volume PV is viewed in plan from the positive direction side of the Y-axis, only the deformed far clip plane FC2 can be seen. Likewise, when the projection view volume PV is viewed in plan from the negative direction side of the Y-axis, only the deformed near clip plane NC1 can be seen.
- the CPU 101 executes projection processing for projecting a two-dimensional image onto the projection view volume PV. Before executing the projection process, the CPU 101 deforms the shape of the projection view volume PV again based on the size of the image to be output.
- the CPU 101 projects, as an image, how the objects appear when the projection view volume PV is viewed in plan from the negative direction side of the Y-axis. In this manner, the CPU 101 can project the objects within the two view frustums V1 and V2 as one image according to the projection program 10.
- unlike normal perspective projection conversion of a single view frustum, the projection program 10 according to the first embodiment performs the image projection processing after combining a plurality of view frustums.
- FIG. 6 is a flow chart showing the procedure of projection processing based on the projection program 10 according to the first embodiment.
- the CPU 101 executes the flowchart shown in FIG. 6 according to the projection program 10.
- the CPU 101 sets the coordinates and imaging directions of the virtual cameras C1 and C2 in the virtual space within the CG animation creation software 11 (step S1). For example, the CPU 101 sets the coordinates and imaging directions of the virtual cameras C1 and C2 based on input from a user using the CG animation creation software 11.
- the CPU 101 sets the coordinates of the near clip planes NC1 and NC2 based on the coordinates of the virtual camera C1 and the virtual camera C2 (step S2).
- the coordinates of the near clip plane NC are the coordinates of the four corners of the near clip plane, which is a rectangular surface.
- the CPU 101 may set the coordinates of the near-clip plane based on a relative distance from predetermined camera coordinates, or may set the coordinates based on an input from the user.
- the CPU 101 sets the coordinates of the far clip planes FC1 and FC2 based on the coordinates of the virtual cameras C1 and C2 (step S3).
- the coordinates of the far clip plane FC are the coordinates of the four corners of the far clip plane, which is a rectangular plane.
- the CPU 101 may set the coordinates of the far clip plane based on a relative distance from predetermined camera coordinates, or based on an input from the user.
- the CPU 101 sets the coordinates of the view frustum V1 and the view frustum V2 in the virtual space (step S4).
- the CPU 101 calculates the shape of the view frustum V1 based on the coordinates and imaging direction of the virtual camera C1, the coordinates of the near clip plane NC1, and the coordinates of the far clip plane FC1.
- the CPU 101 sets the coordinates of the view frustum V1 based on the calculation result.
- the CPU 101 calculates the shape of the view frustum V2 based on the coordinates and imaging direction of the virtual camera C2, the coordinates of the near clip plane NC2, and the coordinates of the far clip plane FC2.
- the CPU 101 sets the coordinates of the view frustum V2 based on the calculation result.
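- a minimal sketch of how steps S2 to S4 might compute these coordinates for an axis-aligned camera imaging along +Y (the helper name, camera pose, distances, and angle of view are all illustrative assumptions, not from the disclosure):

```python
import numpy as np

def clip_plane_corners(cam_pos, distance, fov, aspect):
    """Four corner coordinates of a rectangular clip plane located at the
    given distance from the camera along the +Y imaging direction
    (axis-aligned camera, as in FIG. 3)."""
    half_h = distance * np.tan(fov / 2.0)   # half-extent along Z
    half_w = half_h * aspect                # half-extent along X
    cx, cy, cz = cam_pos
    y = cy + distance
    return np.array([[cx - half_w, y, cz - half_h],
                     [cx + half_w, y, cz - half_h],
                     [cx + half_w, y, cz + half_h],
                     [cx - half_w, y, cz + half_h]])

# Steps S2-S4 for the view frustum V1: near clip plane, far clip plane,
# then the eight corner coordinates of the frustum itself.
cam = np.array([0.0, 0.0, 1.5])
nc1 = clip_plane_corners(cam, distance=1.0, fov=0.6, aspect=16 / 9)
fc1 = clip_plane_corners(cam, distance=5.0, fov=0.6, aspect=16 / 9)
frustum_v1 = np.vstack([nc1, fc1])          # coordinates set in step S4
```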
- the CPU 101 transforms the view frustum V1 and the view frustum V2 into the same shape (step S5). That is, the view frustum V1 and the view frustum V2 are transformed into the shape NV1 shown in FIG. 5. At this time, the CPU 101 also deforms the shapes of the objects placed in the regions within the view frustums V1 and V2 in accordance with the deformation of the view frustums.
- the CPU 101 synthesizes the view frustum V1 and the view frustum V2 after deformation into the same shape, and generates a projection view volume PV (step S6).
- the CPU 101 projects the image IM1 based on the projection view volume PV (step S7).
- in this way, the CPU 101 can perform perspective projection conversion based on the two view frustums V1 and V2 and project the image IM1.
- FIG. 7 is a diagram for explaining projection using only the virtual camera C1Z of the comparative example.
- FIG. 8 is a diagram for explaining projection using only the virtual camera C2Z of the comparative example.
- the virtual camera C1Z in FIG. 7A has the same angle of view Ag1 as the virtual camera C1 in FIG. 2A.
- the imaging range of the virtual camera C1Z is the view frustum V1Z.
- the virtual camera C2Z in FIG. 8A has the same angle of view Ag2 as the virtual camera C2 in FIG. 2A.
- the imaging range of the virtual camera C2Z is the view frustum V2Z.
- in FIGS. 2(A), 7(A), and 8(A), the shape of the three-dimensional model P1 in the virtual space and the coordinates at which the three-dimensional model P1 is arranged are the same. That is, FIGS. 2(A), 7(A), and 8(A) differ only in the virtual camera and its imaging range.
- in FIG. 7(A), the three-dimensional model P1 is arranged in a region within the viewing frustum V1Z.
- FIG. 7B shows an image IM2 obtained by projecting the three-dimensional model P1 captured by the virtual camera C1Z shown in FIG. 7A.
- comparing FIG. 2(B) and FIG. 7(B), the appearance of the right foot Rf of the three-dimensional model P1 differs. Specifically, the right foot Rf in FIG. 2(B) appears smaller than the right foot Rf in FIG. 7(B). In other words, the image IM1 of FIG. 2(B) is expressed as if the right foot Rf were farther from the camera than in the image IM2 of FIG. 7(B).
- likewise, the distance between the right foot Rf and the left foot Lf in FIG. 2(B) is expressed as longer than the distance between the right foot Rf and the left foot Lf in FIG. 7(B).
- in this way, the projection program 10 according to the first embodiment can project the image IM1 of FIG. 2(B) with a greater sense of perspective than the image IM2 of FIG. 7(B), enhancing the perspective feeling of part of the image.
- this difference arises because the viewing frustums in which the right foot Rf of the three-dimensional model P1 is arranged have different angles of view.
- in FIG. 2(A), the right foot Rf is arranged within the region of the viewing frustum V2 with the angle of view Ag2.
- in FIG. 7(A), by contrast, the right foot Rf is arranged within the region of the viewing frustum V1Z with the angle of view Ag1.
- the appearance of the projected image differs depending on the shape of the viewing frustum and the angle of view.
- the right foot Rf in FIG. 2A is arranged in a region within the viewing frustum V2 with a large angle of view. In this way, an expression in which only a portion of the three-dimensional model P1 has a different perspective is an expression that cannot occur in the real world where light has the property of traveling straight.
- in FIG. 8(A), the three-dimensional model P1 is arranged in a region within the view frustum V2Z.
- FIG. 8(B) shows an image IM3 obtained by projecting the three-dimensional model P1 captured by the virtual camera C2Z shown in FIG. 8(A). Since the angle of view Ag2 of the virtual camera C2Z is wider than the angle of view Ag1 of the virtual camera C1Z, the perspective is exaggerated in the image IM3 as a whole. Specifically, the right hand Rh in FIG. 8(B) is shown larger than the right hand Rh in FIG. 2(B). Also, the head H in FIG. 8(B) appears smaller than the head H in FIG. 2(B).
- the projection program 10 can project the image IM1 only by setting the virtual camera DC, and there is no need to deform the shape of the three-dimensional model P1.
- for example, the user can project the same image as would be obtained by redesigning the shape of the three-dimensional model of the right foot Rf, simply by setting the coordinates of the two view frustums V1 and V2.
- the projection program 10 according to the first embodiment can project an image IM1 that cannot occur in the real world without redesigning the already designed three-dimensional model P1.
- Embodiment 2: the projection program 10 of Embodiment 1 described above projects an image using two view frustums. In Embodiment 2, a configuration for projecting an image using three or more view frustums will be described. Descriptions of configurations overlapping those of Embodiment 1 will not be repeated.
- FIG. 9 is a perspective view of three view frustums V1, V2, and V3, which are imaging ranges of the virtual camera DC in Embodiment 2.
- as shown in FIG. 9, the virtual camera DC according to the second embodiment projects, as an image, objects arranged in regions within the three view frustums V1, V2, and V3.
- the view frustum V3 is located farther from the virtual camera DC than the view frustums V1 and V2.
- the viewing frustum V3 is a truncated quadrangular pyramid having the near clip plane NC3 as its front surface and the far clip plane FC3 as its back surface.
- the near clip plane NC3 of the view frustum V3 is the same plane as the far clip plane FC2 of the view frustum V2.
- near clip plane NC3 and far clip plane FC2 are collectively referred to as "intermediate clip plane IC2".
- the arrangement of the near clip plane NC1 and the far clip plane FC1 of the view frustum V1 in the second embodiment differs from the arrangement of the near clip plane NC1 and the far clip plane FC1 of the view frustum V1 in the first embodiment.
- FIG. 10 is a plan view of the near clip plane NC1 and the far clip plane FC1 shown in FIG. 9.
- FIG. 10 shows the near clip plane NC1 of FIG. 9 viewed in plan from the negative direction side of the Y-axis.
- the center point NCP is the center point of the near clip plane NC1.
- the center point FCP is the center point of the far clip plane FC1.
- the center point NCP of the near clip plane NC1 and the center point FCP of the far clip plane FC1 do not overlap. That is, the viewing frustum V1 in the second embodiment is not a regular truncated quadrangular pyramid. In other words, the lengths of the four sides connecting the near clip plane NC1 and the far clip plane FC1 are not all the same.
- similarly, the view frustum V2 in the second embodiment is not a regular truncated quadrangular pyramid: when viewed in plan from the negative direction side of the Y-axis, the center point of the near clip plane NC2 and the center point of the far clip plane FC2 do not overlap.
- the view frustum V3 is likewise not a regular truncated quadrangular pyramid: when viewed in plan from the negative direction side of the Y-axis, the center point of the near clip plane NC3 and the center point of the far clip plane FC3 do not overlap. Therefore, as shown in FIG. 9, the view frustums V1, V2, and V3, which are the imaging ranges of the virtual camera DC, are arranged so as to draw a curve when viewed from the virtual camera DC.
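- such a non-regular frustum can be sketched by shifting the far-plane center relative to the near-plane center (all numbers are illustrative assumptions, not values from the disclosure):

```python
import numpy as np

def plane_corners(center, half_w, half_h):
    """Corners of an axis-aligned rectangular clip plane about any center."""
    cx, cy, cz = center
    return np.array([[cx - half_w, cy, cz - half_h],
                     [cx + half_w, cy, cz - half_h],
                     [cx + half_w, cy, cz + half_h],
                     [cx - half_w, cy, cz + half_h]])

# A non-regular frustum: the far-plane center FCP is shifted in X relative
# to the near-plane center NCP, so the four edges connecting the two planes
# have unequal lengths, as described for Embodiment 2.
nc1 = plane_corners(np.array([0.0, 1.0, 0.0]), half_w=1.0, half_h=1.0)  # NCP at x = 0
fc1 = plane_corners(np.array([0.8, 5.0, 0.0]), half_w=4.0, half_h=4.0)  # FCP at x = 0.8
edge_lengths = np.linalg.norm(fc1 - nc1, axis=1)   # unequal edge lengths
```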
- FIG. 11 is a conceptual diagram for explaining perspective projection transformation based on three view frustums V1, V2, and V3.
- FIG. 11A is a diagram showing view frustums V1, V2, and V3 before conversion and synthesis.
- FIG. 11B is a diagram showing the projection view volume PV2 after conversion and synthesis.
- the view frustums V1, V2, and V3 after conversion are not shown because they are similar to those in FIG. 5.
- according to the projection program 10, the CPU 101 generates a projection view volume in the same procedure as in the first embodiment even when performing projection using the three view frustums V1, V2, and V3. That is, the CPU 101 transforms each of the three view frustums V1, V2, and V3 into the same shape.
- the view frustum V1 has been transformed into shape NV1.
- the view frustum V2 is deformed into shape NV1.
- the view frustum V3 has been transformed into shape NV1.
- the CPU 101 combines the view frustums V1 and V2 to generate the projection view volume PV. After that, the CPU 101 further combines the view frustum V3 of the shape NV1 with the projection view volume PV to generate the projection view volume PV2.
- the projection view volume PV corresponds to the “first projection view volume” of the present disclosure.
- the projection view volume PV2 corresponds to the "second projection view volume” of the present disclosure.
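- a short sketch of this two-stage combination, assuming the depth-stacking convention from the earlier sketch and hypothetical normalized vertices:

```python
import numpy as np

# Illustrative incremental compositing: each additional view volume, once
# transformed into the shared shape NV1 (unit cuboid here), is appended one
# depth unit further along the normalized depth axis. Combining V1 and V2
# yields PV; appending V3 yields PV2. All vertex values are assumptions.
cube_v1 = np.array([[0.0, 0.5, 0.0]])
cube_v2 = np.array([[0.1, 0.2, 0.3]])
cube_v3 = np.array([[-0.3, 0.8, 0.1]])

pv = np.concatenate([cube_v1, cube_v2 + np.array([0.0, 1.0, 0.0])])
pv2 = np.concatenate([pv, cube_v3 + np.array([0.0, 2.0, 0.0])])  # spans depth [0, 3]
```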
- FIG. 12 is a diagram for explaining projection using the projection program 10 in the second embodiment.
- FIG. 12A is a perspective view of the virtual space.
- the floor FL spreads as an object on the XY plane.
- the floor FL is a flat surface without irregularities.
- Cylindrical objects Ob1 to Ob8 are arranged on the floor FL.
- the cylindrical objects Ob1 to Ob8 are arranged in a row in the X-axis direction.
- the virtual camera DC in Embodiment 2 is arranged on the positive side of the X axis of the object Ob8.
- FIG. 9 illustrates an example in which three view frustums are used as the imaging range, but the virtual camera DC in FIG. 12 has an imaging range of three or more (n) view frustums.
- for example, the virtual camera DC in FIG. 12 has an imaging range of 100 view frustums.
- like the viewing frustums V1 to V3 described with reference to FIG. 9, each of the plurality of (n) viewing frustums is not a regular truncated quadrangular pyramid. That is, in FIG. 12, a plurality of (n) such non-regular viewing frustums are arranged. As a result, the entire imaging range of the virtual camera DC appears as one large viewing frustum whose sides connecting the front surface and the back surface are bent.
- the near clipping plane NC1 closest to the virtual camera DC is the near clipping plane of the first view frustum located closest to the virtual camera DC.
- the far clipping plane FCn furthest from the virtual camera DC is the far clipping plane of the n-th viewing frustum located furthest from the virtual camera DC.
- n ⁇ 1 intermediate clip planes IC are arranged between the near clip plane NC1 and the far clip plane FCn.
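- the chain of clip planes can be sketched as follows, with a growing lateral drift standing in for the curved arrangement (the `step` and `bend` parameters are illustrative assumptions, not something the disclosure defines):

```python
import numpy as np

def chained_plane_centers(n, step=5.0, bend=0.02):
    """Centers of the n + 1 clip planes (NC1, the n - 1 intermediate clip
    planes IC, and FCn) of a chain of n view volumes. A growing lateral
    drift per segment bends the overall imaging range into a curve."""
    centers = []
    x = 0.0
    for i in range(n + 1):
        centers.append((x, 1.0 + i * step, 0.0))   # depth grows along +Y
        x += bend * i                               # lateral drift accumulates
    return np.array(centers)

plane_centers = chained_plane_centers(100)   # e.g. the 100-frustum camera of FIG. 12
```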
- the CPU 101 projects, as the image IM4, objects placed in multiple (n) areas within the view frustum.
- FIG. 12(B) shows an image IM4 obtained by projecting a plurality of (n) viewing frustums shown in FIG. 12(A).
- in the image IM4, the floor FL, which is a flat surface, appears to be raised.
- projecting an image in which the floor FL, which is a flat surface, appears to be raised in this way cannot occur in the real world, in which light has the property of traveling straight.
- FIG. 13 is a diagram for explaining projection using only the virtual camera C3Z of the comparative example. In FIGS. 12 and 13, the floor FL in the virtual space, the shapes of the cylindrical objects Ob1 to Ob8, and the positions of the objects Ob1 to Ob8 are the same. Also, the virtual camera C3Z in FIG. 13 is arranged at the same coordinates as the virtual camera DC in FIG. 12.
- the virtual camera C3Z in FIG. 13 has one view frustum VZ as its imaging range. That is, the near clip plane NC closest to the virtual camera C3Z is the near clip plane of the view frustum VZ of the comparative example, and the far clip plane FC farthest from the virtual camera C3Z is the far clip plane of the view frustum VZ. In other words, no intermediate clip plane IC is arranged between the near clip plane NC and the far clip plane FC in FIG. 13.
- in FIG. 13, the CPU 101 projects the objects arranged in the region within the single view frustum VZ as an image IM5.
- FIG. 13B shows an image IM5 projected onto one view frustum VZ shown in FIG. 13A.
- in the image IM5, the floor FL, which is a flat surface, is expressed as a flat surface as it is.
- the image IM4 in FIG. 12B shows the upper surfaces of the cylindrical objects Ob8, Ob7, and Ob6.
- in the image IM5 of FIG. 13(B), by contrast, the upper surfaces of none of the cylindrical objects Ob1 to Ob8 are shown.
- as described above, the projection program 10 can project the image IM4 with an expression that could never occur in the real world, without redesigning the designed three-dimensional model P1. Further, in the projection program 10 of Embodiment 2, by using a plurality of view frustums that are not regular truncated quadrangular pyramids, the entire imaging range can be arranged so as to curve, and an image can be projected as if the space itself were distorted.
- FIG. 14 is a perspective view of two view frustums V1 and V2, which are imaging ranges of the virtual camera DC in Embodiment 3.
- as shown in FIG. 14, in Embodiment 3 a space G1 exists between the view frustum V1 and the view frustum V2. That is, the far clip plane FC1 of the view frustum V1 and the near clip plane NC2 of the view frustum V2 are arranged in the virtual space as separate planes.
- FIG. 15 is a diagram for explaining projection by the projection program 10 according to the third embodiment.
- the same floor FL as in FIG. 12 is arranged in virtual space.
- cylindrical objects Ob1, Ob2, and Ob3 are arranged on the floor FL.
- the cylindrical object Ob1 is placed between the near clip plane NC2 and the far clip plane FC2 of the view frustum V2. In other words, the cylindrical object Ob1 is placed in the region within the view frustum V2, and thus becomes the object of imaging.
- the cylindrical object Ob2 is arranged in the space G1 between the near clip plane NC2 of the view frustum V2 and the far clip plane FC1 of the view frustum V1. That is, the cylindrical object Ob2 is not placed in either the area within the view frustum V2 or the area within the view frustum V1, and is not a target for imaging.
- the cylindrical object Ob3 is arranged between the far clip plane FC1 and the near clip plane NC1 of the view frustum V1. In other words, the cylindrical object Ob3 is placed in the area within the view frustum V1, and thus becomes the imaging target.
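- the imaging rule of Embodiment 3 amounts to a containment test against each view frustum; a minimal sketch follows (the frustum parameters are assumptions chosen so that Ob2 falls in the gap G1):

```python
import numpy as np

def in_frustum(point, near, far, fov, aspect):
    """True if a camera-space point (imaging direction +Y) lies inside the
    frustum with the given near/far distances and angle of view."""
    x, y, z = point
    if not (near <= y <= far):
        return False
    half_h = y * np.tan(fov / 2.0)
    return abs(z) <= half_h and abs(x) <= half_h * aspect

# Ob2 sits in the gap G1 (between FC1 at depth 5 and NC2 at depth 6 here),
# so it is inside neither view frustum and is never imaged.
ob2 = np.array([0.0, 5.5, 0.2])
visible = in_frustum(ob2, 1.0, 5.0, 0.6, 1.0) or in_frustum(ob2, 6.0, 20.0, 1.2, 1.0)
print(visible)   # False: the object is clipped away, as in image IM6
```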
- FIG. 15(B) shows an image IM6 obtained by projecting the two view frustums V1 and V2 shown in FIG. 15(A).
- Image IM6 shows only object Ob1 and object Ob3.
- the object Ob2 and the floor FL, which are placed in the space G1 outside the imaging range, are not shown in the image IM6.
- as described above, the projection program 10 can project the image IM6 with an expression that could never occur in the real world, without redesigning the designed three-dimensional model P1. Further, in the projection program 10 of Embodiment 3, since the space G1 exists between the view frustum V1 and the view frustum V2, objects arranged in the space G1 can be excluded from the image IM6.
- FIG. 16 is a diagram showing an example in which the view frustum V2 is rotated with respect to the imaging direction.
- the view frustum V2 shown in FIG. 16 is arranged at a position obtained by rotating the view frustum V2 shown in FIG. 3 with the Y axis as the rotation axis.
- the far clip plane FC1 and the near clip plane NC2 are not the same plane.
- however, when viewed in plan along the Y-axis, the center point of the far clip plane FC1 overlaps the center point of the near clip plane NC2. That is, when the far clip plane FC1 is viewed in plan, the far clip plane FC1 and the near clip plane NC2 are arranged at rotationally symmetrical positions.
- even in such a case, the projection program 10 of Modification 1 can combine the view frustum V1 and the view frustum V2 to project an image. That is, the projection program 10 of Modification 1 can project an image with an expression that would not occur in the real world without redesigning the designed three-dimensional model P1.
- FIG. 17 is a diagram for explaining parallel projection.
- in the embodiments described above, a view frustum used for perspective projection, which can project an image with a sense of perspective, is used as the view volume.
- Modification 2 describes an example in which the projection program 10 is applied to parallel projection, which can project an image having no perspective.
- also in parallel projection, the imaging range can be set to a space that normally cannot be an imaging range.
- the imaging range of the virtual camera is a rectangular parallelepiped space. This makes it possible to project an image without a sense of perspective, making it easier to compare the sizes of objects than with perspective projection.
- FIG. 17 shows rectangular parallelepiped view volumes VR1, VR2, and VR3 used for parallel projection.
- the view volumes VR1 to VR3 are arranged in the order VR1, VR2, VR3 from the virtual camera DC of Modification 2 toward the positive direction of the Y-axis.
- the view volume VR1 has a near clip plane NC1 and a far clip plane FC1 facing each other.
- View volume VR2 has a near clip plane NC2 and a far clip plane FC2 that are opposed to each other.
- View volume VR3 has a near clip plane NC3 and a far clip plane FC3 that are opposed to each other.
- each of the near clip planes NC1, NC2, and NC3 and the far clip planes FC1, FC2, and FC3 has the same area.
- the near clip planes NC1 and NC3 and the far clip planes FC1 and FC3 are arranged so as to overlap one another when the near clip plane NC1 is viewed in plan. That is, only the view volume VR2 is arranged at a different position in the X-axis direction.
- the projection program 10 in Embodiment 1 is applicable to parallel projection.
- in the example of FIG. 17, the view volumes VR1 to VR3 already have the same shape, so the CPU 101 combines the view volumes VR1 to VR3 to generate the projection view volume PV.
- if the set view volumes do not have the same shape, the CPU 101 executes a step of transforming them into the same shape.
- the CPU 101 projects an image based on the generated projection view volume PV.
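- a minimal sketch of the parallel-projection case, where the normalization is purely linear (the function name and all dimensions are assumptions, not from the disclosure):

```python
import numpy as np

def normalize_box(points, near, far, half_w, half_h):
    """Parallel-projection analogue of the frustum normalization: a purely
    linear map of a rectangular view volume onto the canonical cuboid.
    There is no division by depth, so object sizes stay directly comparable."""
    x, y, z = points.T
    return np.stack([x / half_w, (y - near) / (far - near), z / half_h], axis=1)

# VR1 to VR3 already share one shape, so after this step they can be stacked
# along normalized depth exactly as in the perspective case.
pts_vr1 = np.array([[0.5, 2.0, -0.3]])
cube_vr1 = normalize_box(pts_vr1, near=1.0, far=5.0, half_w=2.0, half_h=2.0)
```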
- the projection program 10 of the modified example 2 can project an image in an expression that cannot occur in the real world without redesigning the designed three-dimensional model P1.
- FIG. 18 is the first diagram for explaining the objects placed in the virtual space.
- FIG. 19 is a second diagram for explaining objects placed in the virtual space.
- FIG. 20 is the third diagram for explaining the objects arranged in the virtual space.
- although FIGS. 18, 19, and 20 show the same virtual space, their viewpoints are different.
- a humanoid three-dimensional model P2 is placed in a room, raising its left arm Ar1 to strike. That is, in the virtual space shown in FIGS. 18 to 20, the left arm Ar1 of the three-dimensional model P2 is the object that should draw the most attention when projected as a moving image.
- a ceiling CE1, a floor FL2, and walls WA1 and WA2 are arranged as objects that make up the room.
- a door Dr1 is provided on the wall WA1.
- a window Wi1 is provided in the wall WA2.
- a virtual camera DC that captures the three-dimensional model P2 is arranged in the virtual space.
- Four solid lines Rg1 extending from the virtual camera DC represent ridgelines of the view volume of the comparative example.
- Four dashed lines Rg2 extending from the virtual camera DC represent ridgelines of the view volume in this embodiment. That is, the solid line Rg1, which is a straight line, indicates a view volume composed of one view frustum.
- the curved dashed line Rg2 indicates a view volume composed of multiple viewing frustums.
- FIG. 21 is an image IM7 projected by the view volume indicated by dashed line Rg2.
- a portion of the face of the three-dimensional model P2, the door Dr1, and the window Wi1 are displayed around the raised left arm Ar1 of the three-dimensional model P2.
- FIG. 22 is an image IM8 projected by the view volume of the first comparative example.
- FIG. 22 shows an image IM8 projected using one view frustum such that the door Dr1 has the same size and arrangement as the door Dr1 shown in the image IM7 of FIG. 21.
- the entire humanoid three-dimensional model P2 is displayed.
- the notable left arm Ar1 is displayed smaller than in the image IM7.
- FIG. 23 is an image IM9 projected by the view volume of the second comparative example.
- FIG. 23 shows an image IM9 projected using one view frustum such that the humanoid three-dimensional model P2 has the same size and arrangement as the three-dimensional model P2 shown in the image IM7 of FIG. 21.
- Door Dr1 is not displayed in image IM9.
- FIG. 24 is an image IM10 projected by the view volume of the third comparative example.
- FIG. 24 shows an image IM10 projected using one view frustum such that the left arm Ar1 has the same size and arrangement as the left arm Ar1 shown in the image IM7 of FIG. 21.
- the face of the humanoid three-dimensional model P2 is not displayed in the image IM10.
- in this way, when the projection program 10 in this embodiment is used, it is possible to project the image IM7, which enlarges the left arm Ar1 to be focused on while still displaying the door Dr1 and the face of the humanoid three-dimensional model P2.
- 10: projection program, 11: CG animation creation software, 100: information processing device, 101: CPU, 102: input I/F, 103: output I/F, 104: storage device, 105: main memory, 106: GPU, 200: input device, 300: display device, Ag1, Ag2: angle of view, Ar1: left arm, C1, C2, C1Z to C3Z, DC: virtual camera, CD, CD1, CD2: direction, CE1: ceiling, Dr1: door, FC, FC1 to FC3, FCn: far clip plane, FCP, NCP: center point, FL, FL2: floor, G1: space, H: head, IC, IC2: intermediate clip plane, IM1 to IM10: image, Lf: left foot, Lh: left hand, NC, NC1 to NC3: near clip plane, NV1: shape, Ob1 to Ob8: object, P1, P2: 3D model, PV, PV2: projection view volume, Rf: right foot, Rg1: solid line, Rg2: dashed line, Rh: right hand, V1 to V3: viewing frustum, VR1, VR2, VR3: view volume, WA1, WA2: wall
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- Processing Or Creating Images (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023559389A JPWO2023084783A1 | 2021-11-15 | 2021-11-15 | |
PCT/JP2021/041948 WO2023084783A1 (ja) | 2021-11-15 | 2021-11-15 | Projection program, projection method, projection system, and computer-readable medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/041948 WO2023084783A1 (ja) | 2021-11-15 | 2021-11-15 | Projection program, projection method, projection system, and computer-readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023084783A1 (ja) | 2023-05-19 |
Family
ID=86335511
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/041948 WO2023084783A1 (ja) | 2021-11-15 | 2021-11-15 | Projection program, projection method, projection system, and computer-readable medium |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPWO2023084783A1 |
WO (1) | WO2023084783A1 |
- 2021-11-15 | JP | JP2023559389A | patent: JPWO2023084783A1 (ja) | active Pending
- 2021-11-15 | WO | PCT/JP2021/041948 | patent: WO2023084783A1 (ja) | active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005040348A (ja) * | 2003-07-22 | 2005-02-17 | Nintendo Co Ltd | Game system and game program
JP2005321994A (ja) * | 2004-05-07 | 2005-11-17 | Nintendo Co Ltd | Image processing system for increasing the number of drawn polygons
JP2005353047A (ja) * | 2004-05-13 | 2005-12-22 | Sanyo Electric Co Ltd | Stereoscopic image processing method and stereoscopic image processing device
JP2011120224A (ja) * | 2009-11-04 | 2011-06-16 | Nintendo Co Ltd | Display control program, information processing system, and program used for controlling stereoscopic display
JP2011156061A (ja) * | 2010-01-29 | 2011-08-18 | Konami Digital Entertainment Co Ltd | Game program, game device, and game control method
JP2012174238A (ja) * | 2011-02-24 | 2012-09-10 | Nintendo Co Ltd | Image processing program, image processing device, image processing method, and image processing system
WO2014119524A1 (ja) * | 2013-02-01 | 2014-08-07 | 株式会社セルシス | Multi-viewpoint drawing device, method, and program for three-dimensional objects
Non-Patent Citations (1)
Title |
---|
HIZUME MAKO, TAKESHI NAEMURA: "AnimE-Lise: Anime-like Exaggeration of Live-action Image using Skeleton Structure", IPSJ INTERACTION 2015, 7 March 2015 (2015-03-07), pages 851-856, XP093066087 *
Also Published As
Publication number | Publication date |
---|---|
JPWO2023084783A1 | 2023-05-19 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21964136; Country of ref document: EP; Kind code of ref document: A1
| ENP | Entry into the national phase | Ref document number: 2023559389; Country of ref document: JP; Kind code of ref document: A
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21964136; Country of ref document: EP; Kind code of ref document: A1