CN117372655A - Information processing device, information processing method, and program - Google Patents
- Publication number
- CN117372655A CN117372655A CN202211658415.9A CN202211658415A CN117372655A CN 117372655 A CN117372655 A CN 117372655A CN 202211658415 A CN202211658415 A CN 202211658415A CN 117372655 A CN117372655 A CN 117372655A
- Authority
- CN
- China
- Prior art keywords
- virtual space
- dimensional
- data
- camera
- information processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0641—Shopping interfaces
- G06Q30/0643—Graphical representation of items or shoppers
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/506—Illumination models
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/60—Shadow generation
Abstract
The present application provides an information processing device capable of drawing a virtual space image based on a two-dimensional image, comprising: an acquisition unit that acquires two-dimensional image data in which an object or a landscape is captured; a data generation unit that generates three-dimensional virtual space data including the object or landscape based on the two-dimensional image data; and a drawing processing unit that draws a virtual space image seen from a virtual camera based on the three-dimensional virtual space data. The data generation unit generates the three-dimensional virtual space data as a three-dimensional model.
Description
Technical Field
The present disclosure relates to an information processing apparatus, an information processing method, and a program.
Background
Patent document 1 discloses an image generation system capable of causing a display corresponding to an object to appear in a virtual space.
Prior art literature
Patent literature
Patent document 1: japanese patent laid-open No. 2020-107252
Disclosure of Invention
However, patent document 1 does not specifically disclose a method for generating a virtual space, and particularly does not disclose a technique for drawing a virtual space image based on a two-dimensional image such as a photograph.
Accordingly, in one aspect, an object of the present invention is to provide an information processing apparatus capable of drawing a virtual space image based on a two-dimensional image.
In one aspect, there is provided an information processing apparatus including:
an acquisition unit that acquires two-dimensional image data in which an object or a landscape is captured;
a data generation unit that generates three-dimensional virtual space data including the object or landscape based on the two-dimensional image data; and
a drawing processing unit that draws a virtual space image seen from a virtual camera based on the three-dimensional virtual space data.
Effects of the invention
In one aspect, according to the present invention, a virtual space image can be drawn based on a two-dimensional image.
Drawings
Fig. 1 is a diagram showing the configuration of an information processing system including an information processing apparatus of the present embodiment.
Fig. 2 is a diagram illustrating camera positions when photogrammetry is applied.
Fig. 2A is a diagram illustrating camera positions when photogrammetry is applied.
Fig. 3 is a diagram illustrating a screen on which a character and other elements are drawn in a three-dimensional virtual space.
Fig. 4 is a diagram showing a state in which the position of the virtual camera is switched in accordance with the movement of the person.
Fig. 5 is a diagram illustrating a 360 degree panoramic photograph.
Fig. 6 is a diagram showing an example in which an in-store space of a clothing retail store is depicted as a virtual space.
Fig. 6A is a diagram showing an example in which an in-store space of a clothing retail store is depicted as a virtual space.
Fig. 6B is a diagram showing an example in which the in-store space of a clothing retail store is depicted as a virtual space.
Fig. 7 is a diagram showing an example of a street drawn as a virtual space.
Fig. 7A is a diagram showing an example of a street drawn as a virtual space.
Detailed Description
Hereinafter, embodiments will be described in detail with reference to the accompanying drawings.
(configuration example of information processing apparatus)
Fig. 1 is a diagram showing the configuration of an information processing system including an information processing apparatus of the present embodiment.
The information processing system of the present invention is characterized in that it draws a virtual space image seen from a virtual camera based on two-dimensional image data (photographs) in which an object or a landscape is captured. By drawing an actually existing object or landscape as a virtual space image, the virtual space image can be utilized in various ways.
As shown in fig. 1, the information processing apparatus 10 includes an acquisition unit 11 that acquires two-dimensional image data in which an object or a landscape is captured, a data generation unit 12 that generates three-dimensional virtual space data including the object or landscape based on the two-dimensional image data, a drawing processing unit 13 that draws a virtual space image seen from a virtual camera based on the three-dimensional virtual space data, and an image data storage unit 14 that stores images captured by the camera 20 as well as intermediate image data handled by the acquisition unit 11, the data generation unit 12, and the drawing processing unit 13. Photographs (two-dimensional images) taken by the camera 20 are stored in the image data storage unit 14 as appropriate. Likewise, intermediate image data produced by the acquisition unit 11, the data generation unit 12, and the drawing processing unit 13 is stored in the image data storage unit 14 and retrieved by these units as needed.
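As a rough illustration of this division of roles, the following Python sketch wires the four components together. The class names, method names, and placeholder functions are illustrative assumptions, not part of the disclosed apparatus.

```python
from dataclasses import dataclass, field
from typing import Any

def preprocess(photo: Any) -> Any:              # placeholder for unit-11 processing
    return photo

def build_virtual_space(images: list) -> dict:  # placeholder for unit-12 processing
    return {"images": images}

def render(space: dict, camera: dict) -> Any:   # placeholder for unit-13 rendering
    return (space, camera)

@dataclass
class ImageDataStore:
    """Stands in for the image data storage unit 14."""
    photos: list = field(default_factory=list)         # raw shots from camera 20
    intermediates: dict = field(default_factory=dict)  # per-stage working data

class InformationProcessor:
    """Pipeline sketch: acquisition (11) -> data generation (12) -> drawing (13)."""
    def __init__(self, store: ImageDataStore):
        self.store = store

    def acquire(self, raw_photos: list) -> list:
        # Acquisition unit 11: derive two-dimensional image data from camera output.
        two_d = [preprocess(p) for p in raw_photos]
        self.store.intermediates["2d"] = two_d
        return two_d

    def generate(self, two_d: list) -> dict:
        # Data generation unit 12: build three-dimensional virtual space data
        # (a photogrammetry mesh, or panoramas tied to camera positions).
        space = build_virtual_space(two_d)
        self.store.intermediates["3d"] = space
        return space

    def draw(self, space: dict, virtual_camera: dict) -> Any:
        # Drawing processing unit 13: render the view from the virtual camera.
        return render(space, virtual_camera)
```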
The hardware configuration of the information processing apparatus 10 is arbitrary. For example, it may consist of one or more mobile terminals, one or more computers, or a combination of the two. The information processing apparatus 10 may also be part of a mobile terminal, or one or more servers (computers) connectable to other computers. The image data storage unit 14 may be configured using a storage medium in a mobile terminal, a storage medium in a computer (including a server), a memory card, or another external storage device.
The functions of the acquisition unit 11, the data generation unit 12, and the drawing processing unit 13 can be realized by application software (program) installed in a mobile terminal or a computer, for example.
Two-dimensional image data of the captured object or landscape can be generated from image data obtained by shooting that object or landscape with the camera 20. The acquisition unit 11 acquires the image data shot with the camera 20 and applies the various processes described later, thereby generating the two-dimensional image data passed to the data generation unit 12.
The object or landscape photographed by the camera 20 is arbitrary and includes, for example, parks, shops, tourist attractions, and theme parks. Photographing the object or landscape from various angles improves the realism of the three-dimensional virtual space. In particular, to reproduce the three-dimensional shape of an object in the three-dimensional virtual space by photogrammetry or another method described later, the same object must be photographed from multiple different directions.
The type of camera 20 used for shooting is arbitrary; a camera built into a mobile terminal, a digital single-lens reflex camera, a panoramic camera (360-degree camera), or the like can be used. To fix the camera 20 properly and stably, a tripod, a level, or similar equipment is desirable.
The acquisition unit 11 generates two-dimensional image data from the pictures (photographs) taken by the camera 20. The function of the acquisition unit 11 can be implemented by application software installed on a mobile terminal or a computer (personal computer).
The acquisition unit 11 applies, to the images shot with the camera 20, the processing needed to generate the image data passed to the data generation unit 12. For example, to give the two-dimensional image data appropriate tonal gradation, the exposure (shutter speed) may be varied in several stages at the same shooting position and direction, yielding several images with different exposures. In this case, by combining them into one image (corresponding to one photograph) through the function of the acquisition unit 11, tonal gradation can be obtained over a wide luminance range, from dark portions such as shade to bright portions in direct sunlight.
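One well-known way to combine such an exposure bracket into a single well-graded image is Mertens exposure fusion, available in OpenCV. The patent does not name an algorithm, so this is only a plausible sketch; the file names are placeholders.

```python
import cv2
import numpy as np

# Bracketed shots of the same scene: same position and direction, shutter
# speed varied in stages (file names are illustrative).
paths = ["shot_ev-2.jpg", "shot_ev0.jpg", "shot_ev+2.jpg"]
images = [cv2.imread(p).astype(np.float32) / 255.0 for p in paths]

# Mertens fusion weights each pixel by contrast, saturation, and
# well-exposedness, so no camera response curve is needed.
fused = cv2.createMergeMertens().process(images)

# The result is float in roughly [0, 1]; clip and save as 8-bit.
cv2.imwrite("fused.jpg", np.clip(fused * 255.0, 0, 255).astype(np.uint8))
```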
The acquisition unit 11 may also adjust the hue of the two-dimensional image data. In this case, a color chart is desirable for obtaining an appropriate color tone. For example, it is desirable to photograph the chart under the same conditions as the actual shots, matching the type of camera 20 and the illumination (sunlight, incandescent light, etc.). The acquisition unit 11 can then perform color correction that reproduces the chart's colors for that camera type and shooting condition, improving color reproducibility. It is further desirable to photograph the chart under uniform light, such as direct sunlight falling evenly on the entire chart.
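One simple realization of such chart-based correction is to fit a linear color matrix that maps the patch colors measured in the photograph to the chart's published reference values. The patent does not specify a method, so the sketch below, with stand-in patch data, is an assumption.

```python
import numpy as np

# measured: mean linear-RGB of each chart patch sampled from the photograph.
# reference: the chart's published values under the target illuminant.
# Both arrays are stand-ins here; a 24-patch chart is assumed.
rng = np.random.default_rng(0)
measured = rng.random((24, 3))
reference = rng.random((24, 3))

# Solve measured @ M ~= reference for a 3x3 correction matrix (least squares).
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

def correct(image_linear: np.ndarray) -> np.ndarray:
    """Apply the fitted matrix to an HxWx3 linear-RGB image, clipped to [0, 1]."""
    return np.clip(image_linear @ M, 0.0, 1.0)
```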
A fisheye photograph obtained with a panoramic camera (360-degree camera) can be used as two-dimensional image data by converting it into a so-called panoramic (equirectangular) photograph. In this case, the acquisition unit 11 can be given the function of converting the fisheye photograph into a panoramic photograph by installing application software.
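The conversion itself is a resampling from fisheye coordinates to equirectangular (panorama) coordinates. A minimal sketch, assuming an equidistant fisheye model and a 180-degree field of view; neither assumption comes from the patent.

```python
import cv2
import numpy as np

def fisheye_to_equirect(fish: np.ndarray, fov_deg: float = 180.0) -> np.ndarray:
    """Unwrap one equidistant-model fisheye image into the front half of an
    equirectangular panorama (model and FOV are assumptions)."""
    h_out, w_out = fish.shape[0], fish.shape[0] * 2
    lon = (np.arange(w_out) / w_out - 0.5) * 2.0 * np.pi   # [-pi, pi)
    lat = (0.5 - np.arange(h_out) / h_out) * np.pi          # (pi/2, -pi/2]
    lon, lat = np.meshgrid(lon, lat)

    # Unit view ray per output pixel; the fisheye optical axis is +z.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    theta = np.arccos(np.clip(z, -1.0, 1.0))     # angle from the optical axis
    r = theta / np.radians(fov_deg / 2.0)        # equidistant: r grows with theta
    phi = np.arctan2(y, x)

    cx, cy = fish.shape[1] / 2.0, fish.shape[0] / 2.0
    rad = min(fish.shape[:2]) / 2.0              # fisheye image-circle radius
    map_x = (cx + r * rad * np.cos(phi)).astype(np.float32)
    map_y = (cy - r * rad * np.sin(phi)).astype(np.float32)

    out = cv2.remap(fish, map_x, map_y, cv2.INTER_LINEAR,
                    borderMode=cv2.BORDER_CONSTANT)
    out[theta > np.radians(fov_deg / 2.0)] = 0   # rays outside the fisheye FOV
    return out
```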
In addition, the acquisition unit 11 may apply color temperature, color cast, and exposure corrections, as appropriate, to the images (photographs) shot by the camera 20 or to images at each processing stage. Such processing is not limited to optimizing the final two-dimensional image data and may also be performed for the user's convenience.
The data generation unit 12 generates three-dimensional virtual space data including the captured object or landscape based on the two-dimensional image data acquired by the acquisition unit 11.
For example, the data generation unit 12 associates a panoramic photograph obtained with a panoramic camera (360-degree camera) with a virtual camera position identical to the shooting position of the camera 20. That is, in the present embodiment, three-dimensional virtual space data is not limited to a so-called three-dimensional model generated by photogrammetry or the like, described later. In the present disclosure, "three-dimensional virtual space data" is a concept that effectively includes two-dimensional image data associated with the position of the camera 20. In this case, the drawing processing unit 13 can draw a correct virtual space image on the condition that the position of the virtual camera matches the position of the camera 20. This method draws virtual space images efficiently while keeping the volume of image data down.
With this method, to preserve continuity of drawing when the virtual camera position is switched, the shooting positions of the camera 20 must be spaced appropriately. If the spacing between shooting positions is too wide, the viewpoint movement may be hard to perceive as continuous when the virtual camera position is switched.
Further, this method does not presuppose a panoramic camera (360-degree camera) as the camera 20; it can be used effectively as long as reasonably wide-angle shooting is possible.
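For concreteness, here is a sketch of how the drawing processing unit 13 might render a perspective view from an equirectangular panorama while the virtual camera stays at the shooting position of the camera 20. The projection conventions and default parameters are assumptions.

```python
import cv2
import numpy as np

def view_from_panorama(pano, yaw, pitch, fov_deg=90.0, out_w=960, out_h=540):
    """Perspective view into an equirectangular panorama; valid only while the
    virtual camera sits where camera 20 stood (yaw and pitch in radians)."""
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)   # focal length, pixels
    u = np.arange(out_w) - out_w / 2.0
    v = np.arange(out_h) - out_h / 2.0
    u, v = np.meshgrid(u, v)

    # Camera-space rays, then rotate by pitch (about x) and yaw (about y).
    d = np.stack([u, -v, np.full_like(u, f)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = d @ (Ry @ Rx).T

    lon = np.arctan2(d[..., 0], d[..., 2])
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))
    h, w = pano.shape[:2]
    map_x = ((((lon / (2.0 * np.pi)) + 0.5) * w) % w).astype(np.float32)
    map_y = ((0.5 - lat / np.pi) * h).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR)
```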
The data generation unit 12 can generate three-dimensional virtual space data including the object or landscape based on the two-dimensional image data acquired by the acquisition unit 11. Specifically, the data generation unit 12 generates a three-dimensional model of the subject from a plurality of two-dimensional images of the same subject, for example by photogrammetry. This yields a three-dimensional model representing, for example, the shape of the ground (terrain) or of a building. Photogrammetry can be performed with installed application software. For example, when a landscape is built as a three-dimensional model, an object or moving body that passes behind a structure or the like can be occluded by that structure correctly.
However, to obtain a three-dimensional model of a subject from two-dimensional images (photographs) by photogrammetry, the same subject must, in principle, be photographed multiple times from different angles.
Figs. 2 and 2A are diagrams illustrating camera positions when photogrammetry is applied.
In photogrammetry, for example, common features are recognized between successively taken photographs, the position and orientation of the camera 20 at each shot are back-calculated from the displacement of those common features, and a three-dimensional model is built from the two-dimensional images and the recovered camera positions. This requires two or more images of the same shooting range taken from different positions of the camera 20, that is, two or more images with parallax. As shown in fig. 2, shifting the position of the camera 20 between shots produces parallax, so two-dimensional photographs can be turned into three dimensions. In contrast, as shown in fig. 2A, when shots are taken from the same position with only the shooting angle of the camera 20 changed, no parallax arises and a three-dimensional model cannot be generated.
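The back-calculation described here corresponds to classical two-view structure from motion. A hedged sketch using OpenCV (the patent does not prescribe any library; `K` is the camera intrinsic matrix and is assumed known):

```python
import cv2
import numpy as np

def two_view_reconstruction(img1, img2, K):
    """Recover the relative pose of camera 20 between two shots from parallax,
    then triangulate sparse 3-D points; this is the core step photogrammetry
    repeats over many photographs."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Essential matrix from parallax; RANSAC rejects bad matches.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate inliers into 3-D points. The scale is arbitrary, matching
    # the drawback noted later that photogrammetry does not give real scale.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inl = mask.ravel().astype(bool)
    X = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
    return R, t, (X[:3] / X[3]).T
```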
When photogrammetry is applied over a wide area, many images must be shot in sequence from different camera positions under conditions in which the shooting ranges overlap. In this case, a photogrammetry-based three-dimensional model can be constructed by ensuring sufficient, uninterrupted overlap between consecutive images or between designated image pairs. If the captured content differs too much between consecutive images, no common shape is recognized and photogrammetric processing cannot proceed.
In general, generating a three-dimensional model by photogrammetry requires many photographs, but a wide area can be built up as a three-dimensional model. Depending on the shooting conditions, it can also handle distant or high places and produce fine-grained models. Photogrammetry is therefore suited to outdoor or large venues (stadiums, concert halls, and the like), places with long sightlines, cases where a wide range of movement must be secured, and cases where a detailed three-dimensional model is needed. It is also suitable when three-dimensional models are needed for dense foliage of tall trees, for example to reproduce sunlight shining through gaps in the leaves of a forest, or when distant scenery should feel three-dimensional as the viewpoint is switched.
On the other hand, photogrammetry has drawbacks: shooting takes time and the steps for generating the three-dimensional model tend to multiply, and shooting in dark places is difficult, making model generation prone to failure. Further drawbacks are that surfaces with few recognizable feature points, such as plain white walls, are hard to turn into a three-dimensional model, and that the generated model does not reflect the actual scale, so work is needed to adjust the model size.
Instead of photogrammetry, a three-dimensional model may be generated with a camera-equipped mobile terminal or the like running a three-dimensional scanning application. In this case, the acquisition unit 11 generates the three-dimensional model of the subject from scanning by infrared light or the like, rather than by photogrammetric processing. Such model generation becomes possible by installing the prescribed application software on a camera-equipped mobile terminal, or on a computer that receives the captured images. Because infrared imaging can scan dark places, plain white walls, and other subjects that photogrammetry handles poorly, three-dimensional models can be obtained even for these objects. Further advantages of using a three-dimensional scanning application are that a model is obtained without the time-consuming shooting that photogrammetry requires, and that, unlike photogrammetry, the model matches the actual scale, so no resizing or similar work is needed. Its drawbacks are that the shapes of objects in dark portions can be hard to recognize, that fine or thin shapes are hard to capture, and that distant or high places cannot be scanned.
Apart from saving shooting effort, generating the model with a three-dimensional scanning application suits terrain models of rooms, parks, gardens, and the like, where no model of distant or high places is needed. A model generated by three-dimensional scanning can also serve as a guide for matching a photogrammetry-generated model to actual size. In that case, even if the scanning application is applied to only part of the subject, the model still serves as an effective guide.
The data generation unit 12 can apply lighting to the three-dimensional model that faithfully reproduces the on-site illumination (lighting and light-emission conditions), and can apply shading to the three-dimensional model.
The drawing processing unit 13 draws a virtual space image seen from the virtual camera based on the three-dimensional virtual space data generated by the data generation unit 12.
The position of the virtual camera may include the position of the camera 20 that captured the object or landscape corresponding to the two-dimensional image data acquired by the acquisition unit 11. When the virtual camera position matches the position of the camera 20, the drawing processing unit 13 can draw a virtual space image corresponding to the object or landscape as captured by the camera 20. For example, the drawing processing unit 13 draws the view from the virtual camera using a panoramic photograph taken by a panoramic camera (360-degree camera) or another wide-angle photograph. In this case, by matching the virtual camera position to the position of the camera 20 (such as the panoramic camera), the panoramic photograph can be used directly and effectively for the virtual space image.
Further, once a three-dimensional model is available, the virtual camera can be placed at positions that do not coincide with the camera 20 that captured the object or landscape. In this case, the drawing processing unit 13 computes and draws the virtual space image from the three-dimensional virtual space data generated by the data generation unit 12. For example, the drawing processing unit 13 renders the virtual space image using a three-dimensional model obtained by photogrammetry or another method, and can generate the view from a virtual camera at an arbitrary position based on that model. That is, the virtual camera position is not restricted to the shooting positions of the camera 20. The position and angle of the virtual camera can be controlled by, for example, user operation, letting the user experience the virtual space seen from any virtual camera position.
The drawing processing unit 13 can further place objects and moving bodies, such as persons generated as three-dimensional models, in the virtual space and move them within it.
The field of view of the virtual space image drawn by the drawing processing unit 13 is managed in the same three-dimensional coordinate system as the three-dimensional model generated by the data generation unit 12. With the coordinates of the virtual camera as reference, it can therefore be determined whether an object or moving body lies in front of or behind the landscape, and this can be reflected in the virtual space image. In image regions where the object or moving body is nearer than the landscape, the landscape is covered by the image of the object or moving body; conversely, in regions where the landscape is nearer, the object or moving body is covered by the image of the landscape.
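In implementation terms, this occlusion handling reduces to a per-pixel depth comparison in the shared coordinate system. A minimal sketch, assuming depth maps rendered from the virtual camera are available (an assumption about the renderer, not a detail from the patent):

```python
import numpy as np

def composite(landscape_rgb, landscape_depth, object_rgb, object_depth):
    """Whichever surface is nearer to the virtual camera covers the other.
    Depth maps hold distance along the view ray; np.inf marks empty pixels."""
    object_in_front = object_depth < landscape_depth
    out = landscape_rgb.copy()
    out[object_in_front] = object_rgb[object_in_front]
    return out

# Illustrative 2x2 case: the object hides the landscape only where it is nearer.
l_rgb = np.zeros((2, 2, 3)); l_d = np.full((2, 2), 5.0)
o_rgb = np.ones((2, 2, 3));  o_d = np.array([[1.0, 9.0], [1.0, 9.0]])
print(composite(l_rgb, l_d, o_rgb, o_d)[..., 0])  # 1.0 where the object wins
```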
The virtual space image drawn by the drawing processing unit 13 can be used for various purposes.
For example, a user can participate in a virtual space built online through an avatar (alter ego) of their own, corresponding to a moving body. Further, by freely moving the avatar through a virtual space in which a 360-degree field of view is prepared, the user can have a simulated experience of an actually existing park, tourist attraction, or the like. In this case, the drawing processing unit 13 draws the avatar against the background (the object or landscape captured as two-dimensional image data) and can render the avatar moving through the virtual space as the virtual space image. The drawing processing unit 13 can also select the position and shooting direction of the virtual camera in sequence, according to the avatar's position, so that the avatar remains shown in the virtual space image.
When a plurality of avatars participate in a common virtual space, users can communicate with one another through the avatars participating in that space. In this case, communication close to actually being together can be realized.
In addition, the virtual space can be used in various economic activities. For example, merchandise can be made purchasable in the virtual space of a store. In this case, the user can not only purchase the desired product but also have the experience of shopping in the actual store.
The virtual space can also be utilized in advertising. For example, a product (including services) can be expressed as an animated avatar, which can communicate with the avatar that is the user's alter ego. Through such shared experiences, the product's avatar can come to feel as familiar to the user as a friend, enhancing the advertising effect.
The user can also be given a simulated travel experience by building the streets or facilities of a travel destination as a virtual space. If the position and direction of the virtual camera are made operable by the user, the user can have a simulated experience of the scenery of the destination's streets or facilities. Alternatively, the user, acting through an avatar, can have the simulated experience of walking freely through those streets or facilities. Such simulated experiences raise the user's interest in the trip and therefore work effectively as advertising for travel companies and the like.
Next, an example is shown in which a park landscape is generated from images captured with a panoramic camera (360-degree camera). In this example, by operating a moving character generated as a three-dimensional model, the user can be given an experience such as walking through an actually existing park.
Fig. 3 is a diagram illustrating a screen on which a character and other elements are drawn in the three-dimensional virtual space, and fig. 4 is a diagram showing how the position of the virtual camera is switched as the character moves. Fig. 4 shows five screens, each with a different virtual camera position. Fig. 5 is a diagram illustrating a 360-degree panoramic photograph.
In this example, the park landscape is generated by photographing the park with a panoramic camera (360-degree camera), and a character 101 that moves through the park is drawn over that landscape. The landscape is projected onto a three-dimensional terrain model of the ground, described later, and the character 101 moves over that terrain model. The character 101 is operated with a gamepad or the like, in the same manner as a character in a game.
First, the landscape is photographed at a plurality of camera positions with a panoramic camera (360-degree camera) and converted into panoramic photographs as shown in fig. 5.
The position of the virtual camera in the in-game three-dimensional virtual space is fixed to the shooting positions of the panoramic camera (360-degree camera). That is, the virtual camera position is selected in turn from among the panoramic camera's shooting positions, but from each selected position the view can be turned freely through 360 degrees. When the viewpoint (virtual camera position) is switched, the landscape is crossfaded, changing smoothly from the pre-switch landscape to the post-switch landscape, so the viewpoint appears to move smoothly. The marks 102A and 102B in fig. 3 and the mark 102 in fig. 4 indicate viewpoint positions projected onto the ground.
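The crossfade can be as simple as an alpha blend between the pre-switch and post-switch views; the frame count and easing curve below are assumptions, not values from the patent.

```python
import numpy as np

def crossfade_frames(view_before, view_after, n_frames=15):
    """Yield frames fading from the pre-switch landscape to the post-switch one."""
    before = view_before.astype(np.float32)
    after = view_after.astype(np.float32)
    for i in range(1, n_frames + 1):
        a = i / n_frames
        a = a * a * (3.0 - 2.0 * a)   # smoothstep: gentler start and finish
        yield ((1.0 - a) * before + a * after).astype(view_before.dtype)
```

Each yielded frame would be shown in sequence while the virtual camera position is switched, so the landscape changes smoothly rather than jumping.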
In addition, the panoramic photograph of the landscape (fig. 5) can also be used as the image for image-based lighting (IBL), which is used to reproduce the illumination of the scene.
As shown in fig. 3, the character 101 can be blended into the landscape by applying lighting that faithfully reproduces the lighting conditions of the scene. Image-based lighting (IBL) and directional light can be set as the illumination, and the lighting can be changed smoothly in conjunction with viewpoint switching. Furthermore, applying a dedicated shadow map avoids adding redundant shadows to the landscape. As shown in fig. 3, the shadow 101a of the character 101 corresponds to a shadow cast by sunlight or the like, and can be projected onto the ground following the three-dimensional terrain model. IBL is applicable not only outdoors but also to indoor lighting.
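As one plausible reading of how IBL could be applied here: blur the equirectangular panorama as a cheap approximation of diffuse irradiance, then sample it in the direction of the surface normal when shading the character. The patent names IBL but no concrete technique, so this is only a sketch.

```python
import cv2
import numpy as np

def ibl_diffuse(pano_8bit, normal, blur_ksize=101):
    """Approximate diffuse image-based lighting from an equirectangular panorama.
    A heavy Gaussian blur stands in for a proper irradiance convolution."""
    irr = cv2.GaussianBlur(pano_8bit.astype(np.float32),
                           (blur_ksize, blur_ksize), 0)
    n = np.asarray(normal, dtype=np.float64)
    n /= np.linalg.norm(n)
    lon = np.arctan2(n[0], n[2])          # y-up coordinate convention assumed
    lat = np.arcsin(np.clip(n[1], -1.0, 1.0))
    h, w = pano_8bit.shape[:2]
    x = int((lon / (2.0 * np.pi) + 0.5) * (w - 1))
    y = int((0.5 - lat / np.pi) * (h - 1))
    return irr[y, x] / 255.0              # RGB irradiance for shading the character
```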
The three-dimensional terrain model in the three-dimensional virtual space can be generated, for example, with a camera-equipped mobile terminal or the like running a three-dimensional scanning application. Compared with photogrammetry, this method has the advantage that shooting is easy and the workload of three-dimensional modeling is reduced.
The generated three-dimensional terrain model is used as the object onto which the panorama-based landscape is projected. The character 101 can walk or run following the undulations of the ground projected onto the terrain model.
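Following the ground's undulation can be implemented by sampling a height field under the character every frame. A sketch, assuming the terrain model has been baked into a regular height grid; the patent does not say how the model is queried, and a raycast against the mesh would work equally well.

```python
import numpy as np

def terrain_height(heightmap, x, z, cell=1.0):
    """Bilinear height lookup so the character's feet follow the ground.
    The caller is assumed to keep (x, z) inside the grid."""
    gx, gz = x / cell, z / cell
    x0, z0 = int(np.floor(gx)), int(np.floor(gz))
    fx, fz = gx - x0, gz - z0
    h = heightmap
    top = h[z0, x0] * (1.0 - fx) + h[z0, x0 + 1] * fx
    bot = h[z0 + 1, x0] * (1.0 - fx) + h[z0 + 1, x0 + 1] * fx
    return top * (1.0 - fz) + bot * fz

# Per frame: snap the character to the sampled ground height, e.g.
# character_y = terrain_height(grid, character_x, character_z)
```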
Alternatively, the three-dimensional terrain model in the virtual space can be generated by photogrammetry, based on images of the terrain taken with a camera other than the panoramic camera (360-degree camera). Applying photogrammetry costs time and effort in shooting and three-dimensional reconstruction, but it is preferable when a precise three-dimensional model is desired.
A three-dimensional terrain model can also be generated from the panoramic photographs of the landscape using a prescribed application. In this case, the terrain is reconstructed with the camera positions as reference. However, obtaining an accurate terrain model requires more shooting points (camera positions), which increases the workload.
The generated three-dimensional terrain model can also be used as a collision model, for example for representing the movement and footstep sounds of the character 101. The collision model may be prepared separately from the terrain model, and attributes such as footstep sounds may be added to the physics model.
Besides the character 101, three-dimensional models generated by photogrammetry or the like may be placed in the virtual space. For example, to place the object 103 of fig. 3 in the virtual space, a three-dimensional model of the object 103 is generated by photogrammetry or the like, and its installation position and orientation in the virtual space are specified. If the object 103 is a moving body, its position and orientation may also be controlled by the user. Lighting and shading are applied to the object 103 so as to reproduce the scene, in the same way as for the character 101. For example, in fig. 3, the shadow 101b of the character 101 is cast onto the object 103 under the lighting applied to the object 103, and the shadow 103a of the object 103 is cast onto the ground.
Figs. 6 to 6B are diagrams showing an example in which the interior of a clothing retail store is drawn as a virtual space. In this example, the acquisition unit 11 and the data generation unit 12 generate a three-dimensional model from in-store photographs taken by the camera 20, and the drawing processing unit 13 draws the store interior as a virtual space based on that model.
As shown in figs. 6 to 6B, an avatar 201 (the user's alter ego), a moving body, can move freely through the store according to the user's operation. As shown in fig. 6B, besides the avatar 201, store clerks and other customers also appear as moving bodies 202a to 202c.
Further, a user who finds a product they want to buy can purchase it within the virtual space. For example, when the user selects the product 203a or 203b in fig. 6A, display frames 204a and 204b corresponding to the respective products are drawn. The user can purchase the corresponding product by operating these display frames 204a and 204b. Because the virtual space reproduces not only the actual store interior but also the merchandise placed in it, the user can experience a realistic in-store situation.
In this way, the user, as the avatar 201, can walk freely through the store and look for merchandise. By entering the virtual space, the user can not only purchase products but also easily experience shopping at places they cannot actually visit, such as stores far from home.
Figs. 7 to 7A are diagrams showing an example in which a street is drawn as a virtual space. In this example, the acquisition unit 11 and the data generation unit 12 generate a three-dimensional model from photographs of the street taken by a camera, and the drawing processing unit 13 draws the street as a virtual space based on that model.
As shown in figs. 7 to 7A, an avatar 301 (the user's alter ego), a moving body, can move freely along the street according to the user's operation. Besides the avatar 301, other pedestrians appear as a moving body 302 and the like.
In the example of figs. 7 to 7A, labels 305 are associated with display elements shown on the screen, such as buildings and facilities. A label 305 presents information describing its display element, such as the name of a commercial facility shown on the screen. Such labels are attached automatically, by AI-based image recognition, to the display elements of the photographs taken by the camera 20, that is, the photographs acquired by the acquisition unit 11. The label 305 is acquired by the acquisition unit 11 in association with its display element and stored in the image data storage unit 14 in that association. When drawing a display element, the drawing processing unit 13 retrieves the associated label 305 from the image data storage unit 14 and displays it in association with the display position of the element. In the example of figs. 7 to 7A, the label 305 is controlled to move with the position of a predetermined part of the commercial facility on the screen, and when that part leaves the screen, the display of the label 305 also disappears from the screen.
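Keeping the tag 305 attached to its facility on screen amounts to projecting a 3-D anchor point (for example, a point on the building facade) into the virtual camera's view each frame and hiding the label once the projection leaves the screen. A sketch with assumed column-vector matrix conventions:

```python
import numpy as np

def project_label(anchor_world, view, proj, screen_w, screen_h):
    """Return the screen position for a label's 3-D anchor, or None when the
    anchor is behind the camera or off screen (so the label is hidden)."""
    p = proj @ view @ np.append(np.asarray(anchor_world, dtype=float), 1.0)
    if p[3] <= 0.0:
        return None                         # behind the virtual camera
    ndc = p[:3] / p[3]                      # normalized device coordinates
    if abs(ndc[0]) > 1.0 or abs(ndc[1]) > 1.0:
        return None                         # off screen: stop displaying the label
    sx = (ndc[0] * 0.5 + 0.5) * screen_w
    sy = (1.0 - (ndc[1] * 0.5 + 0.5)) * screen_h
    return sx, sy                           # draw the tag 305 at (sx, sy)
```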
Figs. 7 to 7A show an example in which a street is drawn as a virtual space, but actually existing places such as tourist attractions, various facilities, and theme parks can likewise be drawn as wide virtual spaces. Simply by accessing the virtual space, the user can easily experience walking freely through a place that actually exists.
The embodiment above shows an example in which the virtual space is generated from actually existing landscapes and the like, but the stages of so-called CG movies, CG animations, and CG games made with three-dimensional CG can, for example, also be experienced by recreating them as virtual spaces.
Further, if the virtual space creation technique of the present invention is simplified to the point where ordinary people can use it, a scheme can be built in which photographed photo data is uploaded and turned into virtual space data. Many people could then make easy use of the technique of the present invention; for example, students living in rural areas could turn their hometown scenery into a metaverse, contributing to activities tied to regional development.
As described above, according to the information processing apparatus 10 of the present embodiment, a virtual space image can be drawn based on two-dimensional images, so a virtual space can be experienced easily. For example, a user can be given the experience of visiting an actually existing venue based on photographs of it. The virtual space image drawn by the drawing processing unit 13 can serve various purposes; besides participation by avatars (alter egos) and characters, it is of course also applicable to economic activities and advertising.
Further, according to the information processing apparatus 10 of the present embodiment, actual photographs can be used to generate the virtual space. This keeps working time and cost far lower than producing the virtual space as so-called photo-quality, high-grade three-dimensional CG data.
The embodiments have been described in detail above, but the present invention is not limited to these specific embodiments; various modifications and alterations are possible within the scope of the claims. All or some of the constituent elements of the foregoing embodiments may also be combined.
Description of the reference numerals
10 information processing apparatus
11 acquisition unit
12 data generation unit
13 drawing processing unit
14 image data storage unit
20 camera
Claims (9)
1. An information processing device is provided with:
an acquisition unit that acquires two-dimensional image data in which an object or a landscape is captured;
a data generation unit that generates three-dimensional virtual space data including the object or landscape based on the two-dimensional image data; and
a drawing processing unit that draws a virtual space image seen from a virtual camera based on the three-dimensional virtual space data.
2. The information processing apparatus according to claim 1, wherein,
the data generation unit generates the three-dimensional virtual space data as a three-dimensional model.
3. The information processing apparatus according to claim 1, wherein,
the data generation unit generates the three-dimensional virtual space data as image data associated with a position corresponding to the camera that captured the object or landscape,
the drawing processing unit draws a virtual space image viewed from a virtual camera disposed at the position based on the three-dimensional virtual space data.
4. The information processing apparatus according to claim 1, wherein,
the drawing processing unit controls the position or angle of the virtual camera based on the operation of the user.
5. The information processing apparatus according to claim 1, wherein,
the data generation unit generates a three-dimensional model of an object or a moving body,
the drawing processing unit draws the three-dimensional model generated by the data generation unit so as to be superimposed on the virtual space image.
6. The information processing apparatus according to claim 5, wherein,
the drawing processing unit switches the position of the virtual camera in accordance with the position of the moving body in the virtual space.
7. The information processing apparatus according to claim 1, wherein,
the drawing processing unit displays a label associated with a display element included in the two-dimensional image data in association with that display element in the virtual space image.
8. An information processing method includes:
an acquisition step of acquiring two-dimensional image data in which a photographic subject or a landscape is captured;
a data generation step of generating three-dimensional virtual space data including the object or landscape based on the two-dimensional image data; and
a drawing processing step of drawing a virtual space image seen from a virtual camera based on the three-dimensional virtual space data.
9. A program for causing a computer to execute the steps of:
an acquisition step of acquiring two-dimensional image data in which a photographic subject or a landscape is captured;
a data generation step of generating three-dimensional virtual space data including the object or landscape based on the two-dimensional image data; and
a drawing processing step of drawing a virtual space image seen from a virtual camera based on the three-dimensional virtual space data.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022109933 | 2022-07-07 | | |
JP2022-109933 | 2022-07-07 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117372655A (en) | 2024-01-09 |
Family
ID=85158985
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211658415.9A | CN117372655A (en), Pending | 2022-07-07 | 2022-12-22 |
Country Status (2)
Country | Link |
---|---|
JP (2) | JP7218979B1 (en) |
CN (1) | CN117372655A (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024200063A1 (en) * | 2023-03-24 | 2024-10-03 | Interdigital Ce Patent Holdings, Sas | Avatar metadata representation |
JP7556484B1 (en) | 2024-02-06 | 2024-09-26 | Toppanホールディングス株式会社 | Information processing device, information processing program, and information processing method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4983494B2 (en) | 2007-09-07 | 2012-07-25 | カシオ計算機株式会社 | Composite image output apparatus and composite image output processing program |
JP6201170B2 (en) | 2013-03-29 | 2017-09-27 | 株式会社コナミデジタルエンタテインメント | Application control program, application control method, and application control apparatus |
JP6856572B2 (en) | 2018-04-04 | 2021-04-07 | 株式会社コロプラ | An information processing method, a device, and a program for causing a computer to execute the information processing method. |
JP2020144748A (en) | 2019-03-08 | 2020-09-10 | 株式会社Jvcケンウッド | Information processor, method for processing information, and program |
2022
- 2022-10-28 JP JP2022173474A patent/JP7218979B1/en active Active
- 2022-12-22 CN CN202211658415.9A patent/CN117372655A/en active Pending
2023
- 2023-01-19 JP JP2023006654A patent/JP2024008803A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
JP2024008779A (en) | 2024-01-19 |
JP7218979B1 (en) | 2023-02-07 |
JP2024008803A (en) | 2024-01-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117372655A (en) | Information processing device, information processing method, and program | |
CN100534158C (en) | Generating images combining real and virtual images | |
CN110874818B (en) | Image processing and virtual space construction method, device, system and storage medium | |
US20030202120A1 (en) | Virtual lighting system | |
US20050195332A1 (en) | Image processing method and apparatus | |
US20080316432A1 (en) | Digital Image Projection System | |
US20110001935A1 (en) | Digital image projection system | |
JP2006053694A (en) | Space simulator, space simulation method, space simulation program and recording medium | |
KR102435185B1 (en) | How to create 3D images based on 360° VR shooting and provide 360° VR contents service | |
JP2020035392A (en) | Remote communication system and the like | |
JP2015506030A (en) | System for shooting video movies | |
Marner et al. | Exploring interactivity and augmented reality in theater: A case study of Half Real | |
JP2015099545A (en) | Image generation system and image generation program | |
RU2735066C1 (en) | Method for displaying augmented reality wide-format object | |
Takatori et al. | Large-scale projection-based immersive display: The design and implementation of largespace | |
TWI515691B (en) | Composition video producing method by reconstruction the dynamic situation of the capture spot | |
JP3387856B2 (en) | Image processing method, image processing device, and storage medium | |
JP3392078B2 (en) | Image processing method, image processing device, and storage medium | |
JP7447403B2 (en) | Information processing device, information processing system, information processing method and program | |
JP2022093262A (en) | Image processing apparatus, method for controlling image processing apparatus, and program | |
Lee | Wand: 360∘ video projection mapping using a 360∘ camera | |
Woodward et al. | Case Digitalo-A range of virtual and augmented reality solutions in construction application | |
KR102516780B1 (en) | User upload VR system | |
Benítez Iglesias et al. | Multi-Camera Workflow Applied to a Cultural Heritage Building: Alhambra’s Torre de la Cautiva from the Inside | |
WO2023090038A1 (en) | Information processing apparatus, image processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40105180; Country of ref document: HK |