CN111566706A - Image generation system, image generation method, and program

Publication number: CN111566706A
Authority: CN (China)
Prior art keywords: image, virtual, virtual space, live, action
Legal status: Pending
Application number: CN201780097875.XA
Other languages: Chinese (zh)
Inventor: 向谷实
Current Assignee: Ongakukan Co., Ltd.
Original Assignee: Ongakukan Co., Ltd.

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics

Abstract

The invention provides an image generation system, an image generation method, and a program that can generate an image in which a live-action image of real space and an image of a virtual space are combined. A live-action image storage unit (100) stores a live-action image captured by a camera provided on an actual railway vehicle moving in actual space in association with a virtual vehicle position in a virtual space. A virtual vehicle position updating unit (106) updates the virtual vehicle position in the virtual space in accordance with the user's operation. A virtual space image generation unit (110) generates a virtual space image that represents the situation in the virtual space viewed from a viewpoint position in the virtual space corresponding to the updated virtual vehicle position. A live-action image acquisition unit (112) acquires 1 or more live-action images specified on the basis of the updated virtual vehicle position from the live-action image storage unit (100). A composite image generation unit (114) generates a composite image of the generated virtual space image and the acquired 1 or more live-action images.

Description

Image generation system, image generation method, and program
Technical Field
The invention relates to an image generation system, an image generation method and a program.
Background
A railway simulator is known which can generate an image in which a live-action image of real space and a so-called computer graphics (CG) image, that is, an image of a virtual space, are synthesized. As an example of such a railway simulator, patent document 1 describes a railway simulator capable of generating a composite image in which live-action moving image data of an actual scene viewed from the driver's seat of a running train is synthesized with a 3-dimensional CG model.
In the technique described in patent document 1, distance information from a virtual viewpoint is calculated based on the on-screen position of a feature point specified by inter-frame tracking of live-action moving image data captured in advance and on 3-dimensional information of the feature point. Then, a composite image in which the generated image of the 3-dimensional CG model and the reproduced image of the live-action moving image data are synthesized is generated based on the calculated distance information.
Documents of the prior art
Patent document
Patent document 1: Japanese laid-open patent publication No. 2008-15576
Disclosure of Invention
Problems to be solved by the invention
However, in the technique described in patent document 1, when generating a composite image, various arithmetic processes such as specifying the position on the screen of the feature point, calculating the distance information, and generating an image of the CG model based on the distance information must be executed, and thus the processing load is heavy.
The present invention has been made in view of the above-described problems, and an object thereof is to provide an image generation system, an image generation method, and a program that can generate an image in which a captured real image in real space and an image in virtual space are combined with each other with a light processing load.
Means for solving the problems
The image generation system of the present invention includes: a live-action image storage unit that stores a live-action image captured by a camera installed on an actual railway vehicle moving in actual space in association with a virtual vehicle position in a virtual space; a virtual vehicle position updating unit that updates the virtual vehicle position in the virtual space according to a user operation; a virtual space image generation unit that generates a virtual space image representing a situation in the virtual space viewed from a viewpoint position in the virtual space corresponding to the updated virtual vehicle position; a live-action image acquisition unit that acquires 1 or more live-action images specified based on the updated virtual vehicle position from the live-action image storage unit; and a composite image generation unit that generates a composite image of the generated virtual space image and the acquired 1 or more live-action images.
In one aspect of the present invention, the live-action image storage unit stores a combination of a plurality of live-action images captured by a plurality of cameras provided on the actual railway vehicle in association with the virtual vehicle position; the virtual space image generation unit generates, for each of a plurality of viewpoints in the virtual space that correspond to the respective cameras and are specified based on the updated virtual vehicle position, the virtual space image representing the situation in the virtual space viewed from the position of that viewpoint; the live-action image acquisition unit acquires 1 or more of the combinations specified based on the updated virtual vehicle position from the live-action image storage unit; and the composite image generation unit generates, for each of the plurality of cameras, a composite image of the virtual space image representing the virtual space viewed from the viewpoint position corresponding to that camera and 1 or more live-action images captured by that camera included in the acquired 1 or more combinations.
In this aspect, the image generation system may further include: a driver console at which a driver trainee performs simulated driving operations; a simulated railway vehicle that a conductor trainee can board and alight from; a simulated platform disposed adjacent to the simulated railway vehicle; a front display panel disposed in front of the driver console, which displays a composite image of 1 or more live-action images captured by the forward-facing camera positioned at the front of the actual railway vehicle and the virtual space image representing the situation in the virtual space viewed from the viewpoint position corresponding to that camera; and a display panel disposed in front of the simulated platform, which displays a composite image of 1 or more live-action images captured by the forward-facing camera positioned at a side of the actual railway vehicle and the virtual space image representing the situation in the virtual space viewed from the viewpoint position corresponding to that camera.
Alternatively, the image generation system may further include: a simulated railway vehicle that a conductor trainee can board and alight from; a simulated platform disposed adjacent to a side of the simulated railway vehicle; a display panel disposed behind the simulated railway vehicle, which displays a composite image of 1 or more live-action images captured by the rearward-facing camera positioned at the rear of the actual railway vehicle and the virtual space image representing the situation in the virtual space viewed from the viewpoint position corresponding to that camera; and a display panel disposed in front of the simulated platform, which displays a composite image of 1 or more live-action images captured by the forward-facing camera positioned at a side of the actual railway vehicle and the virtual space image representing the situation in the virtual space viewed from the viewpoint position corresponding to that camera.
In one aspect of the present invention, the composite image generation unit repeatedly generates the composite image at predetermined time intervals in accordance with the user operation.
Further, an image generation method of the present invention includes: updating the virtual vehicle position in the virtual space according to the user operation; generating a virtual space image representing a situation in the virtual space viewed from a position of a viewpoint in the virtual space corresponding to the updated virtual vehicle position; acquiring 1 or more of the live-action images specified based on the updated virtual vehicle position from a live-action image storage means that stores live-action images captured by a camera installed on an actual railway vehicle moving in an actual space in association with the virtual vehicle position in the virtual space; and generating a composite image of the generated virtual space image and the acquired 1 or more live-action images.
Further, the program of the present invention causes a computer to execute the steps of: updating the virtual vehicle position in the virtual space according to the user operation; generating a virtual space image representing a situation of the virtual space viewed from a position of a viewpoint within the virtual space corresponding to the updated virtual vehicle position; acquiring 1 or more of the live-action images specified based on the updated virtual vehicle position from a live-action image storage means that stores live-action images captured by a camera installed on an actual railway vehicle moving in a real space in association with the virtual vehicle position in the virtual space; and generating a composite image of the generated virtual space image and the acquired 1 or more live-action images.
Drawings
Fig. 1 is an external perspective view showing an example of the overall configuration of a railway simulator system according to an embodiment of the present invention.
Fig. 2 is a diagram showing an example of the configuration of the simulated railway vehicle and the driver console.
Fig. 3 is a diagram showing one example of a data structure of live-action image management data.
Fig. 4 is a diagram showing one example of the virtual space.
Fig. 5 is an explanatory diagram for explaining an example of specifying the position and direction of the virtual railway vehicle object.
Fig. 6 is a diagram showing an example of a situation in which the virtual human object is located in front of the virtual railway vehicle object in the virtual space.
Fig. 7A is a diagram showing an example of a front virtual space image.
Fig. 7B is a diagram showing an example of a front live-action frame image.
Fig. 7C is a diagram showing an example of a front composite image.
Fig. 8 is a diagram showing an example of a situation in which the virtual human object is located on the right side of the virtual railway vehicle object in the virtual space.
Fig. 9 is a diagram showing an example of a front composite image.
Fig. 10A is a diagram showing an example of a right-side virtual space image.
Fig. 10B is a diagram showing an example of a right-side live-action frame image.
Fig. 10C is a diagram showing an example of a right-side composite image.
Fig. 11 is a diagram showing an example of a situation in which the virtual human object is located on the right side of the virtual railway vehicle object in the virtual space.
Fig. 12A is a diagram showing an example of a right-side virtual space image.
Fig. 12B is a diagram showing an example of a right-side live-action frame image.
Fig. 12C is a diagram showing an example of a right-side composite image.
Fig. 13 is a diagram showing an example of a situation in which the virtual human object is located behind the virtual railway vehicle object in the virtual space.
Fig. 14A is a diagram showing an example of a rear virtual space image.
Fig. 14B is a diagram showing an example of a rear live-action frame image.
Fig. 14C is a diagram showing an example of a rear composite image.
Fig. 15 is a functional block diagram showing an example of functions of a server according to an embodiment of the present invention.
Fig. 16 is a flowchart showing an example of a flow of processing performed by the server according to the embodiment of the present invention.
Detailed Description
Hereinafter, one embodiment of the present invention will be described in detail with reference to the drawings.
Fig. 1 is an external perspective view showing an example of the overall configuration of a railway simulator system 1 according to an embodiment of the present invention.
The railway simulator system 1 illustrated in fig. 1 is a system that simulates a vehicle of an actual railway moving in actual space (an actual railway vehicle) for training the duties of a conductor or a driver. Hereinafter, a person undergoing conductor training is referred to as a conductor trainee, and a person undergoing driver training is referred to as a driver trainee. The application of the railway simulator system 1 illustrated in fig. 1 is not limited to training the duties of a conductor or a driver. The railway simulator system 1 may also be used, for example, to provide simulated conductor or driver experiences.
As shown in fig. 1, the railway simulator system 1 of the present embodiment includes, as main components, a simulated railway vehicle 10, a driver console 20, a conductor instructor station 30, and a driver instructor station 40.
The simulated railway vehicle 10 simulates a part of the rear of an actual railway vehicle. The conductor trainee can board and alight from the simulated railway vehicle 10. A simulated crew compartment 11 is provided in the simulated railway vehicle 10, and a rear display panel P1 is provided behind the simulated crew compartment 11.
A left simulated platform 12 is provided adjacent to the left side surface of the simulated railway vehicle 10. A left-side display panel P2 is provided in front of the left simulated platform 12. Further, a right simulated platform 13 is provided adjacent to the right side surface of the simulated railway vehicle 10. A right-side display panel P3 is provided in front of the right simulated platform 13.
The driver console 20 simulates a driver console provided in the cab of an actual railway vehicle. At the driver console 20, the driver trainee performs simulated driving operations. The driver console 20 is provided with a travel command unit 21 including, for example, a handle (lever). Further, a front display panel P4 is provided in front of the driver console 20.
The simulated crew compartment 11 and the driver console 20 may be provided with voice communication devices that allow the driver trainee and the conductor trainee to communicate with each other.
The simulated railway vehicle 10 and the driver console 20 need not be installed in the same space, and may be installed, for example, in adjacent rooms separated by a partition. The simulated railway vehicle 10 and the driver console 20 may also be installed at locations remote from each other. In this way, even when the conductor trainee and the driver trainee are far apart, they can train in cooperation with each other.
A conductor instructor monitors the conductor trainee and the like at the conductor instructor station 30. The conductor instructor station 30 is provided with a plurality of display panels and the like, and the conductor instructor can monitor the training status of the conductor trainee by checking these display panels.
A driver instructor monitors the driver trainee and the like at the driver instructor station 40. The driver instructor station 40 is provided with a plurality of display panels and the like, and the driver instructor can monitor the training status of the driver trainee by checking these display panels.
Fig. 2 is a diagram showing an example of a configuration of the simulated railway vehicle 10 and the driver's console 20. As shown in fig. 2, the simulated railway vehicle 10 according to the present embodiment includes a server 50. The driver console 20 of the present embodiment includes a server 52.
The server 50 includes, for example, a processor 50a, a storage unit 50b, and a communication unit 50c. The processor 50a is a program control device such as a CPU that operates according to a program installed in the server 50, for example. The storage unit 50b is a storage element such as a ROM, a RAM, a hard disk drive, or the like. The storage unit 50b stores a program or the like executed by the processor 50a. The communication unit 50c is a communication interface such as a network board for transmitting and receiving data to and from the server 52. The server 50 transmits and receives information to and from the server 52 via the communication unit 50c.
The server 52 includes, for example, a processor 52a, a storage unit 52b, and a communication unit 52c. The processor 52a is a program control device such as a CPU that operates according to a program installed in the server 52, for example. The storage unit 52b is a storage element such as a ROM, a RAM, a hard disk drive, or the like. The storage unit 52b stores a program or the like executed by the processor 52a. The communication unit 52c is a communication interface such as a network board for transmitting and receiving data to and from the server 50. The server 52 transmits and receives information to and from the server 50 via the communication unit 52c.
In the present embodiment, moving images for training are captured in advance by cameras provided on an actual railway vehicle while the actual railway vehicle travels on the railway line in actual space that is the subject of training using the railway simulator system 1.
Here, for example, moving images are captured by 4 cameras provided on the actual railway vehicle. Hereinafter, these 4 cameras are referred to as the rear real camera, the left real camera, the right real camera, and the front real camera, respectively. The rear real camera is, for example, a rearward-facing camera located at the rear of the rearmost vehicle of the actual railway vehicle, which captures the view to the rear from the crew compartment. The left real camera is, for example, a forward-facing camera located at the left side of the crew compartment of the rearmost vehicle of the actual railway vehicle, which captures the view to the front from the left side of the crew compartment. The right real camera is, for example, a forward-facing camera located at the right side of the crew compartment of the rearmost vehicle of the actual railway vehicle, which captures the view to the front from the right side of the crew compartment. The front real camera is, for example, a forward-facing camera located at the front of the leading vehicle of the actual railway vehicle, which captures the view to the front from the driver's seat.
When moving images are captured by the 4 cameras installed on the actual railway vehicle as described above, 4 moving images including a plurality of frame images are generated. Hereinafter, a moving image generated by a camera provided on an actual railway vehicle is referred to as a live-action moving image. For example, the live-action moving images generated by the rear real camera, the left real camera, the right real camera, and the front real camera are referred to as a rear live-action moving image, a left live-action moving image, a right live-action moving image, and a front live-action moving image, respectively.
In the present embodiment, the rear live-action moving image, the left live-action moving image, the right live-action moving image, and the front live-action moving image are synchronized at the time of shooting. Frame images of the rear, left, right, and front live-action moving images that are captured at the same time are given the same frame number.
In the present embodiment, the position of the actual railway vehicle at the time of capturing the live-action image, which is a frame image included in the live-action moving image, is specified in advance. Hereinafter, the position of the actual railway vehicle specified in this way is referred to as an actual vehicle position. Here, for example, the actual vehicle position at the time of capturing each frame image included in the live-action moving image may be specified by using a GPS module or the like when capturing each frame image. Here, for example, the position of a point representing the actual railway vehicle, such as the position of the front actual camera or the position of the center of the entire actual railway vehicle composed of a plurality of vehicles, may be specified as the actual vehicle position.
In the present embodiment, the live-action moving image captured as described above, the actual vehicle position at the time of capturing the live-action moving image, and the like are managed by the live-action image management data (see fig. 3). The live-action image management data may be stored in the storage unit 50b of the server 50, for example.
Fig. 3 is a diagram showing an example of the data structure of the live-action image management data according to the present embodiment. As shown in fig. 3, the live-action image management data includes a live-action image set ID, a rear live-action frame image, a left live-action frame image, a right live-action frame image, a front live-action frame image, actual vehicle position data, and virtual vehicle position data. One piece of live-action image management data corresponds to 1 frame image included in the live-action moving image. Therefore, for example, the storage unit 50b of the server 50 stores as many pieces of live-action image management data as there are frame images included in the live-action moving image (the number of frames of the live-action moving image).
The live-action image set ID is identification information of the live-action image management data. Here, for example, the frame number may be set as the value of the live-action image set ID.
The rear live-action frame image, the left live-action frame image, the right live-action frame image, and the front live-action frame image included in the live-action image management data are set to the live-action images of the frame corresponding to the live-action image set ID of that management data. Here, for example, the frame image included in the rear live-action moving image is set as the rear live-action frame image. Further, the frame image included in the left live-action moving image is set as the left live-action frame image. Further, the frame image included in the right live-action moving image is set as the right live-action frame image. Further, the frame image included in the front live-action moving image is set as the front live-action frame image. As described above, the live-action image management data of the present embodiment can include a combination of a plurality of live-action images.
The value of the actual vehicle position data included in the live-action image management data is set, for example, to a value indicating the actual vehicle position at the time when the frame image corresponding to that management data was captured. Here, as described above, a value indicating the position of a point representing the actual railway vehicle may be set as the value of the actual vehicle position data. The value of the actual vehicle position data may be expressed, for example, by a combination of latitude and longitude.
The value of the virtual vehicle position data included in the live-action image management data is set to a value indicating the position in the virtual space 60 illustrated in fig. 4 that corresponds to the actual vehicle position indicated by the actual vehicle position data of that management data. Hereinafter, the position in the virtual space 60 corresponding to the actual vehicle position is referred to as the virtual vehicle position. Here, the value of the virtual vehicle position data may be expressed, for example, by a combination of an X coordinate value, a Y coordinate value, and a Z coordinate value.
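As a concrete illustration only (the patent does not prescribe any particular schema or programming language), one piece of the live-action image management data of fig. 3 could be represented by a record such as the following Python sketch, in which all field and type names are assumptions:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LiveActionImageManagementData:
    """One record of the live-action image management data shown in fig. 3."""
    image_set_id: int                                # live-action image set ID (e.g. the frame number)
    rear_frame: bytes                                # rear live-action frame image
    left_frame: bytes                                # left live-action frame image
    right_frame: bytes                               # right live-action frame image
    front_frame: bytes                               # front live-action frame image
    actual_vehicle_position: Tuple[float, float]     # (latitude, longitude) at capture time
    virtual_vehicle_position: Tuple[float, float, float]  # (X, Y, Z) in the virtual space 60
```

Storing these records sorted by the live-action image set ID (the frame number) keeps the look-ups described later straightforward; this ordering is likewise only an illustrative choice.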
Fig. 4 is a diagram showing an example of the virtual space 60 of the present embodiment. The virtual space 60 shown in fig. 4 is a 3-dimensional virtual space. As shown in fig. 4, a virtual railway vehicle object 62 composed of a plurality of polygons is arranged in the virtual space 60 according to the present embodiment as a virtual object corresponding to the actual railway vehicle. Here, the virtual railway vehicle object 62 according to the present embodiment is set as a virtual object that is not transparent and is visible.
Here, for example, when the actual railway train is composed of 6 vehicles, 6 virtual railway vehicle objects 62 (62a, 62b, 62c, 62d, 62e, and 62f) connected to one another are arranged in the virtual space 60. The virtual railway vehicle object 62a corresponds to, for example, the leading vehicle of the actual railway vehicle. The virtual railway vehicle object 62b, the virtual railway vehicle object 62c, the virtual railway vehicle object 62d, and the virtual railway vehicle object 62e correspond to, for example, the 2nd, 3rd, 4th, and 5th vehicles from the front of the actual railway vehicle, respectively. The virtual railway vehicle object 62f corresponds to, for example, the rearmost vehicle of the actual railway vehicle.
In the present embodiment, as described above, an actual vehicle position in the actual space corresponds one-to-one to a virtual vehicle position in the virtual space 60. Here, in the present embodiment, a track corresponding to the track of the actual railway vehicle in the actual space at the time the live-action moving image was captured is set in advance in the virtual space 60. In the present embodiment, as shown in fig. 5, the positions and directions of the 6 virtual railway vehicle objects 62 are uniquely specified based on the virtual vehicle position and the track.
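The present embodiment only requires that actual vehicle positions correspond one-to-one to virtual vehicle positions; the mapping itself is not specified. A minimal sketch of one possible mapping, assuming the actual vehicle position is given as latitude and longitude and the virtual space 60 uses a local metric coordinate system around a reference point, is shown below (all names and the projection are assumptions):

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def actual_to_virtual_position(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg, height=0.0):
    """Map an actual vehicle position (latitude, longitude) to a virtual
    vehicle position (X, Y, Z) in the virtual space 60 using a simple local
    equirectangular approximation around a reference point."""
    d_lat = math.radians(lat_deg - ref_lat_deg)
    d_lon = math.radians(lon_deg - ref_lon_deg)
    x = EARTH_RADIUS_M * d_lon * math.cos(math.radians(ref_lat_deg))  # east
    z = EARTH_RADIUS_M * d_lat                                        # north
    return (x, height, z)  # Y is treated here as the height axis
```

Any georeferencing that keeps the correspondence one-to-one would serve the same purpose.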
Fig. 5 is an explanatory view explaining an example of designation of the position and direction of the virtual railway vehicle object based on the virtual vehicle position and the track. In fig. 5, the track of the virtual railway vehicle object 62 gently curved rightward is indicated by a two-dot chain line. In fig. 5, the virtual railway vehicle object 62 is assumed to travel from the bottom to the top in fig. 5 along the track.
Here, for example, the actual vehicle position is set as the position of the front real camera. The position A1 of the center of the front surface of the virtual railway vehicle object 62a is set as the virtual vehicle position. In this case, a position A2 is specified, which is the intersection of the track with a circle of radius d centered on the position A1 and which is located rearward of the position A1 with respect to the traveling direction of the virtual railway vehicle object 62. Similarly, a position A3 is specified, which is the intersection of the track with a circle of radius d centered on the position A2 and which is located rearward of the position A2 with respect to the traveling direction. Further, a position A4 is specified, which is the intersection of the track with a circle of radius d centered on the position A3 and which is located rearward of the position A3 with respect to the traveling direction. Further, a position A5 is specified, which is the intersection of the track with a circle of radius d centered on the position A4 and which is located rearward of the position A4 with respect to the traveling direction. Further, a position A6 is specified, which is the intersection of the track with a circle of radius d centered on the position A5 and which is located rearward of the position A5 with respect to the traveling direction. Here, the length d of the radius is not particularly limited, and may be, for example, the length of 1 virtual railway vehicle object 62, or the length of 1 virtual railway vehicle object 62 plus a margin.
Then, the virtual railway vehicle object 62a is disposed in the virtual space 60 such that the center of its front surface is located at the position A1 and the normal direction of the front surface is the tangential direction B1 of the track at the position A1. The virtual railway vehicle object 62b is disposed in the virtual space 60 such that the center of its front surface is located at the position A2 and the normal direction of the front surface is the tangential direction B2 of the track at the position A2. The virtual railway vehicle object 62c is disposed in the virtual space 60 such that the center of its front surface is located at the position A3 and the normal direction of the front surface is the tangential direction B3 of the track at the position A3. The virtual railway vehicle object 62d is disposed in the virtual space 60 such that the center of its front surface is located at the position A4 and the normal direction of the front surface is the tangential direction B4 of the track at the position A4. The virtual railway vehicle object 62e is disposed in the virtual space 60 such that the center of its front surface is located at the position A5 and the normal direction of the front surface is the tangential direction B5 of the track at the position A5. The virtual railway vehicle object 62f is disposed in the virtual space 60 such that the center of its front surface is located at the position A6 and the normal direction of the front surface is the tangential direction B6 of the track at the position A6. In this way, the positions and directions of the 6 virtual railway vehicle objects 62 corresponding to the virtual vehicle position can be specified.
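A minimal sketch of the placement procedure described with reference to fig. 5 is shown below. It assumes the track set in the virtual space 60 is available as a densely sampled polyline and approximates the circle/track intersections by walking backward along the samples; the function and variable names are illustrative only:

```python
import math

def _dist(p, q):
    """Straight-line distance between two (x, z) track samples."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def position_indices_along_track(track, a1_index, d, count=6):
    """Return track-sample indices for the positions A1..A6 of fig. 5.

    `track` is the preset track as a dense polyline of (x, z) points ordered
    in the traveling direction, and `a1_index` is the sample closest to the
    virtual vehicle position A1. Each following position is the first sample
    behind the previous one whose straight-line distance from it reaches the
    radius d, approximating the circle/track intersection.
    """
    indices = [a1_index]
    i = a1_index
    for _ in range(count - 1):
        while i > 0 and _dist(track[i], track[indices[-1]]) < d:
            i -= 1
        indices.append(i)
    return indices

def tangent_direction(track, index):
    """Unit tangent of the track at a sample; used as the front-face normal
    directions B1..B6 of the virtual railway vehicle objects 62a-62f."""
    p = track[max(index - 1, 0)]
    q = track[min(index + 1, len(track) - 1)]
    dx, dz = q[0] - p[0], q[1] - p[1]
    norm = math.hypot(dx, dz) or 1.0
    return (dx / norm, dz / norm)

# Each virtual railway vehicle object 62 is then placed with the center of its
# front face at track[i] and its front-face normal along tangent_direction(track, i).
```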
In the virtual space 60, the positions of viewpoints (virtual cameras) 64 and line-of-sight directions 66, which are the directions of the lines of sight from the viewpoints 64, are set so as to correspond to the respective cameras installed on the actual railway vehicle.
Fig. 4 shows the positions of the viewpoint 64a, the viewpoint 64b, the viewpoint 64c, and the viewpoint 64d. Fig. 4 also shows the line-of-sight direction 66a at the viewpoint 64a, the line-of-sight direction 66b at the viewpoint 64b, the line-of-sight direction 66c at the viewpoint 64c, and the line-of-sight direction 66d at the viewpoint 64d.
The position of the viewpoint 64a, the position of the viewpoint 64b, the position of the viewpoint 64c, and the position of the viewpoint 64d in the virtual space 60 shown in fig. 4 correspond to the positions in real space of the rear real camera, the left real camera, the right real camera, and the front real camera, respectively. The line-of-sight direction 66a, the line-of-sight direction 66b, the line-of-sight direction 66c, and the line-of-sight direction 66d shown in fig. 4 correspond to the imaging directions in real space of the rear real camera, the left real camera, the right real camera, and the front real camera, respectively.
Therefore, in the present embodiment, as shown in fig. 4, for example, the viewpoint 64a is arranged behind the virtual railway vehicle object 62f and the line-of-sight direction 66a is directed rearward. For example, the viewpoint 64b is disposed on the left side of the virtual railroad vehicle object 62f, and the line-of-sight direction 66b is directed forward. For example, the viewpoint 64c is arranged on the right side of the virtual railroad vehicle object 62f, and the line-of-sight direction 66c is directed forward. For example, the viewpoint 64d is arranged in front of the virtual railroad vehicle object 62a, and the line-of-sight direction 66d is directed forward.
In the present embodiment, for example, the viewpoint 64a, the viewpoint 64b, and the viewpoint 64c are fixed at positions relative to the position of the virtual railway vehicle object 62f. The line-of-sight direction 66a, the line-of-sight direction 66b, and the line-of-sight direction 66c are fixed in directions relative to the direction of the virtual railway vehicle object 62f. The viewpoint 64d is fixed at a position relative to the position of the virtual railway vehicle object 62a. The line-of-sight direction 66d is fixed in a direction relative to the direction of the virtual railway vehicle object 62a.
In the present embodiment, as described above, the positions and directions of the 6 virtual railway vehicle objects 62 are uniquely specified based on the virtual vehicle position. Therefore, as shown in fig. 5, the position of the viewpoint 64a, the position of the viewpoint 64b, the position of the viewpoint 64c, the position of the viewpoint 64d, the line-of-sight direction 66a, the line-of-sight direction 66b, the line-of-sight direction 66c, and the line-of-sight direction 66d are also uniquely specified based on the virtual vehicle position (for example, the position A1 in fig. 5).
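Since the viewpoints 64 and line-of-sight directions 66 are fixed relative to the virtual railway vehicle objects 62a and 62f, their poses can be derived from the car poses by a simple rigid transform. The following sketch illustrates this under assumed names and a 2-dimensional (x, z) simplification; it is not a definitive implementation:

```python
import math

def viewpoint_pose(car_position, car_heading, rel_offset, rel_yaw):
    """Compute the pose of a viewpoint 64 from the pose of the virtual
    railway vehicle object 62 it is fixed to.

    `car_position` is (x, z), `car_heading` is the car's yaw in radians,
    `rel_offset` is the viewpoint offset (forward, right) in the car's local
    frame, and `rel_yaw` is the relative line-of-sight direction 66
    (0 = forward, pi = rearward). All names are illustrative.
    """
    fwd, right = rel_offset
    sin_h, cos_h = math.sin(car_heading), math.cos(car_heading)
    x = car_position[0] + fwd * sin_h + right * cos_h
    z = car_position[1] + fwd * cos_h - right * sin_h
    return (x, z), car_heading + rel_yaw

# Illustrative use: viewpoint 64a is fixed behind car 62f looking rearward, e.g.
#   pos_64a, yaw_64a = viewpoint_pose(pos_62f, yaw_62f, (-2.0, 0.0), math.pi)
```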
As described later, virtual objects other than the virtual railway vehicle object 62, such as a virtual human object 68 (see fig. 6, 8, 11, and 13) composed of a plurality of polygons, may be disposed in the virtual space 60.
In the present embodiment, during training using the railway simulator system 1, the actual vehicle position of the actual railway vehicle being simulated by the railway simulator system 1 is changed in accordance with an operation of the travel command unit 21 by a user such as the driver trainee. Then, the virtual vehicle position is changed in accordance with the change in the actual vehicle position, and the position and direction of the virtual railway vehicle object 62 in the virtual space 60 are changed in accordance with the change in the virtual vehicle position. In the present embodiment, the position and direction of the virtual railway vehicle object 62 are changed along the track set in the virtual space 60 in accordance with the change in the virtual vehicle position.
In the present embodiment, a combination of 4 frame images is generated at a predetermined frame rate (for example, at 1/60-second intervals) during training using the railway simulator system 1. The frame images included in the generated combination are then displayed on the rear display panel P1, the left-side display panel P2, the right-side display panel P3, and the front display panel P4, respectively. Thus, in the present embodiment, during training, the frame images displayed on the rear display panel P1, the left-side display panel P2, the right-side display panel P3, and the front display panel P4 are updated at the predetermined frame rate. In this way, during the training period, the rear display panel P1, the left-side display panel P2, the right-side display panel P3, and the front display panel P4 display moving images.
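One possible shape of this per-frame processing, written as an illustrative Python loop in which the object names and methods are assumptions rather than elements of the present embodiment, is:

```python
import time

FRAME_INTERVAL_S = 1 / 60  # predetermined frame rate

def training_loop(sim, panels):
    """Illustrative per-frame update. `sim` is assumed to bundle the
    functional units of the server 50, and `panels` maps the four display
    panels P1-P4 to output callables; both are placeholders."""
    while sim.training_in_progress():
        travel_signal = sim.receive_travel_signal()          # cf. S101 in fig. 16
        sim.update_virtual_vehicle_position(travel_signal)   # virtual vehicle position updating unit 106
        poses = sim.specify_viewpoint_poses()                # viewpoint position specification unit 108
        cg_images = sim.render_virtual_space_images(poses)   # virtual space image generation unit 110
        live_set = sim.acquire_live_action_images()          # live-action image acquisition unit 112
        composites = sim.compose(cg_images, live_set)        # composite image generation unit 114
        for name in ("rear", "left", "right", "front"):
            panels[name](composites[name])                   # output / transmit to panels P1-P4
        time.sleep(FRAME_INTERVAL_S)
```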
Here, in the present embodiment, for example, a moving image whose frame images are composite images obtained by combining the rear live-action frame image with the virtual space image representing the situation in the virtual space 60 viewed from the viewpoint 64a in the line-of-sight direction 66a is displayed on the rear display panel P1. Hereinafter, this virtual space image is referred to as the rear virtual space image, and this composite image is referred to as the rear composite image.
On the left-side display panel P2, a moving image whose frame images are composite images obtained by combining the left live-action frame image with the virtual space image representing the situation in the virtual space 60 viewed from the viewpoint 64b in the line-of-sight direction 66b is displayed. Hereinafter, this virtual space image is referred to as the left-side virtual space image, and this composite image is referred to as the left-side composite image.
Further, on the right-side display panel P3, a moving image whose frame images are composite images obtained by combining the right live-action frame image with the virtual space image representing the situation in the virtual space 60 viewed from the viewpoint 64c in the line-of-sight direction 66c is displayed. Hereinafter, this virtual space image is referred to as the right-side virtual space image, and this composite image is referred to as the right-side composite image.
On the front display panel P4, a moving image whose frame images are composite images obtained by combining the front live-action frame image with the virtual space image representing the situation in the virtual space 60 viewed from the viewpoint 64d in the line-of-sight direction 66d is displayed. Hereinafter, this virtual space image is referred to as the front virtual space image, and this composite image is referred to as the front composite image.
The virtual space image according to the present embodiment is a computer graphics (CG) image obtained by rendering a view of the virtual space 60.
An example of generating a composite image will be described below with reference to fig. 6 to 10C.
For example, at a certain time point (time point t1), as shown in fig. 6, the virtual human object 68 is located in front of the virtual railway vehicle object 62a in the virtual space 60. In fig. 6, the track of the virtual railway vehicle object 62 is indicated by a two-dot chain line. Here, for example, when the live-action moving image corresponding to the virtual space 60 shown in fig. 6 was captured, the actual railway vehicle was traveling in the actual space on a track that curves gently to the right. In this case, as shown in fig. 6, the track of the virtual railway vehicle object 62 in the virtual space 60 is a track that curves gently to the right.
Then, the position of the viewpoint 64a, the position of the viewpoint 64b, the position of the viewpoint 64c, the position of the viewpoint 64d, the line-of-sight direction 66a, the line-of-sight direction 66b, the line-of-sight direction 66c, and the line-of-sight direction 66d are specified based on the virtual vehicle position in this situation. Then, a combination of the rear virtual space image, the left-side virtual space image, the right-side virtual space image, and the front virtual space image is generated based on the specified positions of the viewpoints 64 and line-of-sight directions 66. Fig. 7A shows an example of a front virtual space image 70 generated in this situation.
Then, the live-action image management data whose virtual vehicle position data indicates the virtual vehicle position in this situation is acquired. Then, the combination of the rear live-action frame image, the left live-action frame image, the right live-action frame image, and the front live-action frame image included in that live-action image management data is acquired. Fig. 7B shows an example of the front live-action frame image 72 included in the combination acquired in this situation.
Then, a rear composite image is generated by combining the generated rear virtual space image with the rear live-action frame image included in the acquired combination. Further, a left-side composite image is generated by combining the generated left-side virtual space image with the left live-action frame image included in the acquired combination. Further, a right-side composite image is generated by combining the generated right-side virtual space image with the right live-action frame image included in the acquired combination. Further, a front composite image is generated by combining the generated front virtual space image with the front live-action frame image included in the acquired combination. Fig. 7C shows an example of a front composite image 74 generated by combining the front virtual space image 70 shown in fig. 7A and the front live-action frame image 72 shown in fig. 7B.
It is also possible that there is no live-action image management data whose virtual vehicle position data indicates the virtual vehicle position in this situation. In this case, a plurality of pieces of live-action image management data specified based on the virtual vehicle position in this situation may be acquired. Then, a composite image may be generated by combining the virtual space image with an image obtained by interpolating the live-action images included in the plurality of pieces of live-action image management data.
For example, 2 pieces of live-action image management data may be acquired whose virtual vehicle position data indicate the positions on the track that are respectively ahead of and behind the virtual vehicle position in this situation and closest to it. Then, a rear composite image may be generated by combining the generated rear virtual space image with an image obtained by interpolating the rear live-action frame images included in each of the 2 pieces of live-action image management data acquired in this way. Further, a left-side composite image may be generated by combining the generated left-side virtual space image with an image obtained by interpolating the left live-action frame images included in each of the 2 acquired pieces of live-action image management data. Further, a right-side composite image may be generated by combining the generated right-side virtual space image with an image obtained by interpolating the right live-action frame images included in each of the 2 acquired pieces of live-action image management data. Further, a front composite image may be generated by combining the generated front virtual space image with an image obtained by interpolating the front live-action frame images included in each of the 2 acquired pieces of live-action image management data.
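An illustrative sketch of the interpolation and composition steps is shown below. It assumes the frame images are available as numeric arrays and that the virtual space image comes with an alpha mask marking the pixels covered by visible virtual objects; neither assumption is mandated by the present embodiment:

```python
import numpy as np

def interpolate_frames(frame_behind, frame_ahead, t):
    """Blend two live-action frame images (H x W x 3 arrays) captured just
    behind and just ahead of the current virtual vehicle position; t in [0, 1]
    is the normalized distance between the two capture positions."""
    return ((1.0 - t) * frame_behind + t * frame_ahead).astype(np.uint8)

def compose(virtual_space_image, alpha_mask, live_action_frame):
    """Overlay the rendered virtual space image onto a live-action frame.

    `alpha_mask` (H x W, values in [0, 1]) marks the pixels covered by visible
    virtual objects such as the virtual human object 68; producing it by
    rendering against a transparent background is only one possible choice.
    """
    a = alpha_mask[..., None]
    out = a * virtual_space_image + (1.0 - a) * live_action_frame
    return out.astype(np.uint8)
```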
Then, the rear composite image, the left-side composite image, the right-side composite image, and the front composite image generated in this way are displayed on the rear display panel P1, the left-side display panel P2, the right-side display panel P3, and the front display panel P4, respectively.
Thereafter, at a time point t2 after the time point t1, the virtual railway vehicle object 62 is moved to the position shown in fig. 8. In the situation shown in fig. 8, the position of the virtual human figure object 68 in the virtual space 60 does not change, but the virtual railway vehicle object 62 moves, so that the virtual human figure object 68 is positioned on the right side of the virtual railway vehicle object 62 f.
Fig. 9 shows an example of the front composite image 76 generated in the situation at the time point t2. As shown in fig. 8, in the situation at the time point t2, the virtual human object 68 does not enter the visual field range of the viewpoint 64d. Therefore, the image of the virtual human object 68 is not included in the front composite image 76 shown in fig. 9.
Fig. 10A shows an example of the right-side virtual space image 78 generated in the situation at the time point t2. Fig. 10B shows an example of the right-side live-action frame image 80 included in the combination acquired in the situation at the time point t2. In the situation at the time point t2, the virtual human object 68 enters the visual field range of the viewpoint 64c. Therefore, the right-side virtual space image 78 shown in fig. 10A includes the image of the virtual human object 68. Then, a right-side composite image 82 shown in fig. 10C is generated by combining the right-side virtual space image 78 shown in fig. 10A and the right-side live-action frame image 80 shown in fig. 10B. The right-side composite image 82 generated in this way is displayed on the right-side display panel P3.
As described above, in the present embodiment, the image of the virtual human object 68 that was displayed on the front display panel P4 at the time point t1 can be displayed on the right-side display panel P3, and not on the front display panel P4, at the time point t2.
Another example of generating a composite image is described below with reference to fig. 11 to 14C.
For example, at a certain time point (time point t3), as shown in fig. 11, the virtual human object 68 is positioned to the right of the virtual railway vehicle object 62e in the virtual space 60. In fig. 11, the track of the virtual railway vehicle object 62 is indicated by a two-dot chain line. Here, for example, when capturing a live-action moving image corresponding to the virtual space 60 shown in fig. 11, the actual railway vehicle travels on a track that curves gradually to the right in the actual space. In this case, as shown in fig. 11, the track in the virtual space 60 of the virtual railway vehicle object 62 is a track gently curved rightward.
Fig. 12A shows an example of the right-side virtual space image 84 generated in the situation at the time point t3. Fig. 12B shows an example of the right-side live-action frame image 86 included in the combination acquired in the situation at the time point t3. As shown in fig. 11, in the situation at the time point t3, the virtual human object 68 enters the visual field range of the viewpoint 64c. Therefore, the right-side virtual space image 84 shown in fig. 12A is an image including the virtual human object 68. Then, a right-side composite image 88 shown in fig. 12C is generated by combining the right-side virtual space image 84 shown in fig. 12A and the right-side live-action frame image 86 shown in fig. 12B. The right-side composite image 88 generated in this way is displayed on the right-side display panel P3.
Then, at a time point t4 after the time point t3, the virtual railway vehicle object 62 has moved to the position shown in fig. 13. In the situation shown in fig. 13, the position of the virtual human object 68 in the virtual space 60 has not changed from the time point t3, but because the virtual railway vehicle object 62 has moved, the virtual human object 68 is now located behind the virtual railway vehicle object 62f.
Fig. 14A shows an example of the rear virtual space image 90 generated in the situation at the time point t4. Fig. 14B shows an example of the rear live-action frame image 92 included in the combination acquired in the situation at the time point t4. In the situation at the time point t4, the virtual human object 68 does not enter the visual field range of the viewpoint 64c, but does enter the visual field range of the viewpoint 64a. Therefore, the rear virtual space image 90 shown in fig. 14A includes the image of the virtual human object 68. Then, a rear composite image 94 shown in fig. 14C is generated by combining the rear virtual space image 90 shown in fig. 14A and the rear live-action frame image 92 shown in fig. 14B. The rear composite image 94 generated in this way is displayed on the rear display panel P1.
As described above, in the present embodiment, the image of the virtual human object 68 that was displayed on the right-side display panel P3 at the time point t3 can be displayed on the rear display panel P1, and not on the right-side display panel P3, at the time point t4.
In the present embodiment, when generating a composite image, there is no need for manual work to superimpose CG on the live-action image. Furthermore, in the present embodiment, when generating a composite image, it is not necessary to perform heavy processing such as arithmetic processing for specifying, by image analysis of the live-action image, the position at which computer graphics (CG) are to be placed. Therefore, according to the present embodiment, an image in which a live-action image and a virtual space image are combined can be generated with a light processing load and without requiring much time.
In the present embodiment, virtual objects such as the virtual human object 68 are arranged in the virtual space 60 by modeling, and this determines where the images of those virtual objects appear in all of the images displayed on the plurality of display panels. Therefore, according to the present embodiment, moving images including composites of live-action images and virtual space images can easily be displayed on the plurality of display panels in a mutually consistent state.
In addition, a live-action image of real space captured by a camera may be mapped as a texture onto the surfaces of the polygons constituting the virtual human object 68. In this way, a composite image closer to the real scene can be generated.
The functions of the server 50 of the present embodiment and the processing executed by the server 50 of the present embodiment will be further described below.
Fig. 15 is a functional block diagram showing an example of the functions implemented by the server 50 according to the present embodiment. The server 50 of the present embodiment does not need to implement all of the functions shown in fig. 15, and may implement functions other than those shown in fig. 15.
As shown in fig. 15, the server 50 of the present embodiment functionally includes, for example, a live-action image storage unit 100, a virtual space data storage unit 102, a travel signal receiving unit 104, a virtual vehicle position updating unit 106, a viewpoint position specification unit 108, a virtual space image generation unit 110, a live-action image acquisition unit 112, a composite image generation unit 114, a composite image output unit 116, and a composite image transmission unit 118. The live-action image storage unit 100 and the virtual space data storage unit 102 are implemented mainly by the storage unit 50b. The travel signal receiving unit 104 and the composite image transmission unit 118 are implemented mainly by the communication unit 50c. The virtual vehicle position updating unit 106, the viewpoint position specification unit 108, the virtual space image generation unit 110, the live-action image acquisition unit 112, and the composite image generation unit 114 are implemented mainly by the processor 50a. The composite image output unit 116 is implemented mainly by the processor 50a, the rear display panel P1, the left-side display panel P2, and the right-side display panel P3. The server 50 thus plays the role of an image generation system that generates the composite images for the railway simulator according to the present embodiment.
The above functions may be implemented by the processor 50a executing a program that is installed in the server 50, which is a computer, and that includes instructions corresponding to the above functions. The program may be supplied to the server 50 via a computer-readable information storage medium such as an optical disk, a magnetic tape, a magneto-optical disk, or a flash memory, or via the internet or the like.
In the present embodiment, the live-action image storage unit 100 stores, for example, a live-action image captured by a camera provided on an actual railway vehicle moving in actual space, in association with a virtual vehicle position in the virtual space 60. Here, the live-action image storage unit 100 may store a combination of a plurality of live-action images captured by a plurality of cameras provided on the actual railway vehicle in association with the virtual vehicle position in the virtual space 60. For example, as described above, the live-action image storage unit 100 may store the above-described live-action image management data, each piece of which includes a combination of a plurality of live-action images captured at the same time.
In the present embodiment, the virtual space data storage unit 102 stores, for example, data indicating the position and direction of each virtual object in the virtual space 60. The virtual space data storage unit 102 may store polygon data representing the virtual objects, texture images mapped onto the virtual objects, and the like. The virtual space data storage unit 102 may store data indicating the track set in the virtual space 60. The virtual space data storage unit 102 may store data indicating the position and direction of the virtual railway vehicle object 62 relative to the virtual vehicle position. The virtual space data storage unit 102 may store data indicating the relative positions of the viewpoint 64a, the viewpoint 64b, and the viewpoint 64c with respect to the position of the virtual railway vehicle object 62f. The virtual space data storage unit 102 may store data indicating the relative directions of the line-of-sight direction 66a, the line-of-sight direction 66b, and the line-of-sight direction 66c with respect to the direction of the virtual railway vehicle object 62f. The virtual space data storage unit 102 may store data indicating the relative position of the viewpoint 64d with respect to the position of the virtual railway vehicle object 62a. The virtual space data storage unit 102 may store data indicating the relative direction of the line-of-sight direction 66d with respect to the direction of the virtual railway vehicle object 62a.
In the present embodiment, the travel signal receiving unit 104 receives a travel signal corresponding to an operation input to the travel command unit 21, for example. Here, for example, the travel command unit 21 may output the travel signal to the server 52 at a specified sampling rate, and the server 52 may transmit the travel signal to the server 50. Here, the time interval corresponding to the sampling rate may be the same as the time interval corresponding to the frame rate.
In the present embodiment, the virtual vehicle position updating unit 106 updates the virtual vehicle position in the virtual space 60, for example, in response to a user operation. Here, for example, the virtual vehicle position updating unit 106 may update the virtual vehicle position in the virtual space 60 based on the travel signal corresponding to the operation input received by the travel signal receiving unit 104.
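How the travel signal is converted into a new virtual vehicle position is not fixed by the present embodiment. The following sketch assumes, purely for illustration, that the virtual vehicle position is tracked as a distance along the preset track and that the travel signal carries a normalized power/brake value:

```python
class VirtualVehiclePositionUpdater:
    """Minimal sketch of the virtual vehicle position updating unit 106.

    The virtual vehicle position is kept as an arc-length distance along the
    track set in the virtual space 60; the mapping from travel signal to
    acceleration is an assumption, not part of the patent.
    """

    def __init__(self, track_length_m, dt=1 / 60):
        self.distance_m = 0.0    # position along the track
        self.speed_mps = 0.0
        self.track_length_m = track_length_m
        self.dt = dt

    def update(self, travel_signal):
        # Assumption: the travel signal carries a notch value in [-1, 1],
        # negative for braking and positive for powering.
        acceleration = 1.0 * travel_signal  # m/s^2, illustrative gain
        self.speed_mps = max(0.0, self.speed_mps + acceleration * self.dt)
        self.distance_m = min(self.track_length_m,
                              self.distance_m + self.speed_mps * self.dt)
        return self.distance_m
```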
In the present embodiment, the viewpoint position specification unit 108 specifies the position of the viewpoint 64 corresponding to the updated virtual vehicle position, for example. Here, for example, the viewpoint position specifying unit 108 may specify the position and direction of the virtual railway vehicle object 62 based on the virtual vehicle position updated by the virtual vehicle position updating unit 106. The viewpoint position specification unit 108 may specify the position of the viewpoint 64a, the position of the viewpoint 64b, the position of the viewpoint 64c, and the position of the viewpoint 64d based on the specified position and direction of the virtual railway vehicle object 62. For example, the viewpoint position specification unit 108 may specify the line-of-sight direction 66a, the line-of-sight direction 66b, the line-of-sight direction 66c, and the line-of-sight direction 66d based on the position and the direction of the specified virtual railway vehicle object 62.
In the present embodiment, the virtual space image generation unit 110 generates, for example, a virtual space image representing the situation in the virtual space 60 viewed from the position of the viewpoint 64. For example, a virtual space image representing the situation in the virtual space 60 viewed from the position of the viewpoint 64 corresponding to the updated virtual vehicle position is generated. Here, for example, a combination of the rear virtual space image, the left-side virtual space image, the right-side virtual space image, and the front virtual space image may be generated as described above.
In the present embodiment, the live-action image acquisition unit 112 acquires, for example, 1 or more live-action images specified based on the position of the viewpoint 64 from the live-action image storage unit 100. For example, 1 or more live-action images specified based on the position of the viewpoint 64 corresponding to the updated virtual vehicle position are acquired from the live-action image storage unit 100. Here, for example, the live-action image acquisition unit 112 may acquire 1 or more pieces of live-action image management data specified based on the virtual vehicle position in the virtual space 60 from the live-action image storage unit 100, as described above.
In the present embodiment, the synthetic image generating unit 114 generates, for example, a composite image of the virtual space image generated by the virtual space image generation unit 110 and the 1 or more live-action images acquired by the live-action image acquisition unit 112. For example, for each of the plurality of cameras, a composite image may be generated from the virtual space image representing the situation of the virtual space 60 viewed from the position of the viewpoint 64 corresponding to that camera and 1 or more live-action images captured by that camera included in the acquired live-action image management data. Here, for example, as described above, a combination of the rear composite image, the left composite image, the right composite image, and the front composite image may be generated. As described above, the synthetic image generating unit 114 may generate an image obtained by interpolating a plurality of live-action images acquired by the live-action image acquisition unit 112. The synthetic image generating unit 114 may then generate a composite image of the virtual space image generated by the virtual space image generation unit 110 and the interpolated image.
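Purely as an illustration, the following Python sketch shows one way the compositing and interpolation could work, assuming the virtual space image carries an alpha channel so that CG pixels are drawn over the live-action frame, and assuming interpolation is a linear blend of two neighbouring live-action frames; both assumptions are made here for the sketch and are not stated in the embodiment.

```python
# A minimal sketch: linear blend of two live-action frames, then "over"
# compositing of the virtual space image using its alpha channel.
import numpy as np

def interpolate_live_action(frame_a, frame_b, t):
    """Linear blend of two live-action frames, t in [0, 1]."""
    return ((1.0 - t) * frame_a + t * frame_b).astype(frame_a.dtype)

def composite(virtual_rgba, live_action_rgb):
    """Draw the virtual space image over the live-action image using its alpha."""
    alpha = virtual_rgba[..., 3:4].astype(np.float32) / 255.0
    out = alpha * virtual_rgba[..., :3] + (1.0 - alpha) * live_action_rgb
    return out.astype(np.uint8)

h, w = 4, 4  # tiny images for illustration
live_a = np.zeros((h, w, 3), np.uint8)
live_b = np.full((h, w, 3), 200, np.uint8)
virtual = np.zeros((h, w, 4), np.uint8)
virtual[1, 1] = (255, 0, 0, 255)  # one opaque CG pixel
frame = composite(virtual, interpolate_live_action(live_a, live_b, 0.5))
```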
In the present embodiment, the synthetic image output unit 116 outputs the synthetic image generated by the synthetic image generation unit 114 to a display panel, for example. Here, for example, the synthetic image output unit 116 may output the rear synthetic image generated by the synthetic image generation unit 114 to the rear display panel P1. At this time, the rear display panel P1 may display the rear composite image. For example, the synthetic image output unit 116 may output the left synthetic image generated by the synthetic image generating unit 114 to the left display panel P2. At this time, the left display panel P2 may display the left composite image. For example, the synthetic image output unit 116 may output the right synthetic image generated by the synthetic image generation unit 114 to the right display panel P3. At this time, the right side display panel P3 may display the right side composite image.
In the present embodiment, the synthetic image transmitting unit 118 transmits the synthetic image generated by the synthetic image generating unit 114 to the server 52, for example. Here, the synthetic image transmitting unit 118 may transmit the front synthetic image generated by the synthetic image generating unit 114 to the server 52. The server 52 that has received the front composite image may output the front composite image to the front display panel P4. At this time, the front display panel P4 may display the front composite image.
In the present embodiment, for example, the virtual vehicle position update unit 106 may change the position or posture of the virtual character object 68 based on the position of the virtual character object 68 arranged in the virtual space 60 and the virtual vehicle position. Specifically, for example, the position or posture of the virtual character object 68 may be controlled so that the character performs an action corresponding to contact with the vehicle. Then, the virtual space image generation unit 110 may generate a virtual space image including an image of the virtual character object 68 whose position or posture has changed in this manner.
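As an illustrative sketch only, the proximity check described above could be implemented along the following lines; the contact radius, the reaction posture, and all names are assumptions introduced here.

```python
# A minimal sketch: if the virtual character object 68 is within a threshold
# distance of the virtual vehicle position, switch its posture so a contact
# reaction can be rendered.
import math

def update_character(character, virtual_vehicle_pos, contact_radius_m=1.5):
    """character: dict with 'position' (x, y) and 'posture' keys."""
    dx = character["position"][0] - virtual_vehicle_pos[0]
    dy = character["position"][1] - virtual_vehicle_pos[1]
    if math.hypot(dx, dy) < contact_radius_m:
        character["posture"] = "contact_reaction"  # e.g. stagger / fall animation
    return character

person_68 = {"position": (10.5, 0.8), "posture": "standing"}
person_68 = update_character(person_68, (10.0, 0.0))
```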
An example of a flow of the process repeated at a predetermined frame rate in the server 50 according to the present embodiment will be described below with reference to a flowchart illustrated in fig. 16.
First, the travel signal receiving unit 104 receives a travel signal corresponding to an operation input to the travel command unit 21 from the server 52 (S101).
Then, the virtual vehicle position update unit 106 specifies the actual vehicle position and the virtual vehicle position in the frame based on the travel signal received in the processing shown in S101 and the actual vehicle position and the virtual vehicle position in the previous frame (S102).
Then, the viewpoint position specification unit 108 specifies the position and direction of the virtual railway vehicle object 62 based on the virtual vehicle position determined by the processing shown in S102 (S103).
Then, the viewpoint position specification unit 108 specifies the position of the viewpoint 64 and the line-of-sight direction 66 based on the position and direction of the virtual railway vehicle object 62 specified by the processing shown in S103 (S104). Here, for example, the position of the viewpoint 64a, the position of the viewpoint 64b, the position of the viewpoint 64c, the position of the viewpoint 64d, the line-of-sight direction 66a, the line-of-sight direction 66b, the line-of-sight direction 66c, and the line-of-sight direction 66d may be specified.
Then, the virtual space image generation unit 110 generates a virtual space image based on the position of the viewpoint 64 and the line-of-sight direction 66 specified by the processing shown in S104 (S105). Here, for example, as described above, a combination of the rear virtual space image, the left virtual space image, the right virtual space image, and the front virtual space image may be generated.
Then, the live-action image acquisition unit 112 acquires 1 or more pieces of live-action image management data including virtual vehicle position data indicating the virtual vehicle position specified by the processing shown in S102 (S106).
Then, the synthetic image generating unit 114 generates a composite image based on the live-action images included in the live-action image management data acquired by the processing shown in S106 and the virtual space image generated by the processing shown in S105 (S107).
Here, for example, as described above, the rear composite image may be generated based on the rear live-action frame image included in the live-action image management data acquired by the processing shown in S106 and the rear virtual space image generated by the processing shown in S105. For example, the left composite image may be generated based on the left live-action frame image included in the live-action image management data acquired by the processing shown in S106 and the left virtual space image generated by the processing shown in S105. For example, the right composite image may be generated based on the right live-action frame image included in the live-action image management data acquired by the processing shown in S106 and the right virtual space image generated by the processing shown in S105. Further, for example, the front composite image may be generated based on the front live-action frame image included in the live-action image management data acquired by the processing shown in S106 and the front virtual space image generated by the processing shown in S105.
Then, the output of the composite image by the synthetic image output unit 116 and the transmission of the composite image by the synthetic image transmitting unit 118 are executed (S108). Here, for example, the synthetic image output unit 116 may output the rear composite image to the rear display panel P1, the left composite image to the left display panel P2, and the right composite image to the right display panel P3. At this time, as described above, the rear display panel P1 displays the rear composite image, the left display panel P2 displays the left composite image, and the right display panel P3 displays the right composite image. The synthetic image transmitting unit 118 may transmit the front composite image to the server 52, and the server 52 may output the front composite image to the front display panel P4. At this time, the front display panel P4 displays the front composite image as described above. The process then returns to S101.
Thus, in the present processing example, the processing shown in S101 to S108 is repeated at the predetermined frame rate. In this way, the synthetic image generating unit 114 may repeat the generation of the synthetic image in accordance with the user's operation at predetermined time intervals.
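By way of illustration only, one pass through S101 to S108 can be sketched in Python as follows; every helper name is a placeholder introduced here, since the actual processing in the server 50 is described only at the level of the flowchart.

```python
# A minimal sketch of one frame of the S101-S108 loop, run at a fixed frame rate.
import time

def run_frame(state, receive_travel_signal, update_positions, specify_viewpoints,
              render_virtual_space, acquire_live_action, composite_images,
              output_images, frame_interval=1.0 / 60.0):
    start = time.monotonic()
    signal = receive_travel_signal()                              # S101
    state = update_positions(state, signal)                       # S102
    viewpoints = specify_viewpoints(state)                        # S103-S104
    virtual_images = render_virtual_space(viewpoints)             # S105
    live_records = acquire_live_action(state)                     # S106
    composites = composite_images(virtual_images, live_records)   # S107
    output_images(composites)                                     # S108
    # Sleep away the remainder of the frame so S101 starts on the next tick.
    time.sleep(max(0.0, frame_interval - (time.monotonic() - start)))
    return state
```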
The present invention is not limited to the above embodiments.
For example, part or all of the functions shown in fig. 15 may be implemented by the server 52.
For example, the railroad simulator system 1 illustrated in fig. 1 may not include the server 52. At this time, the server 50 may output the front composite image to the front display panel P4.
For example, the railroad simulator system 1 illustrated in fig. 1 may not include the server 50. At this time, the server 52 may implement the functions shown in fig. 15. The server 52 may output the rear composite image to the rear display panel P1, the left composite image to the left display panel P2, and the right composite image to the right display panel P3.
The specific character strings and numerical values described above, and the specific character strings and numerical values in the drawings, are examples; the present invention is not limited to these character strings and numerical values.

Claims (7)

1. An image generation system, comprising:
a live-action image storage unit that stores a live-action image captured by a camera provided on an actual railway vehicle moving in an actual space in association with a virtual vehicle position in a virtual space;
a virtual vehicle position updating unit that updates the virtual vehicle position in the virtual space in accordance with a user operation;
a virtual space image generation unit that generates a virtual space image representing a situation in the virtual space viewed from a position of a viewpoint in the virtual space corresponding to the updated virtual vehicle position;
a live-action image acquisition unit that acquires 1 or a plurality of the live-action images specified based on the updated virtual vehicle position from the live-action image storage unit;
and a composite image generating unit that generates a composite image in which the generated virtual space image and the acquired 1 or more live-action images are combined.
2. The image generation system of claim 1,
wherein the live-action image storage unit stores a combination of a plurality of the live-action images captured by a plurality of cameras provided on the actual railway vehicle in association with the virtual vehicle position,
the virtual space image generation unit generates, for each of a plurality of viewpoints in the virtual space that correspond to the updated virtual vehicle position and are each associated with one of the cameras, the virtual space image representing a situation of the virtual space viewed from the position of that viewpoint,
the live-action image acquisition unit acquires 1 or a plurality of the combinations specified based on the updated virtual vehicle position from the live-action image storage unit, and
the composite image generating unit generates, for each of the plurality of cameras, a composite image of the virtual space image representing a situation of the virtual space viewed from the position of the viewpoint corresponding to that camera and 1 or more of the live-action images captured by that camera included in the acquired 1 or a plurality of the combinations.
3. The image generation system according to claim 2, further comprising:
a driver console on which a simulated driving operation is performed by a driver trainee;
a simulated railway vehicle that a train crew trainee can get on and off;
a simulation platform provided adjacent to the simulated railway vehicle;
a display panel disposed in front of the driver console, the display panel displaying a composite image of 1 or more of the live-action images captured by the forward-facing camera positioned at the front of the actual railway vehicle and the virtual space image representing a situation of the virtual space viewed from the position of the viewpoint corresponding to that camera; and
a display panel provided in front of the simulation platform, the display panel displaying a composite image of 1 or more of the live-action images captured by the sideways-facing camera positioned on a side of the actual railway vehicle and the virtual space image representing a situation of the virtual space viewed from the position of the viewpoint corresponding to that camera.
4. The image generation system according to claim 2, further comprising:
a simulated railway vehicle that a train crew trainee can get on and off;
a simulation platform disposed adjacent to a side surface of the simulated railway vehicle;
a display panel provided behind the simulated railway vehicle, the display panel displaying a composite image of 1 or more of the live-action images captured by the rearward-facing camera positioned at the rear of the actual railway vehicle and the virtual space image representing a situation of the virtual space viewed from the position of the viewpoint corresponding to that camera; and
a display panel provided in front of the simulation platform, the display panel displaying a composite image of 1 or more of the live-action images captured by the sideways-facing camera positioned on a side of the actual railway vehicle and the virtual space image representing a situation of the virtual space viewed from the position of the viewpoint corresponding to that camera.
5. The image generation system according to any one of claims 1 to 4, wherein the composite image generating unit repeatedly generates the composite image in accordance with the user's operation at predetermined time intervals.
6. An image generation method, characterized by comprising the steps of:
updating the virtual vehicle position in the virtual space according to the user operation;
generating a virtual space image representing a situation in the virtual space viewed from a position of a viewpoint in the virtual space corresponding to the updated virtual vehicle position;
acquiring 1 or more of the live-action images specified based on the updated virtual vehicle position from a live-action image storage means that stores live-action images captured by a camera provided on an actual railway vehicle moving in an actual space in association with the virtual vehicle position in the virtual space; and
and generating a composite image of the generated virtual space image and the acquired 1 or more live-action images.
7. A program for causing a computer to execute the steps of:
updating the virtual vehicle position in the virtual space according to the user operation;
generating a virtual space image representing a situation in the virtual space viewed from a position of a viewpoint in the virtual space corresponding to the updated virtual vehicle position;
acquiring 1 or more of the live-action images specified based on the updated virtual vehicle position from a live-action image storage means that stores live-action images captured by a camera provided on an actual railway vehicle moving in an actual space in association with the virtual vehicle position in the virtual space; and
generating a composite image of the generated virtual space image and the acquired 1 or more live-action images.
CN201780097875.XA 2017-12-26 2017-12-26 Image generation system, image generation method, and program Pending CN111566706A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/046571 WO2019130413A1 (en) 2017-12-26 2017-12-26 Image generation system, image generation method, and program

Publications (1)

Publication Number Publication Date
CN111566706A true CN111566706A (en) 2020-08-21

Family

ID=67066755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780097875.XA Pending CN111566706A (en) 2017-12-26 2017-12-26 Image generation system, image generation method, and program

Country Status (4)

Country Link
JP (1) JP6717516B2 (en)
CN (1) CN111566706A (en)
TW (1) TW201928871A (en)
WO (1) WO2019130413A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20230067790A (en) * 2021-11-09 2023-05-17 한국전자기술연구원 Electronic device for supporting of content edit and operation method thereof

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003330356A (en) * 2002-05-09 2003-11-19 East Japan Railway Co Bullet train simulator
WO2006064817A1 (en) * 2004-12-14 2006-06-22 Nihon University Operation simulator for railway
CN103080983A (en) * 2010-09-06 2013-05-01 国立大学法人东京大学 Vehicle system
CN104833368A (en) * 2015-05-12 2015-08-12 寅家电子科技(上海)有限公司 Live-action navigation system and method
CN106448336A (en) * 2016-12-27 2017-02-22 郑州爱普锐科技有限公司 Railway locomotive simulative operation training system and method thereof
US20170334356A1 (en) * 2016-05-18 2017-11-23 Fujitsu Ten Limited Image generation apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2837262B2 (en) * 1990-11-30 1998-12-14 三菱プレシジョン株式会社 Train operation training simulator
JP2003288002A (en) * 2002-03-28 2003-10-10 Mitsubishi Electric Corp Simulator for railway vehicle drive training

Also Published As

Publication number Publication date
JP6717516B2 (en) 2020-07-01
JPWO2019130413A1 (en) 2020-07-02
TW201928871A (en) 2019-07-16
WO2019130413A1 (en) 2019-07-04

Similar Documents

Publication Publication Date Title
US11484790B2 (en) Reality vs virtual reality racing
US9159152B1 (en) Mapping between a capture volume and a virtual world in a motion capture simulation environment
EP2629265A1 (en) Method and system for driving simulated virtual environments with real data
EP2175636A1 (en) Method and system for integrating virtual entities within live video
US20220153298A1 (en) Generating Motion Scenarios for Self-Driving Vehicles
CN112150885B (en) Cockpit system based on mixed reality and scene construction method
EP3426537A1 (en) Augmented windows
CN104040593B (en) Method and apparatus for 3D model deformation
CN112102680A (en) Train driving teaching platform and method based on VR
JP2005208857A (en) Method for generating image
CN111915956A (en) Virtual reality car driving teaching system based on 5G
JP2019532540A (en) Method for supporting a driver of a power vehicle when driving the power vehicle, a driver support system, and the power vehicle
JP6717516B2 (en) Image generation system, image generation method and program
JP7045093B2 (en) Image generation system, image generation method and program
JP2003162213A (en) Simulated environment creating device and simulated environment creating method
RU136618U1 (en) SYSTEM OF IMITATION OF THE EXTERNAL VISUAL SITUATION IN ON-BOARD MEANS FOR OBSERVING THE EARTH SURFACE OF THE SPACE SIMULATOR
JP6729952B1 (en) Railway simulator system, display control method and program
CN114830616A (en) Driver assistance system, crowdsourcing module, method and computer program
JP7261121B2 (en) Information terminal device and program
WO2024095356A1 (en) Graphics generation device, graphics generation method, and program
WO2019106863A1 (en) Railroad simulator system
Zeng et al. CAVE Based Visual System Design and Implementation in Marine Engine Room Simulation
JP2020166876A5 (en)
CN113641169A (en) Driving simulation system
Sivaraman Virtual reality based multi-modal teleoperation using mixed autonomy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200821