WO2022107294A1 - Vr image space generation system - Google Patents

VR image space generation system

Info

Publication number
WO2022107294A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
video
user
space
viewing device
Prior art date
Application number
PCT/JP2020/043282
Other languages
French (fr)
Japanese (ja)
Inventor
晃弘 安藤
智絵 水上
Original Assignee
株式会社ハシラス
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社ハシラス filed Critical 株式会社ハシラス
Priority to JP2022563511A priority Critical patent/JPWO2022107294A1/ja
Priority to PCT/JP2020/043282 priority patent/WO2022107294A1/en
Publication of WO2022107294A1 publication Critical patent/WO2022107294A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics

Definitions

  • The present invention relates to a VR video space generation system for constructing a virtual space that a user can use through VR (virtual reality), and in particular to a system in which the user can move and act freely within the generated VR video space.
  • Image and video data (hereinafter "materials") such as planar projection video, 3D video, or dome video, generated in common file formats including PNG and MP4, readable and independent, and easily replaceable without modifying the main body of the VR video space generation system, are projected as-is into the VR video space, giving the user the sensation of viewing and experiencing them inside the virtual space.
  • The present invention thus relates to a VR video space generation system that enables users to effectively experience presentations, amusements, exhibitions, training, and the like with various materials in a VR space.
  • Virtual reality refers to technology for making a user recognize and perceive a computer-generated virtual space as if the user were actually in that space, and to the creation of such environments and the technology for doing so.
  • Various business tools and amusement devices using such technology have been developed, offering users convenience and entertainment, and further applications are expected in the future.
  • Japanese Patent Application Laid-Open No. 2018-147272 discloses a system that supports presentations of buildings and the like so as to demonstrate superiority over competitors.
  • In that system, a 3D CG perspective, consisting of two panoramic images forming a stereogram, is generated from the original drawing data.
  • The 3D CG perspective is converted into panoramic VR images, the converted panoramic VR images are associated with a presentation document concerning the building, and the associated panoramic VR images
  • are displayed on a display device during the presentation in which the document is used.
  • The present invention is a VR video space generation system for constructing a virtual space usable by users through virtual reality, in which, in particular, one or more users can move and act freely within the generated VR video space,
  • and readable, independent planar projection, 3D, or dome videos, generated in common file formats and easily replaceable without modifying the system itself, are projected into that space.
  • The VR video space generation system generates a VR video for constructing a virtual space accessible to one or more users, and comprises:
  • one or more VR viewing devices worn by the users; a video generation means for generating an initial video displayable on the VR viewing devices; a position information acquisition means for acquiring the position information of each VR viewing device; a VR video generation means for generating a VR video by compositing avatar videos onto the initial video generated by the video generation means, based on each piece of position information acquired by the position information acquisition means; and a video output means for outputting the VR video generated by the VR video generation means to the VR viewing devices.
  • The VR video space generation system further comprises an area defining means for numerically defining, as XYZ coordinates, the area within which users can move, and a position defining means for numerically defining, as XYZ coordinates, each piece of position information acquired by the position information acquisition means.
  • It also comprises an associating means for introducing (applying) each piece of position information defined by the position defining means as coordinate values into the area defined by the area defining means,
  • and for associating that area and each piece of position information with the initial video generated by the video generation means.
  • The VR video space generation system according to the present invention may also be a VR video space generation system that generates a VR video for constructing a virtual space accessible to one or more users, comprising:
  • one or more VR viewing devices worn by the users; a video generation means for generating an initial video displayable on the VR viewing devices; a VR video generation means for generating a VR video by compositing, onto the initial video generated by the video generation means, the avatar video of each user wearing a VR viewing device; and a video output means for outputting the VR video generated by the VR video generation means to the VR viewing devices.
  • This VR video space generation system further comprises an area defining means for numerically defining, as XYZ coordinates, the area within which users can move, and a position defining means for numerically defining, as XYZ coordinates, the position information of the users wearing the VR viewing devices.
  • It also comprises an associating means for introducing (applying) each piece of position information defined by the position defining means as coordinate values into the area defined by the area defining means,
  • and for associating that area and each piece of position information with the initial video generated by the video generation means.
  • The position defining means acquires information on the position of a specific part of the body of the user wearing the VR viewing device and calculates information on the positions of the other parts of the body; the VR video generation means draws based on this information and generates the VR video displayed on the VR viewing device.
  • The initial video consists of video selected from planar projection video, 3D video, or dome video.
  • The initial video is created in a common format and includes one or more of: a planar projection video containing readable, independent illustrations, 2D images, and text information that switch according to the user's instructions and can easily be replaced without modifying the rest of the system; a presentation video consisting of 2D moving images that operate according to the user's instructions; a 3D video including CG stereoscopic models that operate according to the user's instructions; and an all-sky video or a part thereof. Further, the initial video consists of independent video data readable by the video generation means.
  • The VR viewing device includes a sensor that detects the positions and movements of the fingers of both of the wearer's hands; the sensor detects the user's finger movements and acquires the movement information as tracking information.
  • The VR video space generation system holds multiple pieces of action information consisting of finger movement information, each associated with a specific change process of the initial video, and the VR
  • video space generation system performs the associated change process of the initial video when the tracking information matches any of the action information.
  • The action information consists of instruction information generated by the user operating the fingers of both hands simultaneously.
  • The initial video includes images and moving images such as presentations, and the action information is associated with forward and backward processing of the initial video.
  • Since the present invention is configured as described in detail above, it has the following effects:
  • 1. Since the position information of each VR viewing device is acquired and avatar videos are composited onto the initial video to correspond to that position information, the current position of each user can be reflected and displayed in the VR video space, and multiple people can experience and share the VR video space at the same time. Moreover, by performing calibration processing, even users in remote locations can experience and share the VR video space in the same way.
  • 2. Since the area defining means and the position defining means define the area of the VR video space and the users' positions within it, the actually movable area can be matched with the area in the virtual space.
  • 3. Since the VR video space generation system can be configured without a position information acquisition means, even users whose real current position is not acquired (or not continuously acquired) can participate in the VR space using virtual position information defined within the virtual space.
  • 4. Since the position defining means can numerically define each user's position information as XYZ coordinates without a position information acquisition means, multiple people can experience and share the VR video space at the same time even when users are in remote locations, in Japan or abroad.
  • 5. Since the position defining means acquires information on the position of a specific part of the user's body and calculates the positions of the other parts, it is possible to draw and display in detail, based on this accurate position information, which video should be shown on each VR viewing device.
  • 6. Since the initial video consists of planar projection video, 3D video, dome video, and the like, materials of all formats and types can be experienced in the VR space.
  • 7. Since the configuration includes one or more of: planar projection video containing illustrations, 2D images, and text information that switch according to the user's instructions; 2D moving images that operate according to the user's instructions; 3D video including CG stereoscopic models that operate according to the user's instructions; and all-sky video or parts thereof, multiple users can share presentations, amusements, exhibitions, and training involving various materials within the VR video space.
  • 8. Since the video that switches according to the user's instructions is displayed at a fixed position on the VR viewing device, these materials can be displayed, as images and videos for presentations, amusements, exhibitions, and training, in a space shared by multiple users.
  • 9. Since the initial video conforms to common file formats and is generated from readable, independent materials that can easily be replaced without modifying the system body, a single VR video space generation system can display a variety of VR video spaces and let users experience them.
  • 10. Since the sensor acquires the user's finger movements as tracking information and compares them with preset action information, the user can issue processing instructions, such as moving or changing the initial video, simply by moving the fingers.
  • 11. Since the action information consists of instruction information generated by moving the fingers of both hands simultaneously, the system reacts only to two-handed instructions, reducing the risk of issuing erroneous processing instructions.
  • 12. Since the action information includes instruction information corresponding to forward and backward processing of the initial video, presentations, amusements, exhibitions, and training involving sequential changes of images in the VR video space can be advanced using only the movements of the user's hands and fingers.
  • FIG. 1a is a schematic diagram of a VR video space generation system according to the present invention
  • FIG. 1b is a schematic diagram of a VR video space generation system provided with an external computer
  • FIG. 2 is a schematic view showing a display example of a VR image
  • FIG. 3 is a diagram showing area / position information
  • FIG. 4a is a schematic diagram of a VR video space generation system provided to a remote user
  • FIG. 4b is a schematic diagram of a VR video space generation system provided to a remote user provided with an external computer
  • FIG. 5 is a schematic diagram of a VR video space generation system that performs switching processing of video materials and scenes.
  • The VR video space generation system 1 comprises a VR viewing device 100, a video generation means 200, a position information acquisition means 300, a VR video generation means 400, and a video output means 500, and is a system for constructing a virtual space using virtual reality (VR) technology. It generates a VR video for constructing a virtual space accessible to one or more users, lets one or more users move and act freely within the generated VR video space, and gives them the sensation of being inside the virtual space.
  • The VR video space generation system according to the present invention makes it possible to realize user communication, presentations, attraction experiences, exhibitions, training, and the like in a virtual space.
  • The "VR" video displayed in the present invention is a concept that also includes AR (Augmented Reality), MR (Mixed Reality), and SR (Substitutional Reality).
  • The VR viewing device 100 consists of one or more devices worn by users, and is a device for playing and displaying the VR video 20 generated based on the initial video 10.
  • The device of this embodiment is mainly a goggle-type projection device, but it is not limited to this; any wearable projection device that gives a sense of immersion in the video, for example a retinal projection type that forms and projects the image directly onto the retina, can be selected and used as appropriate.
  • The VR viewing device 100 is equipped with an arithmetic unit (not shown) and a storage means 610 consisting of an arbitrary storage medium, and the storage means 610 stores the initial video 10 and the VR video 20 generated from it.
  • The VR viewing device 100 is a projection device for viewing VR video; in the present embodiment it is mainly configured as a goggle-type projection device, but the present invention is not limited to this, and projection devices of other structures can be used.
  • The video displayed on the VR viewing device 100 mainly consists of video viewable in whole or in part through 360 degrees, and can be configured as (1) a planar projection version (text, illustrations, and/or 2D video), (2) 3D video, or (3) a three-dimensional spherical dome video.
  • The VR viewing device 100 can be connected wirelessly or by wire to a computer 600 or the cloud.
  • The computer 600 or the cloud is equipped with an arithmetic unit (not shown) and a storage means 610 consisting of an arbitrary storage medium; the storage means 610 stores the initial video 10 and the VR
  • video 20 generated from it, and is also used for acquiring, computing, and managing the position information and the like of the VR viewing devices 100.
  • Alternatively, the initial video 10 and the VR video 20 generated from it can be managed, computed, and held by the VR viewing device 100 itself.
  • The video generation means 200 is a device that generates the initial video 10 displayable on the VR viewing device 100.
  • The initial video 10 consists of materials (images, moving images, etc.) such as planar projection video, 3D video, or dome video; it can be, for example, a 360-degree all-sky video or a flat video, can be either 3dof or 6dof, and can be video with disparity.
  • The video generation means 200 converts these basic video data into a flat output video (a 360-degree video is also a kind of flat output video), a format displayable on the VR viewing device 100, thereby generating the initial video 10.
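The conversion step above can be pictured with a minimal sketch. Everything below is illustrative: the `Material` wrapper, the function names, and the supported-extension set are assumptions for this sketch, not an API disclosed by the patent; the description only requires that materials be independent files in common formats (PNG, MP4, etc.) carrying a projection type (planar, 3D, or dome).

```python
from dataclasses import dataclass
from pathlib import Path

# Projection types named in the description: planar projection, 3D, dome/all-sky.
PROJECTIONS = {"planar", "3d", "dome"}

@dataclass
class Material:
    path: Path          # an independent PNG, MP4, etc. in a common file format
    projection: str     # how the material is mapped into the VR space

def load_material(path: str, projection: str = "planar") -> Material:
    """Read an independent material file; swapping materials is just
    swapping files on disk, with no change to the system body."""
    p = Path(path)
    if p.suffix.lower() not in {".png", ".mp4"}:
        raise ValueError(f"unsupported material format: {p.suffix}")
    if projection not in PROJECTIONS:
        raise ValueError(f"unknown projection: {projection}")
    return Material(p, projection)

# The initial video 10 is assembled from such materials and rendered to a
# flat output video (a 360-degree video is also a kind of flat output video).
slides = load_material("slides.png")        # planar presentation page
dome = load_material("tour.mp4", "dome")    # all-sky video material
```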
  • In the present embodiment, the video generation means 200 is configured such that the arithmetic unit of the VR viewing device 100 performs arithmetic processing based on various data stored in the storage medium 610 built into the VR viewing device 100, thereby generating the initial video 10.
  • However, the configuration is not limited to this; for example, a computer 600 or a cloud may be provided externally, and an arithmetic unit may generate the initial video 10 by arithmetic processing based on various data stored in the storage medium 610 on that computer 600 or cloud.
  • In the present embodiment, the video generation means 200 includes software that the arithmetic unit reads from the storage medium 610 to process the video, but it is not limited to this.
  • The position information acquisition means 300 is a means for acquiring each piece of position information P of the VR viewing devices 100.
  • Each user of the VR video space generation system 1 according to the present invention wears a VR viewing device 100, and the current position of a specific part of each user's body is grasped by each VR viewing device or by an external sensor.
  • The positions of the other parts of the body are calculated based on the position information P, the VR video generation means draws based on this information, and each user's avatar V can be displayed on the VR viewing devices 100.
  • In the present embodiment, the position information acquisition means 300 is built into each VR viewing device 100 and the position information P is calculated by the camera of the VR viewing device 100, but the present invention is not limited to this; it is also possible to install external sensors and use other technologies such as tracking by laser irradiation. Further, in the present embodiment, the position information acquisition means 300 includes software that the arithmetic unit built into the VR viewing device 100 reads from the storage medium 610 and processes, but it is not limited to this.
  • The position information acquisition means 300 can also include a configuration for acquiring, computing, and managing the position information of users in remote locations.
  • When the position information of a remote user is acquired, the position information acquisition means 300 performs calibration processing and assigns the position information to the VR space in which the other users exist, performing video processing so that the remote user appears to exist in that space.
  • Whenever the remote user moves, the calibration processing is performed in the same way, and video processing is performed so that the avatar moves correspondingly in the VR space.
  • As a result, one or more users can simultaneously experience and share the VR video space via the avatar videos V, even when the users are in mutually remote locations.
  • It is also possible to incorporate configurations in which the arithmetic unit performs video processing that enhances the user's immersion in VR, such as shifting the center of each user's calibrated world (for example, if a user moves a few centimeters to the right or left in the real world, the avatar moves a few centimeters to the right or left of the center of the world in the VR space), or making another user's avatar, scenery, or an object transparent (faint) when the user overlaps with it.
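As a rough sketch of the calibration idea, assuming a purely translational calibration (rotation omitted) and function names invented for this illustration:

```python
def calibrate(local_origin, shared_anchor):
    """Translational offset mapping a user's local tracking origin
    onto an anchor point in the shared VR world."""
    return tuple(s - l for l, s in zip(local_origin, shared_anchor))

def to_shared(p_local, offset):
    # A few centimeters to the right in the real world becomes the same
    # motion relative to the center of the shared world in VR space.
    return tuple(p + o for p, o in zip(p_local, offset))

offset_a = calibrate((0.0, 0.0, 0.0), (2.0, 0.0, 1.0))  # remote user A
print(to_shared((0.05, 0.0, 0.0), offset_a))            # (2.05, 0.0, 1.0)
```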
  • As a result, each piece of position information P of the VR viewing devices 100 can be numerically defined as coordinates on XYZ axes.
  • The VR video generation means 400 is a means for generating the VR video 20 by compositing the avatar videos V onto the initial video 10 generated by the video generation means 200.
  • The VR video generation means 400 specifies the position and orientation (or posture) at which each avatar is to be placed in the initial video 10, based on the position information P of each VR viewing device 100 acquired by the position information acquisition means 300, and composites each user's avatar video V onto the initial video 10 to generate the VR video 20. As a result, a VR video 20 reflecting the real position of each user wearing a VR viewing device 100 is generated, and as shown in FIG. 2, each user appears to participate in the virtual space formed by the VR video 20.
  • Each user can view the video of the virtual space, including the avatars of the other users, as seen from his or her own position in it, obtaining the sensation of having entered the virtual space, which enhances immersion in the VR video 20.
  • In the present embodiment, the VR video generation means 400 includes software that the arithmetic unit built into the VR viewing device 100 reads from the storage medium 610 and processes, but it is not limited to this.
  • For example, a computer 600 or a cloud may be provided externally, and the VR video generation means 400 installed on the computer 600 or the cloud may perform the arithmetic processing to generate the VR video 20.
  • Alternatively, the computer 600 or the cloud may acquire and manage the position information P while the VR video 20 is generated by the VR viewing device 100 that obtained that information; any other configuration for generating the VR video 20 can also be selected.
  • The video output means 500 is a means for outputting the VR video 20 generated by the VR video generation means 400, in which each user's avatar video V is composited onto the initial video 10, to each VR viewing device 100. Since the positions and orientations of the VR viewing devices 100 differ, the video output to each VR viewing device 100 differs as well (see FIG. 2).
  • In the present embodiment, the video output means 500 is configured by software that the arithmetic unit built into the VR viewing device 100 reads from the storage medium 610 and processes, but it is not limited to this configuration; for example, an external computer 600 may be provided, and the video output means 500 installed on it may perform the arithmetic processing and output the VR video 20 to the VR viewing devices 100 by wire or wirelessly. The processing may also be performed on the cloud.
  • In the present embodiment, the VR video space generation system 1 is configured such that the arithmetic unit analyzes the video of the space acquired by the camera mounted on the VR viewing device 100 worn by the user, associates it with a previously prepared video of the VR space, and projects the spatial video onto the VR viewing device 100. Further, as described above, the camera mounted on each VR viewing device 100 acquires depth information, a mesh model that virtually exists according to the viewpoint position is created, the position information P is acquired and computed, and a flat output video (a 360-degree video is also a kind of flat output video) as seen from that angle is generated and displayed.
  • The virtual space each user sees through the VR viewing device 100 thus corresponds to the scenery seen in the real space, including the other users, and becomes the VR video 20 combining the video of the virtual space with the avatar videos V.
  • In the present embodiment, the VR video space generation system 1 includes an area defining means 310, a position defining means 320, and an associating means 410 within the VR viewing device 100.
  • The area defining means 310 is a means for defining the user-movable area F in the virtual space generated by the VR video space generation system 1, for example as shown in FIG. 3.
  • The area F coincides with a certain area provided in the real space,
  • namely the real space in which the users move about or sit as they wish.
  • The area defining means 310 defines the virtual space matching this real space as the area F.
  • The area F is numerically defined as, for example, coordinates on XYZ axes, but it is not limited to this, and the VR space may be grasped and managed using other area management means.
  • In the present embodiment, the area defining means 310 consists of software that the arithmetic unit of the VR viewing device 100 reads from the storage medium 610 and processes, in particular a software module or the like incorporated in the position information acquisition means 300.
  • However, the present invention is not limited to this, and it may be processed by independent software, by software embedded in separately provided hardware, or on the cloud.
  • The position defining means 320 is a means for numerically defining each piece of position information P of the VR viewing devices 100 as coordinates on XYZ axes. When the X, Y, and/or Z value of a user's position information exceeds its maximum value, exception handling such as displaying a warning message on the VR viewing device 100 is conceivable.
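A minimal sketch of such a bounds check follows; the `AreaF` container, the bound values, and the `show_warning` call are hypothetical, standing in for whatever exception handling the implementation chooses:

```python
from dataclasses import dataclass

@dataclass
class AreaF:
    """Movable area F, numerically defined as XYZ bounds (e.g., meters)."""
    x: tuple[float, float]
    y: tuple[float, float]
    z: tuple[float, float]

    def contains(self, p: tuple[float, float, float]) -> bool:
        return all(lo <= v <= hi
                   for v, (lo, hi) in zip(p, (self.x, self.y, self.z)))

area = AreaF(x=(0.0, 5.0), y=(0.0, 2.5), z=(0.0, 5.0))

def define_position(p, viewer):
    """Position defining means 320 (sketch): record P as XYZ coordinates,
    with the exception handling mentioned above when P leaves area F."""
    if not area.contains(p):
        viewer.show_warning("You are leaving the play area")  # hypothetical call
    return p
```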
  • In the present embodiment, the position defining means 320 consists of software that the arithmetic unit of the VR viewing device 100 reads from the storage medium 610 and processes, in particular a software module or the like incorporated in the position information acquisition means 300.
  • However, the present invention is not limited to this; independent software, software embedded in separately provided hardware, or processing on the cloud may of course be used.
  • The associating means 410 is a means for introducing (applying) one or more pieces of position information P into the area F and then associating them with the initial video 10. Specifically, the associating means 410 introduces the position information P defined by the position defining means 320 as coordinate values into the area F defined by the area defining means 310, and then associates the area F and each piece of position information P with the initial video 10 generated by the video generation means 200. In this embodiment, for example, a configuration using a technique related to inverse kinematics is possible; the details of this association are described later.
  • As a result, each piece of position information P is defined so as to match the user's real position within the area F, which is itself defined to match the real space. By applying this information to the initial video 10, each user's avatar video V is displayed at the position in the initial video 10 where that user actually is, as sketched below.
  • In the present embodiment, the associating means 410 consists of software that the arithmetic unit of the VR viewing device 100 reads from the storage medium 610 and processes, in particular a software module or the like incorporated in the VR video generation means 400.
  • However, the present invention is not limited to this; independent software, software embedded in separately provided hardware, or processing on the cloud may of course be used.
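A minimal sketch of the associating step, under the assumption that the scene rendered over the initial video 10 can be represented as a plain dictionary (all names here are illustrative):

```python
def contains(area, p):
    """area: ((xmin, xmax), (ymin, ymax), (zmin, zmax)); p: (x, y, z)."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(p, area))

def associate(area, positions, scene):
    """Associating means 410 (sketch): introduce each position P into
    area F as coordinate values, then bind the area and the positions to
    the scene rendered over the initial video 10."""
    scene["area"] = area
    scene["avatars"] = {user: p for user, p in positions.items()
                        if contains(area, p)}
    return scene

area_f = ((0.0, 5.0), (0.0, 2.5), (0.0, 5.0))           # defined area F
scene = associate(area_f, {"userA": (1.2, 1.6, 3.0)},   # position P of userA
                  {"initial_video": "slides.png"})
```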
  • Alternatively, the VR video space generation system 1 may include the area defining means 310, the position defining means 320, and the associating means 410 in the computer 600.
  • The VR video space generation system 2 can also be configured without the position information acquisition means 300, as shown in FIGS. 4a and 4b. That is, the real current position of each VR viewing device 100 worn by each user is not acquired; instead, the position defining means 320 computes virtual position information P for each user in the virtual space and numerically defines it as coordinates on XYZ axes.
  • The VR video generation means 400 uses this position information P to composite the avatar video V of each user wearing a VR viewing device 100 onto the initial video 10 generated by the video generation means 200, thereby generating the VR video 20. At this time, it is conceivable to set arbitrary positions so that the avatars of remote users who are not physically present do not overlap one another.
  • In this case, the position defining means 320 arbitrarily sets the position information P, on the assumption that the user is at an arbitrary place in the real space, while communicating with the other VR viewing devices 100, and numerically defines it as coordinates on XYZ axes.
  • Also in this case, the arithmetic unit may perform the processing of shifting the center of each user's calibrated world and of making another user's avatar, scenery, or object transparent (faint) when the user overlaps with it.
  • The calibration processing may be performed by the position defining means 320.
  • As a result, each piece of position information P of the VR viewing devices 100 can be numerically defined as coordinates on XYZ axes.
  • As shown in FIG. 4b, it is also possible to provide an external computer 600 or use the cloud to centrally manage the various types of information.
  • In this case, the position defining means 320 on the computer 600 or the cloud communicates with each VR viewing device 100, arbitrarily sets the position information P on the assumption that the user is at an arbitrary place in the real space, and numerically defines it as coordinates on XYZ axes.
  • The position defining means 320 is configured to acquire information on the position of a specific part of the body of the user wearing the VR viewing device 100 and to calculate the positions of the other parts of the body.
  • The associating means 410 associates the area F and the position information P, including this information, with the initial video 10 generated by the video generation means 200.
  • The VR video generation means 400 draws based on this information and generates the VR video 20 displayed on the VR viewing device 100. With this configuration, the VR video 20 displayed on the VR viewing device 100 accurately follows the user's movements, and the user can obtain a sense of immersion in the VR video 20 as if the virtual space were real.
  • A technique related to inverse kinematics can be used to generate the avatar video V.
  • Inverse kinematics is a technique for calculating the positions and rotation angles of higher-level objects by specifying the target position of a lower-level object in a hierarchically structured object, and is used here to compute the motion of the avatar video V.
  • That is, the VR viewing device 100 or the computer 600 incorporates an arithmetic unit, and the position defining means 320 acquires information on the position of a specific part of the user's body.
  • The arithmetic unit processes this information using the inverse kinematics technique to identify the positions of the other parts of the body, the associating means 410 performs the association processing, and the VR video 20 to be displayed on the VR viewing device 100 is generated.
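The patent does not spell out an IK algorithm, so the following is only a toy illustration of the principle: from one tracked point (the hand) and one assumed fixed point (the shoulder), an untracked joint (the elbow) is recovered analytically. A planar two-bone solver is shown for brevity; avatar rigs extend this to 3D limb chains.

```python
import math

def two_bone_ik(shoulder, hand, upper=0.30, fore=0.28):
    """Analytic two-bone IK sketch: given a tracked hand target and a
    fixed shoulder, recover the elbow position of the avatar's arm."""
    dx, dy = hand[0] - shoulder[0], hand[1] - shoulder[1]
    d = math.hypot(dx, dy)
    # Clamp the target into the arm's reachable range.
    d = min(max(d, abs(upper - fore) + 1e-6), upper + fore - 1e-6)
    # Law of cosines gives the interior angle at the shoulder.
    a = math.acos((upper**2 + d**2 - fore**2) / (2 * upper * d))
    base = math.atan2(dy, dx)
    return (shoulder[0] + upper * math.cos(base - a),
            shoulder[1] + upper * math.sin(base - a))

print(two_bone_ik((0.0, 0.0), (0.4, -0.2)))  # estimated elbow position
```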
  • For example, a user at the presentation venue watches the presentation through the VR viewing device 100 in the virtual space superimposed on the real space, and a user at a remote location
  • watches the same presentation through the VR viewing device 100 in the same virtual space superimposed on the real space. Since the remote user's avatar video V is displayed in the VR video 20, users at the presentation venue can recognize that the remote user is in the same place.
  • Conversely, the remote user can obtain the experience of being at the presentation venue through the VR viewing device 100.
  • The VR video space generation systems 1 and 2 support 6dof or 3dof.
  • With 3dof, the VR viewing device 100 responds to three movements, rotation about the X, Y, and Z axes, sensing the rotation and tilt of the head wearing the VR viewing device 100.
  • With 6dof, the configuration responds to six movements, adding translational movement along the X, Y, and Z axes to the three movements of 3dof.
  • This allows the user to enjoy 3dof video while retaining the sense of immersion of 6dof; that is, the illusion of a 6dof experience can be used to show the large stock of existing 3dof video while maintaining 6dof immersion.
  • Specifically, video equivalent to what is visible in the captured 3dof video is generated in CG in 6dof; the user is then guided, within the 6dof experience, to the position where the 3dof video was shot, and the 6dof video is replaced with the 3dof video at a position where immersion can be maintained. By showing these continuously, 3dof video can be presented while maintaining the immersion of 6dof; a sketch of this handoff follows.
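One way to picture the handoff, assuming a simple distance test around the point where the 3dof footage was captured (the threshold and names are invented for this sketch):

```python
import math

def maybe_switch_to_3dof(head_pos, capture_pos, threshold=0.10):
    """6dof-to-3dof handoff sketch: once the user's head reaches the spot
    where the 3dof footage was shot, swap the CG 6dof scene for the
    captured 3dof video so the sense of immersion carries over."""
    if math.dist(head_pos, capture_pos) < threshold:
        return "3dof_video"   # show the captured footage
    return "6dof_cg"          # keep rendering the CG scene

print(maybe_switch_to_3dof((1.02, 1.60, 0.95), (1.00, 1.60, 1.00)))
```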
  • As described above, the VR video space generation systems 1 and 2 give each user the sensation of being in the virtual space, allow the users to communicate with one another there, can gather multiple users in remote locations into a single VR space, and make it possible to experience presentations, amusements, exhibitions, training, and the like in that VR space.
  • The VR video space generation system 1 can also operate purely locally, without using an external network such as the Internet; that is, presentations and the like can be given with the VR video space generation system 1 even where no network environment connectable to the outside exists.
  • The initial video 10 is generated in common file formats including PNG and MP4, can easily be replaced without modifying the main body of the VR video space generation system, and consists of readable, independent planar projection video, 3D video, dome video, and the like. With this configuration, users of the VR video space generation systems 1 and 2 can experience materials of all formats and types in the VR space.
  • The initial video 10 may include a planar projection video containing readable, independent illustrations, 2D images, and text information that switch according to the user's instructions, and a planar projection video consisting of 2D moving images that operate according to the user's instructions, all generated in common file formats and easily replaceable without modifying the main body of the VR video space generation system. It may also include CG stereoscopic models that operate according to the user's instructions, or all-sky video, or a part of such video, that operates according to the user's instructions.
  • Each of these videos is generated in a common file format such as PNG or MP4, and is an independent video that can be fixedly embedded in the VR space (in the initial video 10), always displayed in the direction the user's face is turned, or displayed by any other selectable method. Moreover, since it can be embedded anywhere, it can easily be replaced without changing the rest of the system.
  • As a result, users can view the planar projection videos, 3D videos, and spherical videos embedded in the virtual space, so presentations, amusements, exhibitions, and training, for example introducing the structure and specifications of automobiles or real estate, can easily be experienced and shared by users. Multiple users can likewise experience and share various attractions.
  • The initial video 10 can consist of independent video data readable by the video generation means 200.
  • As a result, the video generation means 200 can read an arbitrary initial video 10 from a variety of existing materials, including image and video data generated in common file formats, and a single VR video space generation system can display any desired VR video space and let users experience it.
  • For example, presentation materials created in another format can be inserted at a fixed position and displayed in the VR space in which the users participate (for example, the VR space is configured as an auditorium, and the
  • presentation materials are fixed and embedded at the back of the lecture hall).
  • The VR video space generation systems 1 and 2 can also include in the VR space video that is always displayed in front of the user, such as
  • video of an instructor embedded in the virtual space. This makes it possible to provide VR video space generation systems 1 and 2 that can always display projected video at a specific position (for example, right in front of the eyes) regardless of the user's posture, which is highly convenient for the user.
  • The VR viewing device 100 includes a sensor 110 in this embodiment.
  • The sensor 110 is a sensor that detects the positions and movements of the fingers of both hands of the user wearing the VR viewing device 100.
  • The sensor 110 detects the movements of the hands and fingers of the user wearing the VR viewing device 100 and acquires the information on those movements as tracking information T.
  • The tracking information T is information on the movements of the fingers of both of the user's hands, consisting of information on a series of hand and finger movements within a certain period of time.
  • The sensor 110 may also be installed in a device other than the VR viewing device 100 to detect the positions and movements of the fingers of both hands of the user wearing the VR viewing device 100.
  • Specifically, the path of a hand or finger from one point to another is traced for each of the left and right hands and acquired and stored as tracking information T.
  • The VR video space generation systems 1 and 2 hold action information A.
  • The action information A is information tracing a series of finger movements within a certain period of time; in this embodiment, multiple patterns of action information A are retained.
  • The action information A consists of data stored in the storage medium 610 of the VR viewing device 100 or the computer 600.
  • Each of the multiple pieces of action information A is associated with a specific change process of the initial video 10, and when the acquired tracking information T matches any of the stored action information A, the associated change process of the initial video 10 is performed.
  • For example, the initial video 10 includes a presentation image and the like,
  • and the action information A is associated with forward and backward processing of the initial video.
  • The action of moving a finger from right to left is registered and held as action information A,
  • and this action information A changes the video so as to sequentially switch the presentation images displayed in a part of the initial video 10.
  • When the user moves a finger from right to left, the sensor 110 detects the position and movement of the finger and acquires the information on that movement as tracking information T.
  • The VR video space generation systems 1 and 2 compare the tracking information T tracing the finger movement with the action information A, and when they are determined to match, perform the video processing that sequentially switches the associated presentation images.
  • It is also possible to register action information A such as the movement of forming a circle by touching the thumb to the index or middle finger.
  • In this embodiment, the action information A includes instruction information generated by operating the fingers of both hands simultaneously.
  • The initial video 10 includes a presentation video consisting of multiple images, and the action information A is associated with forward and backward processing of the presentation video.
  • For example, the action of touching the thumb and index finger (or the thumb and middle finger) of both hands is registered as action information A, and this action is made to correspond to the forward (or backward) processing of the presentation video.
  • When the user performs this action, the VR video space generation systems 1 and 2 detect the tracking information T tracing the finger movement,
  • compare it with the action information A, confirm that they match, and perform the forward processing of the presentation video associated with that action information A; a sketch of this matching loop follows.
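A compact sketch of the matching loop: the gesture labels, the classified (left, right) pair, and the `Deck` stand-in for the presentation are all assumptions of this sketch; the patent specifies only that tracking information T is compared with stored action information A and that the associated change process fires on a match.

```python
from collections.abc import Callable

class Deck:
    """Stand-in for the presentation images inside the initial video 10."""
    def __init__(self, pages: int):
        self.page, self.pages = 0, pages
    def forward(self):
        self.page = min(self.page + 1, self.pages - 1)
    def backward(self):
        self.page = max(self.page - 1, 0)

deck = Deck(pages=12)

# Action information A: each two-handed pattern is bound to a change
# process of the initial video 10; requiring both hands cuts false triggers.
ACTIONS: dict[tuple[str, str], Callable[[], None]] = {
    ("pinch_index", "pinch_index"): deck.forward,    # thumb+index, both hands
    ("pinch_middle", "pinch_middle"): deck.backward, # thumb+middle, both hands
}

def on_tracking(t: tuple[str, str]) -> None:
    """Tracking information T arrives as a classified (left, right) pair;
    the associated change process fires only on an exact match."""
    handler = ACTIONS.get(t)
    if handler:
        handler()

on_tracking(("pinch_index", "pinch_index"))
print(deck.page)  # 1
```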
  • As described above, users can feel as if they were in the virtual space, communicate with one another there, and multiple users in remote locations can gather in one VR space.
  • By projecting various materials, including independent, readable images and videos generated in common file formats and replaceable without modifying the system, into the VR video space as-is,
  • users can view them in the virtual space.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

[Problem] To provide a VR image space generation system which makes it possible to give users a feeling of being in a virtual space and for the users to communicate with each other in the virtual space, collect a plurality of users in remote areas in one VR space, give the users the feeling of watching and experiencing various materials, including independent readable image/video data, generated in a general file format in the virtual space by projecting the materials into the VR image space as is, or switch between the pages of these materials, the materials themselves, and the scenes by operating both hands and fingers at the same time. [Solution] Provided is a VR image space generation system which generates a VR image for constructing a virtual space accessible by one or a plurality of users, comprising: a VR viewing device worn by the user; an image generation means for generating an initial image that can be displayed on the VR viewing device; a position information acquisition means for acquiring position information on each VR viewing device; a VR image generation means for generating a VR image by synthesizing an avatar image with an initial image generated by the image generation means, on the basis of each piece of position information acquired by the position information acquisition means; and an image output means for outputting a VR image to the VR viewing device.

Description

VR video space generation system
The present invention relates to a VR video space generation system for constructing a virtual space that a user can use through VR (virtual reality). In particular, the user can move and act freely within the generated VR video space; image and video data (hereinafter "materials") such as planar projection video, 3D video, or dome video, generated in common file formats including PNG and MP4, readable and independent, and easily replaceable without modifying the main body of the VR video space generation system, are projected as-is into the VR video space, giving the user the sensation of viewing and experiencing them in the virtual space. Furthermore, switching of the pages of these materials, of the materials themselves, and of scenes, that is, processing such as playing, stopping, frame-advancing, and switching of the above materials including image and video data, is performed by simultaneous operation of the fingers of both hands or by controller operation, enabling users to effectively experience presentations, amusements, exhibitions, training, and the like with various materials in the VR space.
Conventionally, many technologies related to virtual reality (VR) that let users obtain various simulated experiences by viewing video have been developed and are used in many situations. Virtual reality technology has advanced and is applied in all fields, and many VR technologies have been developed and used that make it possible to feel realistically present in a virtual space while remaining indoors.
Virtual reality refers to technology for making a user recognize and perceive a computer-generated virtual space as if the user were actually in that space, and to the creation of such environments and the technology for doing so. Various business tools and amusement devices using such technology have been developed, offering users convenience and entertainment, with further applications expected in the future.
As a system for generating a space using virtual reality technology, there is, for example, Japanese Patent Application Laid-Open No. 2018-147272. It discloses, as technology for a system that supports building presentations so as to demonstrate superiority over competitors, a system in which a 3D CG perspective consisting of two panoramic images forming a stereogram is generated from original drawing data, the 3D CG perspective is converted into panoramic VR images, the converted panoramic VR images are associated with a presentation document concerning the building, and the associated panoramic VR images are displayed on a display device during the presentation in which the document is used.
This technology may indeed make presentations in VR space possible, but it has fatal shortcomings: viewing is limited to 3dof, so an immersive 6dof VR experience is not possible, and when multiple users join a presentation, complex configurations such as mutual communication cannot be adopted. There is also the problem that multiple remote users cannot be gathered into a single VR space. Furthermore, to generate a VR space other than a presentation of buildings and the like, everything must be rebuilt from scratch with dedicated software, so readable, independent materials in common formats cannot be used as-is, and within the VR space the VR participant cannot intuitively switch between material pages, the materials themselves, or scenes.
Accordingly, there has been a demand for a VR video space generation system that allows users to move and act freely within the VR video space, giving them the sensation of being in the virtual space; that lets multiple users communicate with one another in the virtual space and gathers multiple remote users into a single VR space; that can effectively project readable, independent materials in common file formats into the VR space; and that lets VR participants intuitively switch between material pages, the materials themselves, and scenes within the VR space.
Japanese Unexamined Patent Publication No. 2018-147272
An object of the present invention is to provide a VR video space generation system for constructing a virtual space usable by users through virtual reality, in which, in particular, one or more users can move and act freely within the generated VR video space; materials consisting of images and moving images such as readable, independent planar projection video, 3D video, or dome video, generated in common file formats and easily replaceable without modifying the system itself, are projected as-is into the VR video space, giving users the sensation of actually viewing and experiencing them in the virtual space; users can communicate with one another in the virtual space and multiple remote users can be gathered into one VR space; and switching of these pages, the materials themselves, and scenes, that is, processing such as playing, stopping, frame-advancing, and switching of the materials including image and video data, is performed by simultaneous operation of the fingers of both hands or by controller operation, enabling users to effectively experience presentations, amusements, exhibitions, training, and the like with various materials in the VR space.
 上記の目的を達成するために本発明に係るVR映像空間生成システムは、一または複数のユーザがアクセス可能な仮想空間を構成するためのVR映像を生成するVR映像空間生成システムであって、ユーザが装着する一または複数からなるVR視聴装置と、前記VR視聴装置に表示可能な初期映像を生成する映像生成手段と、前記VR視聴装置の各々の位置情報を取得する位置情報取得手段と、前記位置情報取得手段により取得された各位置情報に基づき、前記映像生成手段で生成された初期映像にアバター映像を合成してVR映像を生成するVR映像生成手段と、前記VR映像生成手段により生成されたVR映像をVR視聴装置に出力する映像出力手段と、からなる構成である。 In order to achieve the above object, the VR video space generation system according to the present invention is a VR video space generation system that generates VR video for constructing a virtual space accessible to one or more users, and is a user. A VR viewing device including one or a plurality of VR viewing devices, a video generating means for generating an initial image that can be displayed on the VR viewing device, a position information acquiring means for acquiring the position information of each of the VR viewing devices, and the above. Based on each position information acquired by the position information acquisition means, it is generated by the VR image generation means that generates a VR image by synthesizing an avatar image with the initial image generated by the image generation means, and the VR image generation means. It is configured to include a video output means for outputting the VR video to a VR viewing device.
 また、前記VR映像空間生成システムは、ユーザが移動可能な領域をXYZ座標として数値的に定義する領域定義手段と、前記位置情報取得手段によって取得された各位置情報をXYZ座標として数値的に定義する位置定義手段と、前記領域定義手段によって定義された領域に、前記位置定義手段によって定義された各位置情報を座標値として導入(適用)するとともに、該領域と各位置情報とを、前記映像生成手段によって生成された初期映像に対応付ける対応付け手段と、を備える構成である。 Further, the VR video space generation system numerically defines an area defining means that numerically defines a region that can be moved by the user as XYZ coordinates, and each position information acquired by the position information acquisition means numerically as XYZ coordinates. Each position information defined by the position definition means is introduced (applied) as a coordinate value into the position definition means and the area defined by the area definition means, and the area and each position information are introduced into the video. The configuration includes a mapping means associated with the initial image generated by the generation means.
 また、本発明に係るVR映像空間生成システムは、一または複数のユーザがアクセス可能な仮想空間を構成するためのVR映像を生成するVR映像空間生成システムであって、ユーザが装着する一または複数からなるVR視聴装置と、前記VR視聴装置に表示可能な初期映像を生成する映像生成手段と、前記映像生成手段で生成された初期映像にVR視聴装置を装着する各ユーザのアバター映像を合成してVR映像を生成するVR映像生成手段と、前記VR映像生成手段により生成されたVR映像をVR視聴装置に出力する映像出力手段と、からなる構成である。また、前記VR映像空間生成システムは、ユーザが移動可能な領域をXYZ座標として数値的に定義する領域定義手段と、前記VR視聴装置を装着するユーザの各位置情報をXYZ座標として数値的に定義する位置定義手段と、前記領域定義手段によって定義された領域に、前記位置定義手段によって定義された各位置情報を座標値として導入(適用)するとともに、該領域と各位置情報とを、前記映像生成手段によって生成された初期映像に対応付ける対応付け手段と、を備える構成である。 Further, the VR video space generation system according to the present invention is a VR video space generation system that generates VR video for constructing a virtual space accessible to one or a plurality of users, and is one or a plurality of VR video space generation systems worn by the user. A VR viewing device consisting of a VR viewing device, a video generation means for generating an initial image that can be displayed on the VR viewing device, and an avatar image of each user who attaches the VR viewing device to the initial video generated by the video generation means. It is composed of a VR image generation means for generating a VR image and a video output means for outputting the VR image generated by the VR image generation means to a VR viewing device. Further, the VR video space generation system numerically defines the area defining means for numerically defining the area in which the user can move as XYZ coordinates and the position information of the user who wears the VR viewing device as XYZ coordinates. Each position information defined by the position definition means is introduced (applied) as a coordinate value into the position definition means and the area defined by the area definition means, and the area and each position information are introduced into the video. The configuration includes a mapping means associated with the initial image generated by the generation means.
　Further, the position definition means acquires information on the position of a specific part of the body of a user wearing a VR viewing device and computes information on the positions of the other parts of the body, and the VR video generation means renders on the basis of that information to generate the VR video displayed on the VR viewing device.
　Further, the initial video consists of video selected from among a planar projection video, a 3D video, or a dome video.
　Further, the initial video is created in a common file format and can easily be replaced without modifying the rest of the system; it is readable and independent, and includes any one or more of: planar projection video containing illustrations, 2D images, and text information that switch in response to user instructions; presentation video consisting of 2D moving images that operate in response to user instructions; 3D video including CG solid models and the like that operate in response to user instructions; and full-spherical video or portions thereof.
　Further, the initial video consists of independent video data readable by the video generation means.
　Further, the VR viewing device includes a sensor that detects the positions and movements of the fingers of both hands of the user wearing the device; the sensor detects the user's finger movements and acquires the movement information as tracking information. The VR video space generation system holds a plurality of pieces of action information, each consisting of finger movement information and each associated with a specific change process applied to the initial video; when the tracking information matches any of the pieces of action information, the VR video space generation system performs the associated change process on the initial video.
　Further, the action information consists of instruction information generated by the user moving the fingers of both hands simultaneously.
　Furthermore, the initial video includes images and moving images such as presentations, and the action information is associated with the advance and reverse processing of the initial video.
　Since the present invention is configured as described in detail above, it provides the following effects.
1. Because the position information of the VR viewing devices is acquired and the avatar videos are composited onto the initial video so as to correspond to that position information, each user's current position can be reflected and displayed within the VR video space, and multiple people can experience and share the VR video space simultaneously. Moreover, by performing a calibration process, even users at remote locations can likewise experience and share the VR video space.
2. Because the area of the VR video space and the positions of the users within it are defined by the area definition means and the position definition means, the actually movable area can be made to coincide with the area in the virtual space.
3. Because the VR video space generation system can also be configured without using the position information acquisition means, users whose actual current position information is not acquired, or is not acquired continuously, can still participate in the VR space by means of virtual position information within the virtual space.
4. Because the position definition means can numerically define, as XYZ coordinates, the position information of each user wearing a VR viewing device without relying on the position information acquisition means, multiple people can experience and share the VR video space simultaneously even when users are at remote locations, domestic or overseas.
5. Because the position definition means acquires information on the position of a specific part of the user's body and then computes the positions of the other parts of the body, it is possible, based on this accurate position information, to render and display in detail the video that should be shown on each VR viewing device.
6. Because the initial video consists of planar projection video, 3D video, dome video, and the like, materials of every format and type can be experienced in the VR space.
7. Because the initial video can include any one or more of: illustrations that switch in response to user instructions; planar projection video containing 2D images and text information; planar projection video consisting of 2D moving images that operate in response to user instructions; 3D video including CG solid models and the like that operate in response to user instructions; and full-spherical video or portions thereof, presentations, amusements, exhibitions, and training using various materials can be conducted within a VR video space shared by multiple users.
8. Because video that switches in response to user instructions is displayed at a fixed position on the VR viewing device as the initial video, these materials can be displayed within a VR video space shared by multiple users as images and video for presentations, amusements, exhibitions, and training.
9. Because the video generation means generates the initial video from readable, independent materials that conform to common file formats and can easily be replaced without modifying the system itself, a single VR video space generation system can display a variety of VR video spaces for users to experience.
10. Because the sensor acquires the user's finger movements as tracking information and compares them against preset action information, the user can intuitively issue processing instructions, such as operating or changing the initial video, by moving the fingers.
11. Because the action information consists of instruction information generated by the user moving the fingers of both hands simultaneously, the system responds only to processing instructions issued by moving both hands, reducing the risk of issuing erroneous processing instructions.
12. Because the action information includes instruction information corresponding to the advance and reverse processing of the initial video, presentations, amusements, exhibitions, and training that involve successive image changes within the VR video space can be advanced solely by the movements of the user's hands and fingers.
　Hereinafter, the VR video space generation system according to the present invention will be described in detail based on the embodiments shown in the drawings. FIG. 1a is a schematic diagram of a VR video space generation system according to the present invention, and FIG. 1b is a schematic diagram of a VR video space generation system provided with an external computer. FIG. 2 is a schematic diagram showing a display example of a VR video, and FIG. 3 is a diagram showing area and position information. FIG. 4a is a schematic diagram of a VR video space generation system provided to users at remote locations, and FIG. 4b is a schematic diagram of a VR video space generation system, provided with an external computer, that is provided to users at remote locations. FIG. 5 is a schematic diagram of a VR video space generation system that performs switching of video materials and scenes.
　As shown in FIGS. 1a and 1b, the VR video space generation system 1 according to the present invention comprises a VR viewing device 100, video generation means 200, position information acquisition means 300, VR video generation means 400, and video output means 500, and is a system for constructing a virtual space using virtual reality (VR) technology. It generates VR video for constructing a virtual space accessible to one or more users; one or more users can move and act freely within the generated VR video space, and the system can give users the sensation of actually being inside the virtual space. The VR video space generation system according to the present invention makes it possible to realize user communication within the virtual space, the delivery of presentations, the experience of attractions, exhibitions, training, and the like.
　Note that the "VR" video presented in the present invention is a concept that encompasses not only virtual reality but also AR (Augmented Reality), MR (Mixed Reality), and SR (Substitutional Reality).
　The VR viewing device 100 is one or more devices worn by users, and is a device for playing back and displaying the video used to reproduce the VR video 20 generated from the initial video 10. The device of this embodiment mainly consists of a goggle-type projection device, but is not limited to this; any projection device that the user can wear and that provides a sense of immersion in the video, such as a retinal projection system that forms and projects the image directly onto the retina, may be selected and used as appropriate.
　In this embodiment, as shown in FIG. 1a, the VR viewing device 100 is equipped with an arithmetic unit (not shown) and storage means 610 consisting of an arbitrary storage medium; the storage means 610 stores the initial video 10 and the VR video 20 generated from it. The VR viewing device 100 is a projection device for viewing VR video; in this embodiment it is mainly configured as a goggle-type projection device, but it is not limited to this, and projection devices of other structures may be used. The video displayed on the VR viewing device 100 mainly consists of video viewable over all or part of 360 degrees, and may consist of any of (1) a planar projection version (text, illustrations, and/or 2D video), (2) 3D video, or (3) three-dimensional full-spherical dome video.
　As shown in FIG. 1b, the VR viewing device 100 may also be configured to be connected to a computer 600 or to the cloud, wirelessly or by wire. In this configuration, the computer 600 or the cloud is equipped with an arithmetic unit (not shown) and storage means 610 consisting of an arbitrary storage medium; the storage means 610 stores the initial video 10 and the VR video 20 generated from it, and is also used to acquire, compute, and manage the position information and the like of the VR viewing devices 100. In this case, the initial video 10 and the VR video 20 generated from it can also be managed, computed, and held by the VR viewing device 100.
　The video generation means 200 is a device that generates the initial video 10 displayable on the VR viewing device 100. In this embodiment, the initial video 10 consists of materials (images, moving images, etc.) such as planar projection video, 3D video, or dome video; for example, it may be 360-degree full-spherical video, it may be planar video, it may be either 3DoF or 6DoF, and it may be video with parallax. The video generation means 200 generates the initial video 10 by converting these base video data into planar output video, which is a format displayable on the VR viewing device 100 (360-degree video and the like are also a kind of planar output video).
　In this embodiment, the video generation means 200 is configured such that the arithmetic unit of the VR viewing device 100 performs arithmetic processing on various data stored in the storage medium 610 incorporated in the VR viewing device 100 to generate the initial video 10; however, the configuration is not limited to this form. For example, a computer 600 or the cloud may be provided externally, and an arithmetic unit may generate the initial video 10 by processing various data stored in a storage medium 610 on the computer 600 or in the cloud. In this embodiment, the video generation means 200 consists of software that the arithmetic unit reads from the storage medium 610 and executes, but it is not limited to this.
　The position information acquisition means 300 is a means for acquiring the position information P of each VR viewing device 100. Users of the VR video space generation system 1 according to the present invention wear VR viewing devices 100, and each VR viewing device, or an external sensor, can grasp current position information on a specific part of each user's body. Based on the position information P, the positions of the other parts of the body are computed, the VR video generation means renders on the basis of that information, and each user's avatar V can thereby be displayed on the VR viewing devices 100.
　In this embodiment, the position information acquisition means 300 is provided in each VR viewing device 100, and the position information P is computed from the camera of the VR viewing device 100; however, the configuration is not limited to this, and other techniques may be used, such as installing external sensors and performing tracking by laser irradiation. In this embodiment, the position information acquisition means 300 consists of software that the arithmetic unit incorporated in the VR viewing device 100 reads from the storage medium 610 and executes, but it is not limited to this.
　Further, as shown in FIGS. 1a and 1b, the position information acquisition means 300 may include a configuration for acquiring, computing, and managing the position information of users at remote locations. In this case, the position information of the remote user is acquired, the position information acquisition means 300 performs a calibration process, the position information is assigned into the VR space in which the other users are present, and the video is processed so that the remote user appears to exist in that space. When the user moves at the remote location, the calibration process is likewise performed, and the video is processed so that the user appears to move in the same way within the VR space. With this configuration, one or more users at remote locations can simultaneously experience and share the VR video space via the avatar videos V; even users far apart from one another can move and act freely within the generated VR video space, and by gathering multiple users at remote locations into a single VR space and letting them communicate with one another, the experience within that space can be shared.
　In this case, the arithmetic unit may be configured to perform processing that shifts the center of each user's calibrated world (for example, moving a few centimeters right or left in the real world moves the user a few centimeters right or left from the center of the world in the VR space), or processing that makes another user's avatar, the scenery, or an object transparent (faint) when it overlaps with the user; other video processing that enhances the user's sense of immersion in the VR may also be incorporated.
　The above-described calibration process may also be configured to be performed by the position definition means 320 described later. With this configuration, each piece of position information P of the VR viewing devices 100 can be numerically defined as coordinates on XYZ axes.
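　By way of illustration only, the calibration just described can be pictured with the following minimal sketch; it is not the claimed implementation, and all names and values are assumptions introduced for the example. A remote user's locally tracked position is re-based into the shared VR space by subtracting the local origin recorded at calibration time and adding the anchor point assigned to that user in the shared space.

```python
# Minimal sketch of the calibration idea (illustrative names and values only).
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def __add__(self, o): return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)
    def __sub__(self, o): return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)

def calibrate_remote_position(local_pos: Vec3,
                              local_origin: Vec3,
                              shared_anchor: Vec3) -> Vec3:
    """Map a remote user's locally tracked position into the shared VR space.

    local_origin  -- the user's position when calibration was performed
    shared_anchor -- the point in the shared space assigned to that user
    """
    return shared_anchor + (local_pos - local_origin)

# Moving 0.2 m right and 0.4 m forward locally moves the avatar by the same
# offsets from its anchor in the shared space.
remote = calibrate_remote_position(Vec3(1.2, 1.6, 0.4),
                                   Vec3(1.0, 1.6, 0.0),
                                   Vec3(5.0, 0.0, 5.0))
print(remote)  # Vec3(x=5.2, y=0.0, z=5.4)
```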
　The VR video generation means 400 is a means for generating the VR video 20 by compositing the avatar videos V onto the initial video 10 generated by the video generation means 200. Based on the position information P of each VR viewing device 100 acquired by the position information acquisition means 300, the VR video generation means 400 identifies the position and orientation (or posture) at which each avatar should be placed within the initial video 10, and then generates the VR video 20 in which each user's avatar video V is composited onto the initial video 10. As a result, a VR video 20 is generated in which the actual position of each user wearing a VR viewing device 100 is reflected within the initial video 10, and, as shown in FIG. 2, a state can be constructed in which each user participates in the virtual space formed by the VR video 20. That is, each user can view video of the virtual space in which the other users are displayed, including the avatars of others as seen from the user's own position within that virtual space; the user can thus obtain the sensation of having entered the virtual space, and the sense of immersion in the VR video 20 is heightened.
　In this embodiment, the VR video generation means 400 consists of software that the arithmetic unit incorporated in the VR viewing device 100 reads from the storage medium 610 and executes; however, it is not limited to this. For example, a computer 600 or the cloud may be provided externally, and VR video generation means 400 installed on the computer 600 or in the cloud may perform the arithmetic processing to generate the VR video 20. It is also possible to migrate only some of the functions constituting the VR video space generation system according to the present invention to the cloud and perform each process there. Furthermore, the computer 600 or the cloud may acquire and manage the position information P while the VR video 20 is generated by the VR viewing device 100 that has received that information; any other manner of generating the VR video 20 may also be selected.
　The video output means 500 is a means for outputting, to each VR viewing device 100, the VR video 20 generated by the VR video generation means 400, in which each user's avatar video V is composited onto the initial video 10. Since the position and orientation of each VR viewing device 100 differ, the video output to each VR viewing device 100 also differs (see FIG. 2). In this embodiment, the video output means 500 consists of software that the arithmetic unit incorporated in the VR viewing device 100 reads from the storage medium 610 and executes; however, it is not limited to this configuration. For example, a computer 600 may be provided externally, and video output means 500 installed on the computer 600 may perform the arithmetic processing and output the VR video 20 to the VR viewing devices 100 by wire or wirelessly; similar processing may also be performed in the cloud.
　Next, the details of an embodiment of the VR video space generation system 1 will be described. As shown in FIG. 1a, in the VR video space generation system 1 according to the present invention, the arithmetic unit analyzes video of the space acquired by a camera mounted on the VR viewing device 100 worn by the user, associates it with video of the VR space prepared in advance, and projects the spatial video onto the VR viewing device 100. As described above, the camera mounted on each VR viewing device 100 acquires depth information, a virtual mesh model corresponding to the viewpoint position is created, the position information P is acquired and computed, and planar output video as seen from that angle (360-degree video and the like are also a kind of planar output video) is generated and displayed.
　With this configuration, the virtual space that each user sees through the VR viewing device 100 corresponds to the scenery visible in the real space including the other users, and becomes the VR video 20 in which the avatar videos V are combined with the video of the virtual space.
　That is, this configuration makes it possible to give users the sensation of actually being inside the virtual space; the real world and the virtual world coexist, one or more users can move and act freely within the generated VR video space, and multiple users can communicate with one another within the virtual space.
　As an embodiment of the present invention, a case will be described in which the VR viewing device 100 is configured to centrally manage the various information. In this case, as shown in FIG. 1a, the VR video space generation system 1 comprises area definition means 310, position definition means 320, and association means 410 within the VR viewing device 100.
　As shown in FIG. 3, for example, the area definition means 310 is a means for defining the area F within which users can move in the virtual space generated by the VR video space generation system 1. In this embodiment, it is conceivable, for example, to define it numerically as three-dimensional coordinates on XYZ axes. Here, the area F coincides with a fixed region provided in the real space; that region is the actual space in which users move about at will or are seated.
　The area definition means 310 defines a virtual space coinciding with this real space as the area F. The area F is defined numerically, for example, as coordinates on XYZ axes, but is not limited to this; configurations in which the VR space is grasped and managed using other area management means are also possible.
　In this embodiment, the area definition means 310 consists of software that the arithmetic unit of the VR viewing device 100 reads from the storage medium 610 and executes, in particular a software module or the like incorporated into the position information acquisition means 300; however, it is not limited to this, and it may be configured as independent software, as software incorporated into separately provided hardware, or as processing in the cloud.
　The position definition means 320 is a means for numerically defining each piece of position information P of the VR viewing devices 100 as coordinates on XYZ axes. When the X, Y, and/or Z value of a user's position information exceeds its maximum value, exception handling is conceivable, such as displaying a warning message on the VR viewing device 100.
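　A minimal, self-contained sketch of the exception handling just described follows; the extents and messages are assumptions for illustration, not values from the present invention. A position P, numerically defined as (x, y, z), is checked against the maxima of the movable area F, and a warning is produced when any coordinate exceeds its maximum.

```python
# Sketch: bounds check of position information P against area F (assumed values).
AREA_F_MAX = (5.0, 2.5, 5.0)  # assumed X/Y/Z extents of the movable area F

def check_position(p: tuple[float, float, float]) -> str | None:
    """Return a warning message if P lies outside area F, else None."""
    for value, maximum, axis in zip(p, AREA_F_MAX, "XYZ"):
        if value > maximum:
            return f"Warning: {axis} position {value} exceeds the area limit {maximum}."
    return None

print(check_position((1.0, 1.5, 6.2)))  # -> warning for the Z axis
```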
　In this embodiment, the position definition means 320 consists of software that the arithmetic unit of the VR viewing device 100 reads from the storage medium 610 and executes, in particular a software module or the like incorporated into the position information acquisition means 300; however, it is not limited to this, and it may of course be configured as independent software, as software incorporated into separately provided hardware, or as processing in the cloud.
　The association means 410 is a means for introducing and applying one or more pieces of position information P into the area F and then associating them with the initial video 10. Specifically, the association means 410 introduces (applies) the position information P defined by the position definition means 320, as coordinate values, into the area F defined by the area definition means 310, and then associates the area F and each piece of position information P with the initial video 10 generated by the video generation means 200. Concretely, in this embodiment, a configuration using techniques related to inverse kinematics is possible, for example. The details of this association will be described later.
　For example, when there are multiple users wearing VR viewing devices 100 in a real space, each piece of position information P is defined so as to coincide with the position of the corresponding user in the real space, within the area F that is defined to match that real space. Then, because this information is applied to the initial video 10, each user's avatar video V is displayed within the initial video 10 at the position where that user actually is.
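　The following minimal sketch illustrates this association under assumed names and coordinates only: positions P tracked in the real space are introduced into the defined area F and converted to placements in the initial video's coordinate frame, so each avatar V appears where its user actually stands.

```python
# Sketch: applying position information P into area F and the initial video.
AREA_ORIGIN = (0.0, 0.0, 0.0)     # assumed real-space origin of area F
SCENE_ORIGIN = (10.0, 0.0, -4.0)  # assumed origin of area F inside the initial video

def to_scene_coords(p):
    """Convert position P (real-space XYZ) into the initial video's frame."""
    return tuple(s + (v - a) for v, a, s in zip(p, AREA_ORIGIN, SCENE_ORIGIN))

# Two users tracked in the real space; their avatars are placed accordingly.
positions = {"user_a": (1.0, 1.6, 2.0), "user_b": (3.5, 1.7, 0.5)}
avatar_placements = {u: to_scene_coords(p) for u, p in positions.items()}
print(avatar_placements)  # user_a -> (11.0, 1.6, -2.0), user_b -> (13.5, 1.7, -3.5)
```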
　In this embodiment, the association means 410 consists of software that the arithmetic unit of the VR viewing device 100 reads from the storage medium 610 and executes, in particular a software module or the like incorporated into the VR video generation means 400; however, it is not limited to this, and it may of course be configured as independent software, as software incorporated into separately provided hardware, or as processing in the cloud.
　As an embodiment of the present invention, the various information may be managed on a computer 600 or in the cloud. In this case, as shown in FIG. 1b, the VR video space generation system 1 is configured to comprise the area definition means 310, the position definition means 320, and the association means 410 within the computer 600.
　As yet another embodiment, it is also possible to manage the position information P by the position definition means 320 on the computer 600 or in the cloud, while all other arithmetic processing is performed by the VR viewing devices 100.
　As another embodiment of the present invention, the VR video space generation system 2 may be configured without using the position information acquisition means 300, as shown in FIGS. 4a and 4b. That is, in this configuration the actual current position of each user's VR viewing device 100 is not acquired; instead, the position definition means 320 computes virtual position information P for each user within the virtual space and defines it numerically as coordinates on XYZ axes, and using this position information P, the VR video generation means 400 generates the VR video 20 by compositing, onto the initial video 10 generated by the video generation means 200, the avatar video V of each user wearing a VR viewing device 100. In this case, it is conceivable to set arbitrary positions so that the avatars of remote users who are not physically present do not overlap one another.
　As an embodiment according to the present invention, a configuration in which the VR viewing device 100 centrally manages the various information will be described, as shown in FIG. 4a. In this configuration, the position definition means 320, while communicating with the other VR viewing devices 100, sets the position information P arbitrarily on the assumption that the user is at an arbitrary place in the real space, and defines it numerically as coordinates on XYZ axes. With this configuration, one or more users at remote locations can simultaneously experience and share the VR video space via the avatar videos V; even users far apart from one another can move and act freely within the generated VR video space, and by gathering multiple users at remote locations into a single VR space and letting them communicate with one another, the experience within that space can be shared.
　In this case, the arithmetic unit may be configured to perform processing that shifts the center of each user's calibrated world, or processing that makes another user's avatar, the scenery, or an object transparent (faint) when it overlaps with the user; other video processing that enhances the user's sense of immersion in the VR may also be incorporated. The above calibration process may be configured to be performed by the position definition means 320. With this configuration, each piece of position information P of the VR viewing devices 100 can be numerically defined as coordinates on XYZ axes.
　As another embodiment, as shown in FIG. 4b, an external computer 600 may be provided, or the cloud may be used, to centrally manage the various information. In this configuration, the position definition means 320 on the computer 600 or in the cloud communicates with each VR viewing device 100 and then sets the position information P arbitrarily, on the assumption that the user is at an arbitrary place in the real space, and defines it numerically as coordinates on XYZ axes.
　The position definition means 320 according to the present invention is configured to acquire information on the position and the like of a specific part of the body of the user wearing the VR viewing device 100 and then compute the positions of the other parts of the body. The association means 410 associates the area F and the position information P containing this information with the initial video 10 generated by the video generation means 200. The VR video generation means 400 renders on the basis of this information to generate the VR video 20 displayed on the VR viewing device 100. With this configuration, the VR video 20 displayed on the VR viewing device 100 worn by the user accurately follows the user's movements, and the user can obtain a sense of immersion in the VR video 20 as if the virtual space were real.
　In this embodiment, techniques related to inverse kinematics can be used to generate the avatar videos V. Inverse kinematics is a technique for computing the positions, rotation angles, and so on of higher-level objects in a hierarchically structured object by specifying the target position of a lower-level object, and it is used here to compute the motion of the avatar videos V. In this embodiment, an arithmetic unit is incorporated into the VR viewing device 100 or the computer 600, and the position definition means 320 acquires information on the position and the like of a specific part of the user's body. The arithmetic unit processes this information using inverse kinematics techniques to identify the positions of the other parts of the body, the association means 410 performs the association processing, and the VR video 20 displayed on the VR viewing device 100 is then generated.
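　A minimal sketch of the inverse-kinematics idea follows; it is a standard two-link analytic solution in 2D, not the implementation of the present invention, and the link lengths and names are assumptions for illustration. Given the tracked position of a lower-level object (a hand) relative to a higher-level object (a shoulder), the joint angles of an upper-arm/forearm chain are computed so the rest of the limb can be drawn.

```python
# Sketch: two-link inverse kinematics (assumed link lengths, 2D for brevity).
import math

def two_link_ik(target_x, target_y, l1=0.30, l2=0.25):
    """Return (shoulder_angle, elbow_angle) in radians placing the hand at the target."""
    d2 = target_x**2 + target_y**2
    # Law of cosines for the elbow; clamp for numerical safety and reachability.
    cos_elbow = max(-1.0, min(1.0, (d2 - l1**2 - l2**2) / (2 * l1 * l2)))
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(target_y, target_x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

print(two_link_ik(0.4, 0.2))  # estimated arm pose for a tracked hand position
```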
　With the above configuration, some users can experience a virtual space superimposed on a real space, while other users can participate in that virtual space from a real space provided at a remote location.
　For example, in a situation where multiple users view a presentation in the virtual space, users at the presentation venue view the presentation through their VR viewing devices 100 within the virtual space superimposed on the real space, and users at remote locations likewise view the presentation through their VR viewing devices 100 within the same virtual space superimposed on their own real spaces. Since the avatar videos V of the remote users are displayed in the VR video 20, the users at the presentation venue can perceive the remote users as being in the same place; the remote users, in turn, obtain the experience of being at the presentation venue through their VR viewing devices 100.
　In this embodiment, the VR video space generation systems 1 and 2 support 6DoF or 3DoF. 3DoF refers to a VR viewing device 100 that responds to three movements, rotation about the X, Y, and Z axes, sensing the rotation and tilt of the head on which the VR viewing device 100 is worn. 6DoF refers to a configuration that responds to six movements, adding translation along the X, Y, and Z axes to the movements of 3DoF. By supporting 6DoF, a strong sense of immersion in the virtual space can be obtained whatever movements the user wearing the VR viewing device 100 makes; a sufficient sense of immersion can also be obtained with 3DoF support.
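　As a small illustration of the distinction, under assumed field names only, the two pose types can be modeled as data structures: a 3DoF pose carries only rotations about the three axes, while a 6DoF pose adds translations along the same axes.

```python
# Sketch: 3DoF vs. 6DoF head pose (assumed field names).
from dataclasses import dataclass

@dataclass
class Pose3DoF:
    pitch: float  # rotation about X
    yaw: float    # rotation about Y
    roll: float   # rotation about Z

@dataclass
class Pose6DoF(Pose3DoF):
    x: float = 0.0  # translation along X
    y: float = 0.0  # translation along Y
    z: float = 0.0  # translation along Z

head = Pose6DoF(pitch=0.0, yaw=1.57, roll=0.0, x=0.2, y=1.6, z=-0.5)
print(head)
```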
　In particular, by letting the user experience 6DoF first and then showing existing 3DoF video, the user can enjoy the 3DoF video while retaining the sense of immersion produced by 6DoF. That is, by exploiting the illusion created by the 6DoF experience, a large number of existing 3DoF videos can be shown while the 6DoF sense of immersion is maintained.
　Also, for example, video equivalent to what is visible within the range covered by a captured 3DoF video can be generated by CG in 6DoF; within the 6DoF experience the user is guided to the position from which the 3DoF video was shot, the 6DoF video is switched to the 3DoF video at the optimal position where the sense of immersion can be maintained, and by presenting this continuously, the 3DoF video can be shown while the 6DoF sense of immersion is maintained.
　In this embodiment, audio can also be shared. That is, voices uttered by users, sounds playing within the VR video, music, and the like are reproduced simultaneously through speakers or the like provided in the VR viewing devices 100 of all users. This allows all users participating in the same VR space to obtain the experience of sharing that space.
　That is, the VR video space generation systems 1 and 2 according to the present invention give each user the sensation of actually being inside the virtual space, allow users to communicate with one another within the virtual space, and make it possible to gather multiple users at remote locations into a single VR space, so that presentations, amusements, exhibitions, training, and the like can be experienced within the VR space.
　The VR video space generation system 1 according to the present invention can also be operated purely locally, without using an external network such as the Internet. That is, presentations and the like can be conducted using the VR video space generation system 1 even in situations where no network environment with external connectivity, such as the Internet, is available.
　In this embodiment, the initial video 10 is generated in common file formats including PNG, MP4, and the like, and consists of readable, independent planar projection video, 3D video, dome video, or the like that can easily be replaced without modifying the main body of the VR video space generation system. With this configuration, users of the VR video space generation systems 1 and 2 can experience materials of every format and type in the VR space.
　The initial video 10 may also be configured to include planar projection video containing illustrations, 2D images, and text information that switch in response to user instructions, or planar projection video consisting of 2D moving images that operate in response to user instructions, each generated in common file formats as readable, independent material that can easily be replaced without modifying the main body of the VR video space generation system. It may also be configured to include CG solid models and 3D video that operate in response to user instructions, or full-spherical video, or portions thereof, that operates in response to user instructions.
　Each of these videos is generated in common file formats such as PNG and MP4, and is independent video that can be fixedly embedded within the VR space (within the initial video 10), always displayed in the direction the user's face is turned, or presented by any other display method. Moreover, because they can be embedded arbitrarily, they can easily be replaced without modifying the rest of the system.
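　The replaceable-material idea can be sketched as follows; the file paths, field names, and modes are assumptions for illustration, not part of the present invention. Each material is an ordinary PNG/MP4-style file plus a projection type and an anchoring mode, so swapping a presentation means swapping files rather than modifying the system.

```python
# Sketch: replaceable materials in common file formats (assumed names only).
from dataclasses import dataclass

@dataclass
class Material:
    path: str        # ordinary file in a common format (e.g. PNG, MP4)
    projection: str  # "plane", "3d", or "dome"
    anchor: str      # "fixed" in the space, or "head" (follows the user's gaze)

SCENE_MATERIALS = [
    Material("slides/intro.png", projection="plane", anchor="fixed"),
    Material("videos/tour.mp4", projection="dome", anchor="fixed"),
    Material("videos/instructor.mp4", projection="plane", anchor="head"),
]
```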
　With this configuration, users can view planar projection video, 3D video, and full-spherical video embedded in the virtual space; as a result, users can easily experience and share presentations, amusements, exhibitions, and training for introducing the structure and specifications of automobiles, real estate, and the like. A wide variety of other attractions can likewise be experienced and shared by multiple users.
　Furthermore, the initial video 10 can be configured to consist of independent video data readable by the video generation means 200. With this configuration, the video generation means 200 can read any initial video 10 from various existing materials, including image and moving-image data generated in common file formats, and a single VR video space generation system can display any desired VR video space for users to experience.
　For example, in the VR video space generation systems 1 and 2 according to the present invention, presentation materials created in another format are inserted and displayed at a fixed position within the VR space in which the users participate (for example, the VR space is an auditorium, and the presentation materials are fixedly embedded behind the auditorium's lectern). This allows users to share the presentation within the VR space, and the presentation materials can easily be replaced.
　It is also possible, for example, to include video within the VR space. With this configuration, for example, when physical exercise training is experienced with the VR video space generation systems 1 and 2 according to the present invention, the video of an instructor embedded in the virtual space can always be displayed in front of the user; whatever posture the user takes, the projected video can always be displayed at a specific position (for example, right before the user's eyes), providing VR video space generation systems 1 and 2 that are highly convenient for the user.
　As shown in FIGS. 3 and 5, the VR viewing device 100 in this embodiment includes a sensor 110. The sensor 110 is a sensor that detects the positions and movements of the fingers of both hands of the user wearing the VR viewing device 100. The sensor 110 detects the movements of the hands and fingers of the user wearing the VR viewing device 100, and acquires information on those movements as tracking information T. The tracking information T is information on the movements of the fingers of both hands of the user, consisting of information on a series of hand and finger movements within a fixed period of time. The sensor 110 may also be installed on a device other than the VR viewing device 100 and configured to detect the positions and movements of the fingers of both hands of the user wearing the VR viewing device 100.
　For example, the path of a hand or finger from one point to another, such as a motion drawing a single vertical or horizontal line or a motion tracing a figure of eight, is traced separately for the left and right hands and acquired and stored as tracking information T.
　The VR video space generation systems 1 and 2 also hold action information A. The action information A is information tracking a series of finger movements within a fixed period of time; in this embodiment, multiple patterns of action information A are held. In this embodiment, the action information A consists of data stored in the storage medium 610 of the VR viewing device 100 or the computer 600.
　In this embodiment, each of the multiple pieces of action information A is associated with a specific change process applied to the initial video 10. When the acquired tracking information T matches any of the stored pieces of action information A, the change process associated with that piece is performed on the initial video 10.
　For example, assume that the initial video 10 includes presentation images and the like and that the action information A is associated with the advance and reverse processing of the initial video. Suppose an action of moving a finger from right to left is registered and held as action information A, and that this action information A is associated with an image change process that switches, one after another, the presentation video displayed in a portion of the initial video 10. When the user wearing the VR viewing device 100 moves a finger, the sensor 110 detects the position and movement of that finger and acquires information on the movement as tracking information T. The VR video space generation systems 1 and 2 compare the tracking information T tracing the finger movement against the action information A, and when they are judged to be the same, perform the associated image process of switching the presentation video.
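　A minimal sketch of this matching follows, under assumed names and thresholds only: tracking information T (a short trace of sampled finger positions) is classified against registered actions, and a matching action triggers the associated change to the initial video, here the advance of a slide index.

```python
# Sketch: matching tracking information T against action information A.
def classify(trace):
    """Return an action name for a trace of (x, y) finger samples, or None."""
    dx = trace[-1][0] - trace[0][0]
    if dx < -0.2:   # net movement to the left beyond an assumed threshold
        return "swipe_left"
    if dx > 0.2:
        return "swipe_right"
    return None

ACTIONS = {"swipe_left": +1, "swipe_right": -1}  # advance / reverse the slides

slide = 0
trace_t = [(0.45, 0.0), (0.30, 0.01), (0.10, 0.0)]  # sampled finger positions
action = classify(trace_t)
if action in ACTIONS:
    slide += ACTIONS[action]  # change process associated with the matched action
print(slide)  # -> 1
```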
　In addition, any motion can be registered and held as action information A, such as a motion of touching the thumb to the index or middle finger to form a circle. With this configuration, when one of the users participating in the virtual space gives a presentation, attraction, exhibition, or training session, that presenter can switch video materials and scenes one after another merely by moving the fingers, making it possible to deliver smoother and more compelling presentations, attractions, exhibitions, and training.
　In this embodiment, the action information A notably consists of instruction information generated by the user moving the fingers of both hands simultaneously. For example, assume that the initial video 10 includes a presentation video consisting of multiple slides and that the action information A is associated with the advance and reverse processing of the presentation video. A motion of touching the thumb and index finger, or the thumb and middle finger, of both hands together is registered as action information A, and this motion is made to correspond to the advance (or reverse) processing of the presentation video. When the user giving the presentation in the virtual space touches the thumb and index finger, or thumb and middle finger, of both hands together, the VR video space generation systems 1 and 2 sense and acquire the tracking information T tracing those finger movements, compare it against the action information A, determine that they match, and perform the advance processing of the presentation video associated with that action information A.
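　The both-hands rule can be sketched as follows; the fingertip coordinates and contact threshold are assumptions for illustration. An advance instruction fires only when a pinch (thumb touching index finger) is detected on both hands within the same sampling window, which is what suppresses the misfires a one-handed gesture is prone to.

```python
# Sketch: instruction fires only when BOTH hands pinch (assumed threshold).
PINCH_DIST = 0.02  # assumed contact threshold in metres

def is_pinch(thumb, index):
    return sum((a - b) ** 2 for a, b in zip(thumb, index)) ** 0.5 < PINCH_DIST

def advance_requested(left_hand, right_hand):
    """left_hand/right_hand: dicts of fingertip positions from the sensor."""
    return (is_pinch(left_hand["thumb"], left_hand["index"]) and
            is_pinch(right_hand["thumb"], right_hand["index"]))

left = {"thumb": (0.10, 1.20, 0.30), "index": (0.11, 1.21, 0.30)}
right = {"thumb": (0.60, 1.22, 0.31), "index": (0.61, 1.21, 0.31)}
print(advance_requested(left, right))  # -> True: both hands pinched
```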
 Tracking information T derived from one hand alone makes it possible to instruct change processing of the initial video 10 simply and elegantly, but it carries a significant risk: an ambiguous user movement can trigger a malfunction. By requiring both hands to act simultaneously, this configuration prevents or suppresses, for example, misfired screen switches during a presentation and image changes the user did not intend.
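 A sketch of this two-handed guard, under the same illustrative assumptions as the sketch above (fingertip distances via math.dist and a hypothetical 2 cm contact threshold): the change process fires only when both hands pinch in the same sensor frame.

```python
import math
from typing import Dict, Tuple

Point = Tuple[float, float, float]
PINCH_DISTANCE = 0.02  # assumed contact threshold of 2 cm between fingertips

def is_pinch(hand: Dict[str, Point]) -> bool:
    """A hand counts as pinching when thumb and index (or middle) fingertips touch."""
    return (math.dist(hand["thumb"], hand["index"]) < PINCH_DISTANCE
            or math.dist(hand["thumb"], hand["middle"]) < PINCH_DISTANCE)

def two_hand_gesture(left: Dict[str, Point], right: Dict[str, Point]) -> bool:
    """Fire only when BOTH hands pinch simultaneously, so a stray one-handed
    movement cannot trigger an unintended image change."""
    return is_pinch(left) and is_pinch(right)

left = {"thumb": (0.10, 1.0, 0.3), "index": (0.11, 1.0, 0.3), "middle": (0.20, 1.0, 0.3)}
right = {"thumb": (0.50, 1.0, 0.3), "index": (0.51, 1.0, 0.3), "middle": (0.60, 1.0, 0.3)}
if two_hand_gesture(left, right):
    print("advance the presentation video")  # the associated forward processing
```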
 With the above configuration, users feel as if they are actually inside the virtual space; they can communicate with one another there, and multiple users in remote locations can gather in a single VR space. By projecting various materials, including independent, readable images and videos generated in common file formats, directly into the VR video space, the system gives users the sensation of viewing and experiencing them within the virtual space; and by letting them switch the pages of those materials, the materials themselves, and the scenes through simultaneous two-handed finger operations, it makes presentations, amusements, exhibitions, training, and the like in the VR space something that can be experienced and shared with a very strong sense of presence.
Brief description of the drawings:
Schematic diagram of the VR video space generation system according to the present invention
Schematic diagram of a VR video space generation system equipped with an external computer
Schematic diagram showing a display example of a VR video
Diagram showing the region and position information
Schematic diagram of a VR video space generation system provided to users in remote locations
Schematic diagram of a VR video space generation system, equipped with an external computer, provided to users in remote locations
Schematic diagram of a VR video space generation system that performs switching of video materials and scenes
Reference signs:
1, 2  VR video space generation system
10  Initial video
20  VR video
100  VR viewing device
110  Sensor
200  Video generation means
300  Position information acquisition means
310  Region definition means
320  Position definition means
400  VR video generation means
410  Association means
500  Video output means
600  Computer
610  Storage means
A  Action information
F  Region
P  Position information
T  Tracking information
V  Avatar

Claims (12)

  1.  A VR video space generation system (1) for generating a VR video that constitutes a virtual space accessible to one or more users, the system comprising:
     one or more VR viewing devices (100) worn by users;
     video generation means (200) for generating an initial video (10) displayable on the VR viewing devices;
     position information acquisition means (300) for acquiring position information (P) of each of the VR viewing devices;
     VR video generation means (400) for generating a VR video (20) by compositing an avatar video (V) into the initial video generated by the video generation means, based on each piece of position information acquired by the position information acquisition means; and
     video output means (500) for outputting the VR video generated by the VR video generation means to the VR viewing devices.
  2.  The VR video space generation system according to claim 1, wherein the VR video space generation system (1) comprises:
     region definition means (310) for numerically defining, as XYZ coordinates, a region (F) in which a user can move;
     position definition means (320) for numerically defining, as XYZ coordinates, each piece of position information (P) acquired by the position information acquisition means; and
     association means (410) for introducing (applying), as coordinate values, each piece of position information (P) defined by the position definition means into the region (F) defined by the region definition means, and for associating the region and each piece of position information with the initial video (10) generated by the video generation means.
  3.  A VR video space generation system (2) for generating a VR video that constitutes a virtual space accessible to one or more users, the system comprising:
     one or more VR viewing devices (100) worn by users;
     video generation means (200) for generating an initial video (10) displayable on the VR viewing devices;
     VR video generation means (400) for generating a VR video (20) by compositing, into the initial video generated by the video generation means, an avatar video (V) of each user wearing a VR viewing device; and
     video output means (500) for outputting the VR video generated by the VR video generation means to the VR viewing devices.
  4.  The VR video space generation system according to claim 3, wherein the VR video space generation system (2) comprises:
     region definition means (310) for numerically defining, as XYZ coordinates, a region (F) in which a user can move;
     position definition means (320) for numerically defining, as XYZ coordinates, each piece of position information (P) of the users wearing the VR viewing devices; and
     association means (410) for introducing (applying), as coordinate values, each piece of position information (P) defined by the position definition means into the region (F) defined by the region definition means, and for associating the region and each piece of position information with the initial video (10) generated by the video generation means.
  5.  The VR video space generation system according to any one of claims 1 to 4, wherein the position definition means (320) acquires information on the position of a specific part of the body of the user wearing the VR viewing device (100) and computes the positions of the other parts of the body, and
     the VR video generation means (400) performs drawing based on that information to generate the VR video (20) displayed on the VR viewing device (100).
  6.  The VR video space generation system according to any one of claims 1 to 5, wherein the initial video (10) consists of a video selected from a plane projection video, a 3D video, or a dome video.
  7.  The VR video space generation system according to any one of claims 1 to 6, wherein the initial video (10) includes any one or more of: a plane projection video containing illustrations, 2D images, and text information that switch according to user instructions; a plane projection video consisting of a 2D moving image that operates according to user instructions; a CG three-dimensional model that operates according to user instructions; a 3D video; and a full spherical (360-degree) video, or a partial video thereof, that operates according to user instructions.
  8.  The VR video space generation system according to any one of claims 1 to 7, wherein the initial video (10) causes any one or more of: a plane projection video containing illustrations, 2D images, and text information that switch according to user instructions; a plane projection video consisting of a 2D moving image that operates according to user instructions; a CG three-dimensional model that operates according to user instructions; a 3D video; and a full spherical video, or a partial video thereof, that operates according to user instructions, to be displayed at a fixed position of the VR viewing device (100).
  9.  The VR video space generation system according to any one of claims 1 to 8, wherein the initial video (10) includes independent video data that is generated in a common file format, is readable by the video generation means (200), and can easily be replaced without modifying the VR video space generation system itself.
  10.  The VR video space generation system according to any one of claims 1 to 9, wherein the VR viewing device (100) comprises a sensor (110) that detects the positions and movements of the fingers of both hands of the user wearing the device, the sensor detecting a movement of the user's fingers and acquiring the movement data as tracking information (T);
     the VR video space generation system holds a plurality of pieces of action information (A) consisting of finger movement data, each piece of action information being associated with a specific change process of the initial video (10); and
     when the tracking information (T) matches any of the action information (A), the VR video space generation system performs the associated change process of the initial video.
  11.  The VR video space generation system according to claim 10, wherein the action information (A) consists of instruction information generated by moving the fingers of both of the user's hands simultaneously.
  12.  The VR video space generation system according to claim 10 or claim 11, wherein the initial video (10) includes all or part of a plane projection video, a 3D video, or a dome video, and the action information (A) is associated with forward and/or backward processing of the initial video (10).
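 As a reading aid for the coordinate handling in claims 2 and 4, here is a minimal sketch assuming a box-shaped region F and hypothetical names (Region, associate, field_f); the claims do not prescribe any particular data representation.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

XYZ = Tuple[float, float, float]

@dataclass
class Region:
    """Region F of claims 2 and 4: the space a user may move in, as XYZ bounds."""
    min_corner: XYZ
    max_corner: XYZ

    def clamp(self, p: XYZ) -> XYZ:
        """Introduce a position P into F as a coordinate value, kept inside the bounds."""
        return tuple(min(max(v, lo), hi)
                     for v, lo, hi in zip(p, self.min_corner, self.max_corner))

def associate(region: Region, positions: Dict[str, XYZ]) -> Dict[str, XYZ]:
    """Association means 410 (sketched): tie each device's P to a coordinate in F,
    at which an avatar video V would be composited into the initial video."""
    return {device: region.clamp(p) for device, p in positions.items()}

field_f = Region((0.0, 0.0, 0.0), (5.0, 2.5, 5.0))  # an assumed 5 m x 2.5 m x 5 m space
print(associate(field_f, {"device_1": (1.2, 1.6, 3.4), "device_2": (6.0, 1.7, 2.0)}))
# device_2 lies outside F on the X axis and is clamped to its boundary.
```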
PCT/JP2020/043282 2020-11-19 2020-11-19 Vr image space generation system WO2022107294A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022563511A JPWO2022107294A1 (en) 2020-11-19 2020-11-19
PCT/JP2020/043282 WO2022107294A1 (en) 2020-11-19 2020-11-19 Vr image space generation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/043282 WO2022107294A1 (en) 2020-11-19 2020-11-19 Vr image space generation system

Publications (1)

Publication Number Publication Date
WO2022107294A1 (en) 2022-05-27

Family

ID=81708626

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/043282 WO2022107294A1 (en) 2020-11-19 2020-11-19 Vr image space generation system

Country Status (2)

Country Link
JP (1) JPWO2022107294A1 (en)
WO (1) WO2022107294A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024131204A1 (en) * 2022-12-23 2024-06-27 南京欧珀软件科技有限公司 Method for interaction of devices in virtual scene and related product

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6203369B1 (en) * 2016-12-08 2017-09-27 株式会社コロプラ Information processing method and program for causing computer to execute information processing method
JP2017529635A (en) * 2014-06-14 2017-10-05 マジック リープ, インコーポレイテッドMagic Leap,Inc. Methods and systems for creating virtual and augmented reality
WO2020129115A1 (en) * 2018-12-17 2020-06-25 株式会社ソニー・インタラクティブエンタテインメント Information processing system, information processing method and computer program

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10156908B2 (en) * 2015-04-15 2018-12-18 Sony Interactive Entertainment Inc. Pinch and hold gesture navigation on a head-mounted display
JP6470374B1 (en) * 2017-10-03 2019-02-13 株式会社コロプラ Program and information processing apparatus executed by computer to provide virtual reality

Also Published As

Publication number Publication date
JPWO2022107294A1 (en) 2022-05-27

Similar Documents

Publication Publication Date Title
Anthes et al. State of the art of virtual reality technology
JP7109408B2 (en) Wide range simultaneous remote digital presentation world
US7626569B2 (en) Movable audio/video communication interface system
Stavness et al. pCubee: a perspective-corrected handheld cubic display
Komiyama et al. JackIn space: designing a seamless transition between first and third person view for effective telepresence collaborations
JP5739922B2 (en) Virtual interactive presence system and method
JP2022549853A (en) Individual visibility in shared space
KR100809479B1 (en) Face mounted display apparatus and method for mixed reality environment
JP2023513747A (en) 3D object annotation
JP7464694B2 (en) Spatial Command and Guidance in Mixed Reality
US20160225188A1 (en) Virtual-reality presentation volume within which human participants freely move while experiencing a virtual environment
US20060250392A1 (en) Three dimensional horizontal perspective workstation
CN115516364B (en) Tool bridge
JP2023514572A (en) session manager
JP2019175323A (en) Simulation system and program
Peterson Virtual Reality, Augmented Reality, and Mixed Reality Definitions
CN108830944B (en) Optical perspective three-dimensional near-to-eye display system and display method
WO2022107294A1 (en) Vr image space generation system
Luna Introduction to virtual reality
JP2022554200A (en) non-uniform stereo rendering
CN117806457A (en) Presentation in a multi-user communication session
JP7104539B2 (en) Simulation system and program
Nesamalar et al. An introduction to virtual reality techniques and its applications
JPWO2021059360A1 (en) Animation production system
WO2021153413A1 (en) Information processing device, information processing system, and information processing method

Legal Events

Date Code Title Description
121   EP: the EPO has been informed by WIPO that EP was designated in this application
      Ref document number: 20962457; Country of ref document: EP; Kind code of ref document: A1
ENP   Entry into the national phase
      Ref document number: 2022563511; Country of ref document: JP; Kind code of ref document: A
NENP  Non-entry into the national phase
      Ref country code: DE
122   EP: PCT application non-entry in European phase
      Ref document number: 20962457; Country of ref document: EP; Kind code of ref document: A1