WO2018198777A1 - Virtual reality image provision device and virtual reality image provision program - Google Patents

Virtual reality image provision device and virtual reality image provision program Download PDF

Info

Publication number
WO2018198777A1
WO2018198777A1 (PCT/JP2018/015260)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual reality
avatar
image
data
hmd
Prior art date
Application number
PCT/JP2018/015260
Other languages
French (fr)
Japanese (ja)
Inventor
拓宏 水野
譲誉 野村
Original Assignee
株式会社アルファコード
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社アルファコード
Priority to JP2018563937A priority Critical patent/JP6506486B2/en
Publication of WO2018198777A1 publication Critical patent/WO2018198777A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • The present invention relates to a virtual reality image providing apparatus and a virtual reality image providing program, and is particularly suitable for a virtual reality image providing apparatus configured to display viewers' avatar images in a virtual reality space.
  • VR: virtual reality
  • Patent Document 1 discloses a configuration in which, when an avatar image is displayed in a three-dimensional space, the avatar is positioned based on position information of the eyeballs of a user wearing an HMD. Specifically, it describes positioning a person's head and moving the avatar based on the position information of two eyeballs, a first eyeball and a second eyeball.
  • Patent Document 2 discloses that the direction in which the user is looking in the virtual reality space can be easily determined by changing the orientation of the head of the avatar.
  • Specifically, it describes that the three-dimensional output server sequentially transmits screen data to be displayed to the HMD while receiving data on the rotation of the HMD in the yaw and pitch directions from the HMD, and, based on the received rotation data, changes the screen data and the avatar data of the user corresponding to the HMD.
  • the three-dimensional output server sequentially transmits screen data to a plurality of HMDs.
  • The screen data to be transmitted includes avatar data, which relates to the avatar serving as the alter ego of each user wearing an HMD, and spatial data, which relates to the virtual three-dimensional space.
  • Each user's HMD displays a full-sky virtual three-dimensional space screen centered on the user, and also displays a plurality of avatars in the virtual three-dimensional space.
  • With the technique of Patent Document 2, the same virtual three-dimensional space screen is displayed on every HMD, and the plurality of avatars are displayed identically on every HMD.
  • The image therefore does not reflect the real space in which the plurality of users wearing the HMDs actually exist, so there is a problem that the display is poor in realism and presence.
  • The present invention was made to solve this problem, and its object is to provide a more realistic virtual reality space image when avatar images corresponding to a plurality of viewers wearing HMDs in the real space are displayed in the virtual reality space image of each HMD.
  • To this end, a virtual reality space image is displayed based on spatial data relating to the virtual reality space, avatar data relating to avatars serving as the alter egos of at least the viewers other than oneself among a plurality of viewers existing in the real space, and relative position data representing the actual relative positional relationship of the plurality of viewers; the viewer's own position is set to a predetermined position in the screen, and avatar images are placed at positions that reflect, with reference to that predetermined position, the actual relative positional relationship of the plurality of viewers.
  • With this configuration, for each HMD, the avatar images are displayed at positions that reflect the relative positional relationship of the plurality of viewers, taking the wearer's own position as the reference.
  • As a result, the virtual reality space image displayed on each HMD shows the other viewers' avatar images at the same relative positions as in reality for each viewer, so a more realistic virtual reality space image can be provided.
  • FIG. 1 is a diagram illustrating an example of the overall configuration of a VR image display system to which the virtual reality image providing apparatus according to the present embodiment is applied.
  • The VR image display system includes a plurality of HMDs 100-1, 100-2, ... 100-n (hereinafter collectively referred to as the HMD 100) worn by a plurality of viewers, and an external computer 200.
  • the HMD 100 corresponds to the virtual reality image providing apparatus of the present embodiment.
  • the plurality of HMDs 100 and the external computer 200 are connected by a wired or wireless communication network 300.
  • the plurality of HMDs 100 each have a built-in computer for reproducing three-dimensional spatial data related to the virtual reality space.
  • the external computer 200 transmits the avatar data regarding the avatar to be displayed in the three-dimensional space to each HMD 100.
  • Each HMD 100 generates and displays a virtual reality space image in which each viewer's avatar image is superimposed in a three-dimensional space image.
  • the above functional blocks 11 to 16 can be configured by any of hardware, DSP (Digital Signal Processor), and software.
  • In practice, each of the functional blocks 11 to 16 is realized by a program, stored in a recording medium such as RAM, ROM, a hard disk, or semiconductor memory, operating on the CPU, RAM, ROM, and so on of the computer built into the HMD 100.
  • the spatial data storage unit 101 stores spatial data related to the virtual reality space.
  • The spatial data stored in the spatial data storage unit 101 is full-sky three-dimensional spatial data, so that the display can be changed according to the direction in which the viewer is looking.
  • This three-dimensional spatial data can be generated in advance using a known technique related to virtual reality.
  • the three-dimensional spatial data is generated by, for example, a personal computer installed with a dedicated editor and stored in the spatial data storage unit 101 of the HMD 100.
  • the spatial data acquisition unit 11 acquires 3D spatial data from the spatial data storage unit 101 and supplies the acquired 3D spatial data to the image generation unit 14.
  • the acquisition of the three-dimensional space data is executed when the virtual reality space image is displayed on the HMD 100.
  • The avatar data acquisition unit 12 acquires, from the external computer 200 via the communication network 300, avatar data relating to the avatars serving as the alter egos of the viewers other than oneself among the plurality of viewers existing in the real space, and stores the acquired avatar data in the avatar data storage unit 102.
  • The avatar data acquired by the avatar data acquisition unit 12 is three-dimensional image data whose display can be changed, by changing the viewpoint, to a state seen from a different direction.
  • The relative position data acquisition unit 13 acquires, from the external computer 200 via the communication network 300, relative position data representing the actual relative positional relationship of the plurality of viewers existing in the real space, and stores the acquired relative position data in the relative position data storage unit 103.
  • The positions where the plurality of viewers wearing the HMDs 100 exist are arbitrary, but it is assumed that the positional relationship among them is fixed rather than changing dynamically; for example, the seats in which the plurality of viewers wearing the HMDs 100 sit are determined in advance.
  • That is, the positions of the plurality of viewers existing in the real space are determined in advance, and relative position data representing their actual relative positional relationship is set in the external computer 200 in advance.
  • the relative position data acquisition unit 13 of each HMD 100 acquires the relative position data stored in the external computer 200.
  • FIG. 3 is a diagram for explaining an example of the relative position data.
  • FIG. 3(a) shows a state in which the relative positions of the other five HMDs 100-2 to 100-6 are expressed with reference to the position of the HMD 100-1 located at the left end of the arc.
  • a line connecting two HMDs 100 indicates a relative positional relationship.
  • The relative position data corresponding to these lines may consist of vector data relative to the position of the HMD 100-1, or may consist of coordinate data.
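As a rough illustration of how such relative position data might be held, each entry can be a two-dimensional offset vector from the reference HMD to another HMD. The patent leaves the exact representation open (vector data or coordinate data), so the dictionary layout and all coordinate values below are hypothetical:

```python
# Hypothetical sketch of relative position data referenced to HMD 100-1.
# All names and coordinate values are invented for illustration; they
# roughly trace an arc extending away from HMD 100-1.

# Offsets (x, y) in metres from HMD 100-1 to HMDs 100-2 .. 100-6.
RELATIVE_TO_HMD1 = {
    "HMD100-2": (1.0, 0.2),
    "HMD100-3": (2.0, 0.5),
    "HMD100-4": (3.0, 0.5),
    "HMD100-5": (4.0, 0.2),
    "HMD100-6": (5.0, 0.0),
}

def distance(offset):
    """Euclidean length of an offset vector (used later for perspective
    scaling and volume attenuation)."""
    x, y = offset
    return (x * x + y * y) ** 0.5

for name, offset in RELATIVE_TO_HMD1.items():
    print(name, round(distance(offset), 3))
```

The same information could equally be stored as absolute coordinates in a shared frame; only the reference point differs per HMD.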
  • In the case of FIG. 3(a), where the relative position data are referenced to the position of the HMD 100-1, this is the relative position data that the HMD 100-1 acquires.
  • The relative position data acquired by the HMD 100-2 expresses the relative positional relationships with the other five HMDs 100-1 and 100-3 to 100-6, with reference to the position of the HMD 100-2.
  • Similarly, the relative position data acquired by the HMD 100-3 expresses the relative positional relationships with the other five HMDs 100-1 to 100-2 and 100-4 to 100-6, with reference to the position of the HMD 100-3.
  • Likewise, the relative position data acquired by each of the other HMDs 100-4 to 100-6 expresses the relative positional relationships with the other five HMDs 100, with reference to its own position.
  • Each of the HMDs 100 -2 to 100 -6 acquires relative position data corresponding to itself from the external computer 200.
  • FIG. 3B shows another example of relative position data.
  • In this example, the relative position data represents the relative positional relationships between HMDs 100 adjacent to each other; that is, the relative position data is configured to represent the relative positional relationship between the HMD 100-1 and the HMD 100-2, between the HMD 100-2 and the HMD 100-3, between the HMD 100-3 and the HMD 100-4, between the HMD 100-4 and the HMD 100-5, and between the HMD 100-5 and the HMD 100-6.
  • In this case, the relative position data acquired by the six HMDs 100-1 to 100-6 are all the same; however, each HMD additionally needs data indicating which of the six HMDs 100-1 to 100-6 included in the relative position data it is. Each of the six HMDs 100-1 to 100-6 acquires the data indicating its own position from the external computer 200.
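With the adjacent-pair representation of FIG. 3(b), each HMD can recover every other HMD's position relative to itself by accumulating the chain of offsets and then re-referencing to its own index. The following sketch is hypothetical (function name, offsets, and indexing are invented for illustration, not taken from the patent):

```python
# Hypothetical sketch: derive each HMD's position relative to oneself
# from a chain of adjacent-pair offsets (FIG. 3(b) style).
# Offset from HMD i to HMD i+1, as (x, y) vectors; values invented.
ADJACENT_OFFSETS = [(1.0, 0.3), (1.0, 0.1), (1.0, -0.1), (1.0, -0.3), (1.0, -0.4)]

def positions_relative_to(self_index, offsets):
    """Return the positions of all HMDs relative to HMD self_index."""
    # First accumulate positions relative to the first HMD in the chain.
    positions = [(0.0, 0.0)]
    for dx, dy in offsets:
        px, py = positions[-1]
        positions.append((px + dx, py + dy))
    # Then re-reference everything to the chosen HMD.
    sx, sy = positions[self_index]
    return [(px - sx, py - sy) for px, py in positions]

# Example: positions as seen from the third HMD (index 2, i.e. HMD 100-3).
rel = positions_relative_to(2, ADJACENT_OFFSETS)
```

This shows why the shared pairwise data plus a single "which one am I" datum is sufficient for every HMD to reconstruct its own egocentric layout.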
  • It is not essential to synchronize the acquisition of the three-dimensional spatial data by the spatial data acquisition unit 11, the acquisition of the avatar data by the avatar data acquisition unit 12, and the acquisition of the relative position data by the relative position data acquisition unit 13. For example, the avatar data and the relative position data may be acquired from the external computer 200 in advance and stored in the avatar data storage unit 102 and the relative position data storage unit 103, and the spatial data acquisition unit 11 may acquire the three-dimensional spatial data from the spatial data storage unit 101 when the virtual reality space image is displayed on the HMD 100.
  • The direction detection unit 16 detects the direction in which the head of the viewer wearing the HMD 100 is facing. For this purpose, the HMD 100 is equipped with a gyro sensor and an acceleration sensor, and the direction detection unit 16 can detect the movement of the viewer's head based on the detection signals from these sensors.
  • The spatial data acquisition unit 11 changes the three-dimensional spatial data it reads and acquires from the spatial data storage unit 101 so that the three-dimensional space rendered on the display of the HMD 100 changes dynamically according to the movement of the viewer's head detected by the direction detection unit 16.
  • The spatial data reproduction unit 14A of the image generation unit 14 reproduces, for display, the three-dimensional spatial data acquired by the spatial data acquisition unit 11 in accordance with the movement of the viewer's head. As a result, when the viewer faces the front, a three-dimensional space image in which the front three-dimensional space spreads out is reproduced; when the viewer turns to the right, a three-dimensional space image in which the right three-dimensional space spreads out is reproduced; and when the viewer turns to the left, a three-dimensional space image in which the left three-dimensional space spreads out is reproduced.
  • The avatar image generation unit 14B changes the avatar images so that the orientations of the avatar images rendered on the display of the HMD 100 change dynamically according to the movement of the viewer's head detected by the direction detection unit 16. For example, when the direction detection unit 16 detects that the viewer has turned to face another viewer straight on, the avatar image generation unit 14B generates an avatar image of that viewer seen straight on. When the direction detection unit 16 detects that the viewer has turned so as to see another viewer obliquely, the avatar image generation unit 14B generates an avatar image of that viewer seen from an oblique direction.
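One way to decide between the "straight on" and "oblique" renderings is to compare the wearer's head yaw with the bearing of the other viewer's relative position. This is a minimal sketch under invented names and conventions (angles in degrees, positive yaw counter-clockwise), not the patent's actual implementation:

```python
import math

# Hypothetical sketch: signed angle between the wearer's gaze direction
# (head yaw from the gyro/acceleration sensors) and the direction of
# another viewer, given that viewer's (x, y) offset in the same frame.

def viewing_angle_deg(head_yaw_deg, other_offset):
    """Return the signed angle in degrees between the wearer's gaze and
    the other viewer's direction; 0 means seen straight on."""
    ox, oy = other_offset
    bearing = math.degrees(math.atan2(oy, ox))
    # Wrap the difference into (-180, 180].
    return (bearing - head_yaw_deg + 180.0) % 360.0 - 180.0

def is_seen_straight_on(head_yaw_deg, other_offset, tolerance_deg=10.0):
    """True when the other viewer is within a small cone around the gaze."""
    return abs(viewing_angle_deg(head_yaw_deg, other_offset)) <= tolerance_deg
```

The avatar image generation unit could then pick the rendering (frontal vs. oblique view of the avatar) from this angle; the 10-degree cone is an arbitrary illustrative threshold.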
  • FIG. 4 is a diagram showing the relationship among the three-dimensional space expressed by the three-dimensional spatial data, the relative positional relationship of the avatars (viewers) expressed by the relative position data, the orientation of the viewer's head detected by the direction detection unit 16, and the range of the virtual reality space image generated by the image generation unit 14.
  • FIGS. 4(a) and 4(b) show examples in which the viewer 111-1 (the viewer wearing the HMD 100-1) located at the left end of the arc is used as the reference.
  • FIG. 4(c) shows an example in which the viewer 111-3 (the viewer wearing the HMD 100-3) located in the middle of the arc is used as the reference.
  • FIG. 5 is a diagram showing examples of virtual reality space images generated by the image generation unit 14 corresponding to the states of FIGS. 4(a) to 4(c). That is, FIGS. 5(a) and 5(b) show virtual reality space images displayed on the HMD 100-1 worn by the viewer 111-1 located at the left end of the arc, and FIG. 5(c) shows a virtual reality space image displayed on the HMD 100-3 worn by the viewer 111-3 located at the center of the arc. In FIG. 5, for convenience of explanation, only the avatar images are shown and the three-dimensional space image is omitted.
  • The image generation unit 14 of the HMD 100-1, worn by the viewer 111-1 located at the left end of the arc, treats the viewer 111-1 as existing at the center position of the three-dimensional space 41 expressed by the three-dimensional spatial data, and treats the other viewers 111-2 to 111-6 (the viewers wearing the HMDs 100-2 to 100-6) as each existing at a position that reflects the actual relative positional relationship expressed by the relative position data.
  • In the state shown in FIG. 4(a), the image generation unit 14 sets the position of the viewer 111-1 as the center position of the lower end of the screen and generates the virtual reality space image so that the three-dimensional space spreads out in the direction indicated by the range 42. Therefore, as shown in FIG. 5(a), no avatar images of the other viewers 111-2 to 111-6 exist in the virtual reality space image generated by the image generation unit 14.
  • When the direction detection unit 16 detects that the viewer 111-1 is facing the direction of the arrow B (the direction 90 degrees to the left of the arrow A), the image generation unit 14 sets the position of the viewer 111-1 as the center position of the lower end of the screen and generates the virtual reality space image so that the three-dimensional space spreads out in the direction indicated by the range 43. Therefore, as shown in FIG. 5(b), avatar images of the other viewers 111-2 to 111-6 exist in the virtual reality space image generated by the image generation unit 14.
  • At this time, the avatar images of the viewers 111-2 to 111-6 are each displayed at positions reflecting the actual relative positional relationship, with reference to the center position of the lower end of the screen where the viewer 111-1 exists, and in sizes reflecting the perspective.
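The perspective-dependent sizing described above can be illustrated with a simple inverse-distance model. The constants and the 1/distance law below are assumptions chosen for illustration; the patent does not specify a particular projection:

```python
# Hypothetical sketch of perspective-scaled avatar display size: avatars
# farther from the wearer (whose position is the reference at the bottom
# of the screen) are drawn smaller. All constants are invented.

BASE_HEIGHT_PX = 400.0  # on-screen height of an avatar 1 m away
MIN_HEIGHT_PX = 20.0    # floor so distant avatars remain visible

def avatar_height_px(distance_m):
    """On-screen avatar height under a simple 1/distance perspective model."""
    if distance_m <= 0:
        return BASE_HEIGHT_PX
    return max(MIN_HEIGHT_PX, BASE_HEIGHT_PX / distance_m)
```

A real renderer would obtain the same effect for free from a 3-D projection matrix; the point here is only that the size follows from the relative distance encoded in the relative position data.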
  • In the state of FIG. 4(c), the virtual reality space image generated by the image generation unit 14 contains avatar images of the other viewers 111-1 to 111-2 and 111-5 to 111-6.
  • The avatar images of the viewers 111-1 to 111-2 and 111-5 to 111-6 are each displayed at positions reflecting the actual relative positional relationship, with reference to the center position of the lower end of the screen where the viewer 111-3 exists, and in sizes reflecting the perspective.
  • In this way, in the present embodiment, a virtual reality space image is displayed based on the spatial data relating to the virtual reality space, the avatar data relating to the avatars serving as the alter egos of the viewers other than oneself among the plurality of viewers existing in the real space, and the relative position data representing the actual relative positional relationship of the plurality of viewers; the viewer's own position is set to a predetermined position in the screen, and the avatar images exist at positions that reflect, with reference to that predetermined position, the actual relative positional relationship of the plurality of viewers.
  • Alternatively, the virtual reality image providing apparatus may be implemented in the external computer 200.
  • In that case, the HMD 100 includes the display control unit 15 and the direction detection unit 16, while the external computer 200 includes the spatial data storage unit 101, the avatar data storage unit 102, the relative position data storage unit 103, the spatial data acquisition unit 11, the avatar data acquisition unit 12, the relative position data acquisition unit 13, and the image generation unit 14.
  • In the embodiment described above, the example has been explained in which the avatar data acquisition unit 12 acquires the avatar data of the viewers other than oneself among the plurality of viewers existing in the real space, but the present invention is not limited to this.
  • For example, the avatar data acquisition unit 12 may acquire the avatar data of all the viewers existing in the real space, including oneself, and the image generation unit 14 may generate a virtual reality space image that also includes one's own avatar image.
  • Furthermore, the avatar images may be generated so as to reflect the directions in which the actual viewers' heads are facing.
  • For example, when the direction detection unit 16 detects that the viewer 111-6 at the right end of the arc is facing the direction in which the viewer 111-1 is present, the avatar image corresponding to the viewer 111-6 in the virtual reality space image displayed on the HMD 100-1 of the viewer 111-1 is made an image facing the viewer 111-1.
  • each HMD 100 notifies the external computer 200 of the direction of the viewer's head detected by the direction detection unit 16 via the communication network 300.
  • the external computer 200 notifies the direction of the viewer's head notified from each HMD 100 to the HMD 100 other than the notification source via the communication network 300.
  • the image generation unit 14 generates an avatar image of the other viewer in consideration of the head direction of the other viewer notified from the external computer 200.
  • FIG. 6 is a view showing a modification of the VR image display system to which the virtual reality image providing apparatus according to the present embodiment is applied.
  • FIG. 6 shows a configuration related to audio output in addition to the display of the virtual reality space image.
  • the VR image display system shown in FIG. 6 includes an external speaker 400 shared by a plurality of viewers and a speaker mounted on the HMD 100 (hereinafter referred to as a mounted speaker).
  • The mounted speaker included in the HMD 100 may be a headphone-type speaker configured to be positioned near both ears when a viewer wears the HMD 100, or may be a small speaker configured to be positioned somewhere other than near both ears.
  • the HMD 100 may include a headset having a microphone in addition to a headphone type speaker.
  • the external computer 200 reproduces sound in synchronization with the display of the virtual reality space image and outputs it from the external speaker 400.
  • the individual HMDs 100 worn by a plurality of viewers also reproduce sound in synchronization with the display of the virtual reality space image and output from the mounted speaker.
  • the main sound is output from the external speaker 400 and the auxiliary sound is output from the speaker mounted on the HMD 100.
  • a high volume main sound is output from the external speaker 400 while a low volume sub sound is output from the speaker mounted on the HMD 100.
  • the audio data related to the sub-audio output from the speaker mounted on the HMD 100 may be stored in advance in the spatial data storage unit 101 of the HMD 100, or the HMD 100 may acquire it from the external computer 200 during reproduction.
  • The audio data related to the sub audio output from the speaker mounted on the HMD 100 may be data obtained by transmitting a speaker's voice, input from the microphone of one HMD 100, to the other HMDs 100 via the external computer 200. In this way, a viewer can hear the main sound output from the external speaker 400 while also hearing the voices of other viewers as sub audio from the speaker mounted on the HMD 100.
  • In this case, depending on whether the HMD 100 that is the transmission source of the voice data is positioned relatively to the left side or to the right side as viewed from the HMD 100 that outputs the voice from its mounted speaker, the speaker voice may be output from only the left speaker or only the right speaker.
  • For example, suppose that the HMD 100-1 worn by the viewer 111-1 located at the left end of the arc shown in FIG. 4 receives voice data transmitted from the HMD 100-3 worn by the viewer 111-3 at the center of the arc, and outputs the speaker voice of the viewer 111-3 from its mounted speaker based on the received voice data. Since the HMD 100-1 of the viewer 111-1 is positioned on the right side as viewed from the central HMD 100-3 (conversely, the HMD 100-3 lies on the left side as viewed from the HMD 100-1), the speaker voice is output only from the left speaker.
  • Furthermore, the volume of the output speaker voice may be adjusted according to the relative distance between the HMD 100 that is the transmission source of the voice data and the HMD 100 that is the transmission destination. In this way, the speaker's voice is heard from the direction in which the speaker is actually present, at a volume corresponding to the actual relative distance, so the sense of reality can be further increased.
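The side selection and distance attenuation described above can be sketched as a small gain function. The function name, the 1/(1+d) attenuation curve, and the sign convention (x > 0 meaning the source is to the receiver's right) are all assumptions made for illustration:

```python
import math

# Hypothetical sketch of the audio behaviour described above: the speaker
# voice from a source HMD is routed to the left or right mounted speaker
# depending on which side the source lies on as viewed from the receiving
# HMD, at a volume that falls off with relative distance.

def speaker_gains(source_offset):
    """Return (left_gain, right_gain) for a voice source at the given
    (x, y) offset from the receiver; x > 0 means source on the right."""
    x, y = source_offset
    dist = math.hypot(x, y)
    volume = 1.0 / (1.0 + dist)          # simple distance attenuation
    if x > 0:                            # source on the right
        return (0.0, volume)
    if x < 0:                            # source on the left
        return (volume, 0.0)
    return (volume / 2.0, volume / 2.0)  # straight ahead or behind
```

A production system would more likely use smooth stereo panning and an inverse-square falloff, but this captures the patent's point: pan and volume follow directly from the relative position data already held by each HMD.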

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The purpose of the present invention is to provide more realistic images of a virtual reality space. This is achieved as follows. An image of the virtual reality space is displayed on the basis of three-dimensional spatial data relating to the virtual reality space, avatar data relating to avatars which serve as virtual selves for a plurality of participants existing in the real world, and relative position data representing the real-world positions of the plurality of participants in relation to one another. A given participant's own position is defined as a prescribed in-display position, and, with that position as a reference, avatar images are placed at positions reflecting the real-world relative positions of the plurality of participants. As a result, the image of the virtual reality space displayed on each HMD is such that each participant perceives the avatar images of the other participants at the same relative positions that those participants occupy in the real world.

Description

Virtual reality image providing apparatus and virtual reality image providing program
The present invention relates to a virtual reality image providing apparatus and a virtual reality image providing program, and is particularly suitable for a virtual reality image providing apparatus configured to display viewers' avatar images in a virtual reality space.
In recent years, the use of virtual reality (VR) technology, which lets a user experience a virtual world created inside a computer as if it were real, has been spreading. Among the many applications of VR, there are systems in which image data of a three-dimensional space is displayed on an HMD (head-mounted display), such as goggles worn by the user, so that the user can have various virtual experiences inside the three-dimensional space rendered on the HMD. Many systems of this type allow the user to view a full-sky image that follows the user's line of sight and head movement.
There are also systems that display a viewer, or a virtual person through whom the viewer interacts with a computer, in the three-dimensional space as an avatar image (see, for example, Patent Documents 1 and 2). Patent Document 1 discloses a configuration in which, when an avatar image is displayed in a three-dimensional space, the avatar is positioned based on position information of the eyeballs of a user wearing an HMD. Specifically, it describes positioning a person's head and moving the avatar based on the position information of two eyeballs, a first eyeball and a second eyeball.
Patent Document 2 discloses making it easy to determine the direction in which a user is looking in the virtual reality space by changing the orientation of the avatar's head. Specifically, it describes that a three-dimensional output server sequentially transmits screen data to be displayed to an HMD while receiving data on the rotation of the HMD in the yaw and pitch directions from the HMD, and, based on the received rotation data, changes the screen data and the avatar data of the user corresponding to the HMD.
Patent Document 1: JP 2014-86091 A
Patent Document 2: JP 2017-27477 A
In the technique described in Patent Document 2, the three-dimensional output server sequentially transmits screen data to a plurality of HMDs. The screen data to be transmitted includes avatar data, which relates to the avatar serving as the alter ego of each user wearing an HMD, and spatial data, which relates to the virtual three-dimensional space. Each user's HMD displays a full-sky virtual three-dimensional space screen centered on the user, and also displays a plurality of avatars in the virtual three-dimensional space.
With the technique described in Patent Document 2, however, the same virtual three-dimensional space screen is displayed on every HMD, and the plurality of avatars are displayed identically on every HMD. The resulting image does not reflect the real space in which the plurality of users wearing the HMDs exist, so there is a problem that the display is poor in realism and presence.
The present invention was made to solve this problem, and its object is to provide a more realistic virtual reality space image when avatar images corresponding to a plurality of viewers wearing HMDs in the real space are displayed in the virtual reality space image of each HMD.
To solve the above problem, in the present invention, a virtual reality space image is displayed based on spatial data relating to the virtual reality space, avatar data relating to avatars serving as the alter egos of at least the viewers other than oneself among a plurality of viewers existing in the real space, and relative position data representing the actual relative positional relationship of the plurality of viewers; the viewer's own position is set to a predetermined position in the screen, and avatar images are placed at positions that reflect, with reference to that predetermined position, the actual relative positional relationship of the plurality of viewers.
According to the present invention configured as described above, when avatar images corresponding to a plurality of viewers wearing HMDs in the real space are displayed in the virtual reality space image of each HMD, the avatar images are displayed, for each HMD, at positions that reflect the relative positional relationship of the plurality of viewers with the wearer's own position as the reference. As a result, the virtual reality space image displayed on each HMD shows the other viewers' avatar images at the same relative positions as in reality for each viewer, so a more realistic virtual reality space image can be provided.
FIG. 1 is a diagram showing an example of the overall configuration of a VR image display system to which the virtual reality image providing apparatus according to the present embodiment is applied. FIG. 2 is a block diagram showing a functional configuration example of the HMD (virtual reality image providing apparatus) according to the present embodiment. FIG. 3 is a diagram for explaining an example of the relative position data according to the present embodiment. FIG. 4 is a diagram showing the relationship among the three-dimensional space, the relative positional relationship of the avatars, and the range of the virtual reality space image. FIG. 5 is a diagram showing examples of virtual reality space images generated by the image generation unit of the present embodiment. FIG. 6 is a diagram showing a modification of the VR image display system to which the virtual reality image providing apparatus according to the present embodiment is applied.
 An embodiment of the present invention will now be described with reference to the drawings. FIG. 1 is a diagram showing an example of the overall configuration of a VR image display system to which the virtual reality image providing apparatus according to the present embodiment is applied.
 The VR image display system of the present embodiment includes a plurality of HMDs 100-1, 100-2, ... 100-n (hereinafter sometimes collectively referred to as the HMD 100) worn by a plurality of viewers, and an external computer 200. The HMD 100 corresponds to the virtual reality image providing apparatus of the present embodiment. The HMDs 100 and the external computer 200 are connected by a wired or wireless communication network 300.
 Each of the HMDs 100 has a built-in computer for reproducing three-dimensional space data representing the virtual reality space. The external computer 200 transmits avatar data, describing the avatars to be displayed in the three-dimensional space, to each HMD 100. Each HMD 100 generates and displays a virtual reality space image in which the viewers' avatar images are superimposed on the three-dimensional space image.
 FIG. 2 is a block diagram showing a functional configuration example of the HMD 100 according to the present embodiment. As shown in FIG. 2, the HMD 100 of the present embodiment includes, as its functional configuration, a spatial data acquisition unit 11, an avatar data acquisition unit 12, a relative position data acquisition unit 13, an image generation unit 14, a display control unit 15, and a direction detection unit 16. The image generation unit 14 includes a spatial data reproduction unit 14A and an avatar image generation unit 14B. The HMD 100 of the present embodiment also includes, as storage media, a spatial data storage unit 101, an avatar data storage unit 102, and a relative position data storage unit 103.
 Each of the functional blocks 11 to 16 can be implemented in hardware, as a DSP (Digital Signal Processor), or in software. When implemented in software, for example, each of the functional blocks 11 to 16 is actually realized by the CPU, RAM, ROM, and the like of the computer built into the HMD 100, through the operation of a program stored in a recording medium such as RAM, ROM, a hard disk, or semiconductor memory.
 The spatial data storage unit 101 stores spatial data representing the virtual reality space. The spatial data stored in the spatial data storage unit 101 is full-spherical three-dimensional space data whose displayed portion changes according to the direction in which the viewer is looking. This three-dimensional space data can be generated in advance using known virtual reality techniques; for example, it is generated on a personal computer on which a dedicated editor is installed and is then stored in the spatial data storage unit 101 of the HMD 100.
 The spatial data acquisition unit 11 acquires the three-dimensional space data from the spatial data storage unit 101 and supplies it to the image generation unit 14. The three-dimensional space data is acquired when the virtual reality space image is to be displayed on the HMD 100.
 The avatar data acquisition unit 12 acquires, from the external computer 200 via the communication network 300, avatar data describing the avatars, which represent the viewers other than the wearing viewer among the plurality of viewers present in the real space, and stores the acquired avatar data in the avatar data storage unit 102. The avatar data acquired by the avatar data acquisition unit 12 is three-dimensional image data whose displayed appearance can be changed to that seen from a different direction by changing the viewpoint.
 In the present embodiment, the number of HMDs 100 connected to the external computer 200 is arbitrary. The external computer 200 stores as many sets of avatar data as there are connected HMDs 100. The avatar data acquisition unit 12 acquires, from among the avatar data sets stored in the external computer 200, those corresponding to the viewers other than its own wearer. For example, the avatar data acquisition unit 12 of the HMD 100-1 acquires from the external computer 200 the avatar data of the viewers other than the viewer wearing the HMD 100-1.
 The relative position data acquisition unit 13 acquires, from the external computer 200 via the communication network 300, relative position data representing the actual relative positional relationship of the plurality of viewers present in the real space, and stores the acquired data in the relative position data storage unit 103. In the present embodiment, the positions of the viewers wearing the HMDs 100 are arbitrary, but their positional relationship is fixed rather than changing dynamically. A typical example is a case in which the seats of the viewers wearing the HMDs 100 are determined in advance.
 That is, in the present embodiment, the positions of the plurality of viewers in the real space are determined in advance, and relative position data representing their actual relative positional relationship is set in the external computer 200 beforehand. The relative position data acquisition unit 13 of each HMD 100 acquires the relative position data stored in the external computer 200.
 FIG. 3 is a diagram for explaining an example of the relative position data. Here, six viewers, each wearing one of six HMDs 100-1 to 100-6, are arranged in an arc. FIG. 3(a) shows the relative positional relationships of the other five HMDs 100-2 to 100-6 expressed with respect to the position of the HMD 100-1 at the left end of the arc. In FIG. 3(a), a line connecting two HMDs 100 indicates their relative positional relationship. The relative position data corresponding to such a line may consist of vector data referenced to the position of the HMD 100-1, or of coordinate data. When the relative position data is defined with respect to the position of the HMD 100-1 as in FIG. 3(a), that relative position data is acquired by the HMD 100-1.
 Likewise, the relative position data acquired by the HMD 100-2 expresses, with respect to the position of the HMD 100-2, the relative positional relationships of the other five HMDs 100-1 and 100-3 to 100-6. The relative position data acquired by the HMD 100-3 expresses, with respect to the position of the HMD 100-3, the relative positional relationships of the other five HMDs 100-1 to 100-2 and 100-4 to 100-6. The relative position data acquired by the other HMDs 100-4 to 100-6 similarly express the relative positional relationships of the other five HMDs 100 with respect to each HMD's own position. Each of the HMDs 100-2 to 100-6 acquires its own relative position data from the external computer 200.
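The per-HMD relative position data of FIG. 3(a) can be sketched as follows. This is an illustrative sketch only: the seat coordinates, the identifiers, and the use of 2-D vectors are assumptions, since the patent allows either vector data or coordinate data without fixing a coordinate system.

```python
# Hypothetical seat layout for six HMD wearers arranged in an arc (Fig. 3).
# The coordinates are illustrative assumptions, not values from the patent.
seats = {
    "HMD-1": (0.0, 0.0),
    "HMD-2": (1.0, 0.6),
    "HMD-3": (2.0, 1.0),
    "HMD-4": (3.0, 1.0),
    "HMD-5": (4.0, 0.6),
    "HMD-6": (5.0, 0.0),
}

def relative_position_data(own_id, seats):
    """Vector from own position to every other HMD, as in Fig. 3(a),
    where each HMD receives data referenced to its own position."""
    ox, oy = seats[own_id]
    return {other: (x - ox, y - oy)
            for other, (x, y) in seats.items()
            if other != own_id}

# The data set delivered to HMD-1 describes the five other wearers.
data_for_hmd1 = relative_position_data("HMD-1", seats)
```

Each HMD would fetch only the dictionary keyed to itself, matching the description that the HMDs 100-2 to 100-6 each acquire their own data.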
 FIG. 3(b) shows another example of relative position data. In the example of FIG. 3(b), the relative position data expresses the relative positional relationships between adjacent HMDs 100: the relative positional relationship between the HMD 100-1 and the HMD 100-2, between the HMD 100-2 and the HMD 100-3, between the HMD 100-3 and the HMD 100-4, between the HMD 100-4 and the HMD 100-5, and between the HMD 100-5 and the HMD 100-6.
 When the relative position data is structured as in FIG. 3(b), the relative position data acquired by the six HMDs 100-1 to 100-6 is identical. However, each HMD additionally needs data indicating which of the six positions 100-1 to 100-6 contained in the relative position data corresponds to itself. Each of the six HMDs 100-1 to 100-6 acquires its own such position-identifying data from the external computer 200.
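From the shared adjacent-pair data of FIG. 3(b) plus its own slot index, each HMD can recover the same own-referenced view as in FIG. 3(a). The chain values and function names below are illustrative assumptions; the patent does not prescribe how the shared data is combined.

```python
def positions_from_adjacency(adjacent_vecs):
    """Accumulate the shared chain of adjacent-pair vectors of Fig. 3(b)
    (HMD i -> HMD i+1) into positions expressed relative to HMD-1."""
    pos = [(0.0, 0.0)]
    for dx, dy in adjacent_vecs:
        x, y = pos[-1]
        pos.append((x + dx, y + dy))
    return pos

def rebase(positions, own_index):
    """Re-express all positions relative to the HMD's own slot; the slot
    index is the extra datum each HMD fetches from the external computer."""
    ox, oy = positions[own_index]
    return [(x - ox, y - oy) for x, y in positions]

# Hypothetical adjacent-pair vectors for the six-seat arc.
chain = [(1.0, 0.6), (1.0, 0.4), (1.0, 0.0), (1.0, -0.4), (1.0, -0.6)]
abs_pos = positions_from_adjacency(chain)
rel_to_hmd3 = rebase(abs_pos, 2)  # HMD-3 occupies slot index 2
```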
 Note that it is not essential for the acquisition of the three-dimensional space data by the spatial data acquisition unit 11, the acquisition of the avatar data by the avatar data acquisition unit 12, and the acquisition of the relative position data by the relative position data acquisition unit 13 to be performed in synchronization. For example, the avatar data and the relative position data may be acquired from the external computer 200 in advance and stored in the avatar data storage unit 102 and the relative position data storage unit 103, and the spatial data acquisition unit 11 may then acquire the three-dimensional space data from the spatial data storage unit 101 when the virtual reality space image is to be displayed on the HMD 100.
 On the basis of the three-dimensional space data acquired by the spatial data acquisition unit 11, the avatar data acquired by the avatar data acquisition unit 12, and the relative position data acquired by the relative position data acquisition unit 13, the image generation unit 14 generates a virtual reality space image in which the wearing viewer's own position is set to a predetermined position in the screen and the avatar images are placed, with that predetermined position as the reference, at positions reflecting the actual relative positional relationship of the plurality of viewers. The display control unit 15 displays the virtual reality space image generated by the image generation unit 14.
 The direction detection unit 16 detects the direction in which the head of the viewer wearing the HMD 100 is facing. Specifically, the HMD 100 is equipped with a gyro sensor and an acceleration sensor, and the direction detection unit 16 can detect the movement of the viewer's head on the basis of the detection signals from these sensors.
 The spatial data acquisition unit 11 changes the three-dimensional space data it acquires from the spatial data storage unit 101 so that the three-dimensional space realized on the display of the HMD 100 changes dynamically in accordance with the movement of the viewer's head detected by the direction detection unit 16. The spatial data reproduction unit 14A of the image generation unit 14 reproduces, for display, the three-dimensional space data acquired by the spatial data acquisition unit 11 in accordance with the movement of the viewer's head. As a result, a three-dimensional space image showing the space ahead is reproduced when the viewer faces forward, one showing the space on the right is reproduced when the viewer turns right, and one showing the space on the left is reproduced when the viewer turns left.
 The avatar image generation unit 14B generates, on the basis of the avatar data acquired by the avatar data acquisition unit 12, the avatar images of the viewers other than the wearing viewer. In doing so, as described above, the avatar image generation unit 14B sets the wearing viewer's own position to a predetermined position in the screen (for example, the center of the lower edge of the screen) and generates the avatar images so that, with that position as the reference, each avatar is placed at a position reflecting the actual relative positional relationship of the plurality of viewers. The avatar image generation unit 14B also generates the avatar images at sizes reflecting the perspective implied by the viewers' actual relative positional relationship.
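The perspective scaling performed by the avatar image generation unit 14B can be illustrated with a minimal sketch. The inverse-distance rule and all names here are assumptions made for illustration; the patent states only that avatar size reflects perspective, not a particular formula.

```python
import math

def avatar_placement(rel_vec, base_size=1.0, min_dist=0.1):
    """Given the relative vector from the wearing viewer to another viewer,
    return the avatar's distance and a perspective scale factor.
    Farther avatars are drawn smaller (simple inverse-distance model)."""
    dist = math.hypot(rel_vec[0], rel_vec[1])
    scale = base_size / max(dist, min_dist)  # clamp to avoid division by zero
    return dist, scale

near = avatar_placement((1.0, 0.0))  # a neighbouring viewer
far = avatar_placement((4.0, 0.0))   # a viewer at the far end of the arc
```

A renderer would then draw each avatar at the screen position corresponding to its relative vector, scaled by the returned factor.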
 The avatar image generation unit 14B also changes the avatar images it generates so that the orientation of each avatar image realized on the display of the HMD 100 changes dynamically in accordance with the movement of the viewer's head detected by the direction detection unit 16. For example, when the direction detection unit 16 detects that the viewer has turned to face another viewer head-on, the avatar image generation unit 14B generates an avatar image of that viewer as seen straight on. When the direction detection unit 16 detects that the viewer has turned so as to see another viewer at an angle, the avatar image generation unit 14B generates an avatar image of that viewer as seen obliquely.
 FIG. 4 shows the relationship between the three-dimensional space expressed by the three-dimensional space data, the relative positional relationship of the avatars (viewers) expressed by the relative position data, and the range of the virtual reality space image generated by the image generation unit 14 according to the head orientation (gaze direction) of the viewer detected by the direction detection unit 16. FIGS. 4(a) and 4(b) show examples referenced to the viewer 111-1 (the viewer wearing the HMD 100-1) at the left end of the arc, and FIG. 4(c) shows an example referenced to the viewer 111-3 (the viewer wearing the HMD 100-3) in the middle of the arc.
 FIG. 5 shows examples of virtual reality space images generated by the image generation unit 14 corresponding to the states of FIGS. 4(a) to 4(c). That is, FIGS. 5(a) and 5(b) show the virtual reality space images displayed on the HMD 100-1 worn by the viewer 111-1 at the left end of the arc, and FIG. 5(c) shows the virtual reality space image displayed on the HMD 100-3 worn by the viewer 111-3 in the middle of the arc. For convenience of explanation, FIG. 5 shows only the avatar images and omits the three-dimensional space image.
 As shown in FIG. 4(a), the image generation unit 14 of the HMD 100-1 worn by the viewer 111-1 at the left end of the arc treats the viewer 111-1 as being at the center position of the three-dimensional space 41 expressed by the three-dimensional space data. It also treats the other viewers 111-2 to 111-6 (the viewers wearing the HMDs 100-2 to 100-6, respectively) as being at positions, referenced to the position of the viewer 111-1, that reflect the actual relative positional relationship indicated by the relative position data.
 Here, when the direction detection unit 16 detects that the viewer 111-1 is facing the direction of arrow A, the image generation unit 14 sets the position of the viewer 111-1 to the center of the lower edge of the screen and generates a virtual reality space image in which the three-dimensional space extends in the direction indicated by the range 42. Consequently, as shown in FIG. 5(a), no avatar images of the other viewers 111-2 to 111-6 appear in the virtual reality space image generated by the image generation unit 14.
 In contrast, as shown in FIG. 4(b), when the direction detection unit 16 detects that the viewer 111-1 is facing the direction of arrow B (90 degrees to the left of arrow A), the image generation unit 14 sets the position of the viewer 111-1 to the center of the lower edge of the screen and generates a virtual reality space image in which the three-dimensional space extends in the direction indicated by the range 43. Consequently, as shown in FIG. 5(b), the avatar images of the other viewers 111-2 to 111-6 appear in the virtual reality space image generated by the image generation unit 14. The avatar images of the viewers 111-2 to 111-6 are displayed at positions reflecting the actual relative positional relationship, and at sizes reflecting the perspective, with the center of the lower edge of the screen, where the viewer 111-1 is located, as the reference.
 Similarly, as shown in FIG. 4(c), the image generation unit 14 of the HMD 100-3 worn by the viewer 111-3 in the middle of the arc treats the viewer 111-3 as being at the center position of the three-dimensional space 41 expressed by the three-dimensional space data. It also treats the other viewers 111-1 to 111-2 and 111-4 to 111-6 as being at positions, referenced to the position of the viewer 111-3, that reflect the actual relative positional relationship indicated by the relative position data.
 Here, when the direction detection unit 16 detects that the viewer 111-3 is facing the direction of arrow C, the image generation unit 14 sets the position of the viewer 111-3 to the center of the lower edge of the screen and generates a virtual reality space image in which the three-dimensional space extends in the direction indicated by the range 44. Consequently, as shown in FIG. 5(c), the avatar images of the other viewers 111-1 to 111-2 and 111-5 to 111-6 appear in the virtual reality space image generated by the image generation unit 14, displayed at positions reflecting the actual relative positional relationship, and at sizes reflecting the perspective, with the center of the lower edge of the screen, where the viewer 111-3 is located, as the reference.
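The behaviour of FIGS. 4 and 5, where no avatars appear for one head direction and several appear for another, amounts to a horizontal field-of-view test on the relative directions. The following sketch assumes a 2-D layout and an illustrative field-of-view angle; neither value is specified in the patent.

```python
import math

def visible_avatars(heading_deg, others, fov_deg=100.0):
    """Return the viewers whose relative direction from the wearing viewer
    (the origin) falls within the horizontal field of view centred on the
    detected head direction. Angles in degrees, counter-clockwise from +x."""
    visible = []
    for name, (dx, dy) in others.items():
        ang = math.degrees(math.atan2(dy, dx))
        diff = (ang - heading_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180]
        if abs(diff) <= fov_deg / 2.0:
            visible.append(name)
    return visible

# Other viewers laid out to the wearing viewer's right (+x), as for 111-1.
others = {"111-2": (1.0, 0.6), "111-6": (5.0, 0.0)}
facing_away = visible_avatars(90.0, others)   # cf. arrow A: none in view
facing_right = visible_avatars(0.0, others)   # cf. arrow B: all in view
```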
 As described above in detail, in the present embodiment, on the basis of the spatial data representing the virtual reality space, the avatar data describing the avatars of the viewers other than the wearing viewer among the plurality of viewers present in the real space, and the relative position data representing the actual relative positional relationship of the plurality of viewers, a virtual reality space image is displayed in which the wearing viewer's own position is set to a predetermined position in the screen and the avatar images are placed, with that predetermined position as the reference, at positions reflecting the actual relative positional relationship of the plurality of viewers.
 According to the present embodiment configured in this way, for each HMD 100 worn by one of the plurality of viewers present in the real space, the avatar images are displayed at positions that reflect the relative positional relationship of the viewers, with the wearing viewer's own position as the reference. As a result, the virtual reality space image displayed on each HMD 100 shows the other viewers' avatar images at the same relative positions as in reality, providing a more realistic virtual reality space image.
 Although the above embodiment described an example in which the three-dimensional space data is stored in advance in the spatial data storage unit 101 of the HMD 100, the present invention is not limited to this. For example, the HMD 100 may also acquire the three-dimensional space data from the external computer 200. In this case as well, it is not essential to acquire the three-dimensional space data, the avatar data, and the relative position data in synchronization. For example, the avatar data and the relative position data may be acquired in advance, and the three-dimensional space data may be reproduced while being acquired from the external computer 200 when the virtual reality space image is displayed on the HMD 100.
 The above embodiment also described an example in which the virtual reality image providing apparatus of the present embodiment is implemented in the HMD 100, but the present invention is not limited to this. For example, the virtual reality image providing apparatus of the present embodiment may be implemented in the external computer 200. In this case, the HMD 100 includes the display control unit 15 and the direction detection unit 16, while the external computer 200 includes the spatial data storage unit 101, the avatar data storage unit 102, the relative position data storage unit 103, the spatial data acquisition unit 11, the avatar data acquisition unit 12, the relative position data acquisition unit 13, and the image generation unit 14.
 In that configuration, the spatial data acquisition unit 11, the avatar data acquisition unit 12, and the relative position data acquisition unit 13 of the external computer 200 acquire the three-dimensional space data, the avatar data, and the relative position data from the storage units 101 to 103, respectively, and the image generation unit 14 generates the virtual reality space image and transmits it to the HMD 100. The display control unit 15 of the HMD 100 displays the virtual reality space image transmitted from the external computer 200. The HMD 100 also notifies the external computer 200, via the communication network 300, of the viewer's head direction detected by the direction detection unit 16, and the image generation unit 14 of the external computer 200 changes the virtual reality space image it generates according to the notified head direction.
 The above embodiment also described an example in which the avatar data acquisition unit 12 acquires the avatar data of the viewers other than the wearing viewer among the plurality of viewers present in the real space, but the present invention is not limited to this. For example, the avatar data acquisition unit 12 may acquire the avatar data of all viewers present in the real space, including the wearing viewer, and the image generation unit 14 may generate the virtual reality space image including the wearing viewer's own avatar image.
 In the above embodiment, in addition to displaying the avatar images of the plurality of viewers at positions reflecting the actual relative positional relationship, the avatar images may also be generated to reflect the direction in which each real viewer's head is facing. For example, in the example of FIG. 4(b), when the direction detection unit 16 detects that the viewer 111-6 at the right end of the arc is facing right, toward the viewer 111-1, the avatar image corresponding to the viewer 111-6 in the virtual reality space image displayed on the HMD 100-1 of the viewer 111-1 is rendered facing the viewer 111-1.
 To realize this, each HMD 100 notifies the external computer 200, via the communication network 300, of the viewer's head direction detected by its direction detection unit 16. The external computer 200 forwards the head direction reported by each HMD 100 to the HMDs 100 other than the reporting one, via the communication network 300. The image generation unit 14 then generates each other viewer's avatar image taking into account that viewer's head direction as notified by the external computer 200.
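Orienting a remote viewer's avatar from that viewer's reported head direction can be sketched as an angle difference: when the reported heading points back toward the wearing viewer, the avatar is rendered face-on. The angle convention and function name are assumptions for illustration only.

```python
import math

def avatar_facing_angle(reported_heading_deg, other_pos, own_pos=(0.0, 0.0)):
    """Angle between the other viewer's reported head direction and the
    direction from that viewer toward the wearing viewer.
    0 degrees -> the avatar should be rendered facing us head-on."""
    toward_us = math.degrees(math.atan2(own_pos[1] - other_pos[1],
                                        own_pos[0] - other_pos[0]))
    return (reported_heading_deg - toward_us + 180.0) % 360.0 - 180.0

# A viewer at +x reports a heading of 180 degrees, i.e. toward the wearing
# viewer at the origin (cf. viewer 111-6 turning toward viewer 111-1).
angle = avatar_facing_angle(180.0, (5.0, 0.0))
```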
 The above embodiment discussed only the display of the virtual reality space image (including the avatar images) on the HMD 100, but it goes without saying that audio can be output from speakers along with the display of the image. In this case, the audio may be output from an external speaker shared by the plurality of viewers, or from a speaker mounted on the HMD 100.
 FIG. 6 shows a modification of the VR image display system to which the virtual reality image providing apparatus according to the present embodiment is applied, including the configuration for audio output in addition to the display of the virtual reality space image. The VR image display system shown in FIG. 6 includes external speakers 400 shared by the plurality of viewers and speakers mounted on the HMDs 100 (hereinafter referred to as on-board speakers).
 Here, as shown in FIG. 6(b), an example is shown in which a plurality of external speakers 400-1 to 400-4 are provided at multiple locations within a single physical space (for example, a room) where multiple viewers are present. The on-board speaker of the HMD 100 may be a headphone-type speaker positioned near both ears when the viewer wears the HMD 100, or a small speaker positioned elsewhere. As shown in FIG. 6(b), the HMD 100 may also include a headset having a microphone in addition to the headphone-type speaker.
 The external computer 200 reproduces audio in synchronization with the display of the virtual reality space image and outputs it from the external speakers 400. The individual HMDs 100 worn by the viewers likewise reproduce audio in synchronization with the image display and output it from their on-board speakers. For example, the main audio may be output from the external speakers 400 while sub-audio is output from the on-board speaker of each HMD 100. As one staging effect, the main audio could be output at high volume from the external speakers 400 while low-volume sub-audio is output from the on-board speakers.
 The audio data for the sub-audio output from the on-board speaker of the HMD 100 may be stored in advance in the spatial data storage unit 101 of the HMD 100, or the HMD 100 may acquire it from the external computer 200 at playback time. Alternatively, the sub-audio data may be a viewer's speech captured by the microphone of one HMD 100 and transmitted to the other HMDs 100 via the external computer 200. In this way, a viewer can hear the main audio output from the external speakers 400 while hearing other viewers' speech as sub-audio from the on-board speaker of his or her own HMD 100.
 When speech captured by the microphone of one HMD 100 is transmitted to another HMD 100 and output from its headphone-type on-board speaker, the speech may be output from either the left speaker or the right speaker depending on whether the transmitting HMD 100 is located to the relative left or right as seen from the receiving HMD 100.
 For example, suppose the on-board speaker of the HMD 100-3 worn by the viewer 111-3 at the center of the arc shown in FIG. 4(c) outputs the speech of the viewer 111-1, based on audio data transmitted from the HMD 100-1 worn by the viewer 111-1 at the left end of the arc. Since the HMD 100-1 of the viewer 111-1 is located to the relative right as seen from the central HMD 100-3, the speech is output from the right speaker only.
 Likewise, when the on-board speaker of the HMD 100-3 worn by the viewer 111-3 at the center of the arc shown in FIG. 4(c) outputs the speech of the viewer 111-6, based on audio data transmitted from the HMD 100-6 worn by the viewer 111-6 at the right end of the arc, the speech is output from the left speaker only, since the HMD 100-6 of the viewer 111-6 is located to the relative left as seen from the central HMD 100-3.
 In this way, a sense of presence can be created, as if the speech were heard from the direction in which the speaker is actually located. Furthermore, the volume of the output speech may be adjusted according to the relative distance between the HMD 100 that transmits the audio data and the HMD 100 that receives it. The speech then seems to come from the speaker's actual direction at a volume matching the actual relative distance, further enhancing the sense of presence.
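The channel-selection and distance-attenuation rules described above can be sketched as follows. The coordinate convention, the bearing computation, and the 1/(1+d) gain law are illustrative assumptions; the disclosure only requires that the channel follow the source's relative left/right position and that the volume follow the relative distance.

```python
import math


def route_speech(src_pos, dst_pos, dst_facing_deg=0.0):
    """Decide which earpiece plays a remote viewer's speech and at what gain.

    Positions are (x, y) floats in the shared room. The channel is chosen by
    whether the source lies to the listener's relative left or right, and the
    gain falls off with the actual distance between the two HMDs.
    """
    dx = src_pos[0] - dst_pos[0]
    dy = src_pos[1] - dst_pos[1]
    # Bearing of the source as seen from the listener, clockwise from the
    # listener's facing direction (here, facing +y when dst_facing_deg is 0).
    bearing = math.degrees(math.atan2(dx, dy)) - dst_facing_deg
    bearing = (bearing + 180.0) % 360.0 - 180.0  # normalize to [-180, 180)
    channel = "right" if bearing >= 0 else "left"
    distance = math.hypot(dx, dy)
    gain = 1.0 / (1.0 + distance)  # closer speakers sound louder
    return channel, gain
```

For a listener at the origin facing the arc's interior, a source at (3, 1) yields the right channel and a source at (-3, 1) the left, matching the arc example above; a source one meter away gets a higher gain than one three meters away.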
 The embodiments described above are merely examples of implementation of the present invention, and the technical scope of the present invention should not be construed as limited by them. That is, the present invention can be implemented in various forms without departing from its gist or principal features.
 11 Spatial data acquisition unit
 12 Avatar data acquisition unit
 13 Relative position data acquisition unit
 14 Image generation unit
 14A Spatial data reproduction unit
 14B Avatar image generation unit
 15 Display control unit
 16 Direction detection unit
 100 HMD
 200 External computer

Claims (5)

  1.  A virtual reality image providing apparatus that provides a virtual reality space image to be displayed on a head mounted display, comprising:
     a spatial data acquisition unit that acquires spatial data relating to a virtual reality space;
     an avatar data acquisition unit that acquires avatar data relating to avatars each serving as the alter ego of at least a viewer other than oneself among a plurality of viewers present in a real space;
     a relative position data acquisition unit that acquires relative position data representing the actual relative positional relationship of the plurality of viewers; and
     an image generation unit that, based on the spatial data, the avatar data, and the relative position data, generates a virtual reality space image in which one's own position is set to a predetermined position within the screen and the avatar images are placed, with that predetermined position as a reference, at positions reflecting the actual relative positional relationship of the plurality of viewers.
  2.  The virtual reality image providing apparatus according to claim 1, wherein the positions of the plurality of viewers present in the real space are determined in advance, relative position data representing the actual relative positional relationship of the plurality of viewers is set in advance, and
     the relative position data acquisition unit acquires the preset relative position data.
  3.  The virtual reality image providing apparatus according to claim 1 or 2, wherein the spatial data acquisition unit acquires the spatial data from a storage medium provided in the head mounted display, and
     the avatar data acquisition unit and the relative position data acquisition unit acquire the avatar data and the relative position data, respectively, from an external computer to which the head mounted display is connected.
  4.  The virtual reality image providing apparatus according to claim 1 or 2, wherein the spatial data acquisition unit, the avatar data acquisition unit, and the relative position data acquisition unit acquire the spatial data, the avatar data, and the relative position data, respectively, from an external computer to which the head mounted display is connected.
  5.  A virtual reality image providing program for providing a virtual reality space image to be displayed on a head mounted display, the program causing a computer to function as:
     spatial data acquisition means for acquiring spatial data relating to a virtual reality space;
     avatar data acquisition means for acquiring avatar data relating to avatars each serving as the alter ego of at least a viewer other than oneself among a plurality of viewers present in a real space;
     relative position data acquisition means for acquiring relative position data representing the actual relative positional relationship of the plurality of viewers; and
     image generation means for generating, based on the spatial data, the avatar data, and the relative position data, a virtual reality space image in which one's own position is set to a predetermined position within the screen and the avatar images are placed, with that predetermined position as a reference, at positions reflecting the actual relative positional relationship of the plurality of viewers.
PCT/JP2018/015260 2017-04-28 2018-04-11 Virtual reality image provision device and virtual reality image provision program WO2018198777A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2018563937A JP6506486B2 (en) 2017-04-28 2018-04-11 Apparatus for providing virtual reality image and program for providing virtual reality image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-090112 2017-04-28
JP2017090112 2017-04-28

Publications (1)

Publication Number Publication Date
WO2018198777A1 true WO2018198777A1 (en) 2018-11-01

Family

ID=63918228

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/015260 WO2018198777A1 (en) 2017-04-28 2018-04-11 Virtual reality image provision device and virtual reality image provision program

Country Status (2)

Country Link
JP (1) JP6506486B2 (en)
WO (1) WO2018198777A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1055257A (en) * 1996-08-09 1998-02-24 Nippon Telegr & Teleph Corp <Ntt> Three-dimensional virtual space display method
JPH11252523A (en) * 1998-03-05 1999-09-17 Nippon Telegr & Teleph Corp <Ntt> Generator for virtual space image and virtual space system
JP2006025281A (en) * 2004-07-09 2006-01-26 Hitachi Ltd Information source selection system, and method
JP2017062720A (en) * 2015-09-25 2017-03-30 キヤノンマーケティングジャパン株式会社 Information processing device, information processing system, control method thereof, and program


Also Published As

Publication number Publication date
JP6506486B2 (en) 2019-04-24
JPWO2018198777A1 (en) 2019-06-27


Legal Events

Code ENP: Entry into the national phase
Ref document number: 2018563937; Country of ref document: JP; Kind code of ref document: A

Code 121: Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 18789874; Country of ref document: EP; Kind code of ref document: A1

Code NENP: Non-entry into the national phase
Ref country code: DE

Code 122: Ep: pct application non-entry in european phase
Ref document number: 18789874; Country of ref document: EP; Kind code of ref document: A1