WO2022254585A1 - Virtual space experience system - Google Patents

Virtual space experience system

Info

Publication number
WO2022254585A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
avatar
coordinates
virtual space
trigger event
Application number
PCT/JP2021/020889
Other languages
French (fr)
Japanese (ja)
Inventor
良哉 尾小山
Original Assignee
株式会社Abal
Application filed by 株式会社Abal
Priority to PCT/JP2021/020889
Priority to JP2021575439A (patent JP7055527B1)
Publication of WO2022254585A1

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/55: Controlling game characters or game objects based on the game progress
    • A63F 13/56: Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics

Definitions

  • The present invention relates to a virtual space experience system that allows a user to recognize that he or she exists in a virtual space displayed as an image.
  • Conventionally, there is a virtual space experience system in which a virtual space is generated by a server or the like, and a user recognizes an image of the virtual space through a head-mounted display (hereinafter sometimes referred to as "HMD") and is thereby made to recognize that he or she exists in the virtual space.
  • This type of virtual space experience system recognizes the user's motion in the real space (for example, movement of coordinates, change of posture including orientation, etc.) with a motion capture device or the like, and an avatar corresponding to the user operates in the virtual space according to the recognized motion (see, for example, Patent Document 1).
  • The present invention has been made in view of the above points, and its object is to provide a virtual space experience system that enables a plurality of users experiencing different environments in a virtual space to experience the same environment.
  • The virtual space experience system of the present invention comprises: a virtual space generation unit that generates a virtual space corresponding to the real space in which a first user and a second user exist; an avatar generation unit that generates, in the virtual space, a first avatar corresponding to the first user and a second avatar corresponding to the second user; a user coordinate recognition unit that recognizes the coordinates of the first user and the coordinates of the second user in the real space; an avatar coordinate determination unit that determines the coordinates of the first avatar in the virtual space based on the coordinates of the first user and determines the coordinates of the second avatar in the virtual space based on the coordinates of the second user; a trigger event recognition unit that recognizes the occurrence of a first trigger event; and a virtual space image display that allows the first user and the second user to recognize an image of the virtual space.
  • The system is characterized in that, when the occurrence of the first trigger event is recognized, the avatar coordinate determination unit moves the coordinates of one of the first avatar and the second avatar to coordinates that match or are adjacent to the coordinates of the other.
  • Here, the "image of the virtual space" includes images of the background of the virtual space, images of other avatars, images of objects that exist only in the virtual space, images of objects in the virtual space that correspond to objects in the real space, and the like.
  • In this way, in this virtual space experience system, when the occurrence of the first trigger event is recognized, the avatar coordinate determination unit moves the coordinates of one of the first avatar and the second avatar to coordinates that match or are adjacent to the coordinates of the other.
  • Here, the images of the virtual space recognized by the first user and the second user are determined based on the coordinates of the first avatar and the second avatar, respectively. Therefore, when the coordinates of the avatars match or are adjacent to each other, the images of the virtual space recognized by the users corresponding to those avatars are also matching or adjacent (that is, similar) images.
  • As a result, after the occurrence of the first trigger event is recognized, the images of the virtual space recognized by the users can also be made matching or adjacent images.
  • Therefore, according to this virtual space experience system, the first user and the second user, who until then experienced different environments in the virtual space, can now experience substantially the same environment.
  • Alternatively, the virtual space experience system of the present invention comprises: a virtual space generation unit that generates a virtual space corresponding to the real space in which a first user and a second user exist; an avatar generation unit that generates, in the virtual space, a first avatar corresponding to the first user and a second avatar corresponding to the second user; a user coordinate recognition unit that recognizes the coordinates of the first user and the coordinates of the second user in the real space; an avatar coordinate determination unit that determines the coordinates of the first avatar in the virtual space based on the coordinates of the first user and determines the coordinates of the second avatar in the virtual space based on the coordinates of the second user; a trigger event recognition unit that recognizes the occurrence of a first trigger event; a virtual space image determination unit that determines the image of the virtual space to be recognized by each user; and a virtual space image display that allows the first user and the second user to recognize an image of the virtual space.
  • This system is characterized in that, when the occurrence of the first trigger event is recognized, the virtual space image determination unit makes the image of the virtual space to be recognized by one of the first user and the second user the image that would be recognized if the coordinates of the corresponding one of the first avatar and the second avatar were moved to coordinates that match or are adjacent to the coordinates of the other.
  • In this way, in this virtual space experience system, when the occurrence of the first trigger event is recognized, the virtual space image determination unit makes the image of the virtual space to be recognized by one of the first user and the second user the image that would arise if the coordinates of the corresponding avatar were moved to coordinates matching or adjacent to the coordinates of the other avatar.
  • That is, the coordinates serving as the reference for determining the image of the virtual space to be recognized by the one user are changed from the coordinates of the avatar corresponding to that user to coordinates that match or are adjacent to the coordinates of the avatar corresponding to the other user.
  • As a result, the one user comes to recognize an image of the virtual space determined based on coordinates that match or are adjacent to the coordinates of the avatar corresponding to the other user.
  • Therefore, also in this virtual space experience system of the present invention, after the occurrence of the first trigger event is recognized, the one user recognizes substantially the same image as the other user, so the first user and the second user, who until then experienced different environments in the virtual space, can now experience substantially the same environment.
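  • To make the two aspects above concrete, the following is a minimal sketch, not taken from the patent; all names (Vec3, Avatar, ADJACENT_OFFSET) are hypothetical. It contrasts the first aspect, which moves one avatar's coordinates, with the second aspect, which changes only the reference coordinates used to determine the image recognized by one user.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Vec3:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

    def __add__(self, other: "Vec3") -> "Vec3":
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

@dataclass
class Avatar:
    coords: Vec3 = field(default_factory=Vec3)
    # When set, the user's image is determined from these coordinates
    # instead of the avatar's own coordinates (second aspect).
    view_origin: Optional[Vec3] = None

# Assumed side-by-side offset used for "adjacent" placement.
ADJACENT_OFFSET = Vec3(0.5, 0.0, 0.0)

def sync_by_moving_avatar(mover: Avatar, target: Avatar) -> None:
    """First aspect: move one avatar to coordinates matching or
    adjacent to the coordinates of the other avatar."""
    mover.coords = target.coords + ADJACENT_OFFSET

def sync_by_moving_view(viewer: Avatar, target: Avatar) -> None:
    """Second aspect: leave the viewer's avatar where it is, but
    determine the viewer's image as if it stood next to the target."""
    viewer.view_origin = target.coords + ADJACENT_OFFSET

if __name__ == "__main__":
    a1 = Avatar(coords=Vec3(0.0, 0.0, 3.0))  # e.g. up on the second floor
    a2 = Avatar(coords=Vec3(5.0, 2.0, 0.0))  # e.g. down on the first floor
    sync_by_moving_avatar(a2, a1)            # a2 now stands beside a1
    print(a2.coords)                         # Vec3(x=0.5, y=0.0, z=3.0)
```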
  • In the virtual space experience system of the present invention, it is preferable that: the avatar generation unit generates, in the virtual space, a ghost, which is an avatar that corresponds to one of the first user and the second user and is independent of the first avatar and the second avatar;
  • the avatar coordinate determination unit determines the coordinates of the ghost based on coordinates obtained by applying a predetermined deviation to the coordinates of the first avatar or the second avatar corresponding to that one of the first user and the second user;
  • the virtual space image determination unit includes, in the image of the virtual space to be recognized by the other of the first user and the second user, an image of the ghost to which information indicating that it corresponds to the one of the first user and the second user has been added;
  • and the first trigger event is a predetermined action performed on the ghost by the first avatar or the second avatar corresponding to the other of the first user and the second user.
  • Here, the "information indicating the correspondence" presented to the user includes, for example, direct information such as a message displayed at all times or at the user's request, as well as indirect information, such as adopting a translucent version of the corresponding avatar's shape as the shape of the ghost.
  • In this way, in addition to the avatar that corresponds to a user and serves as the reference for determining the image to be recognized by that user, a ghost, which is an avatar that corresponds to the user but is independent of the reference avatar (that is, an avatar that is not the reference), may be generated.
  • Here, information indicating which user the ghost corresponds to is added to the ghost. Therefore, other users can easily grasp which user the ghost corresponds to.
  • The "predetermined action" may be any action performed by the avatar with reference to the ghost. For example, it includes a case where the avatar touches the ghost, a case where the avatar moves within a predetermined range based on the ghost, and a case where the avatar operates an object in the virtual space with the ghost as the target (for example, an action of shooting the ghost with a camera-type object).
  • In the virtual space experience system of the present invention, it is preferable that: the trigger event recognition unit recognizes the occurrence of a second trigger event;
  • the avatar generation unit generates a first ghost, which is the ghost corresponding to the first user, in the virtual space when the occurrence of the second trigger event is recognized;
  • the avatar coordinate determination unit determines the coordinates of the first avatar based on coordinates to which the predetermined deviation from the coordinates of the first user has been applied;
  • the virtual space image determination unit includes, in the image of the virtual space to be recognized by the second user, an image of the first ghost to which information indicating that it corresponds to the first user has been added;
  • and the first trigger event is a predetermined action performed by the second avatar on the first ghost.
  • With this configuration, the coordinates of the first avatar are shifted from the coordinates of the first user by a predetermined amount. Therefore, even if the second user moves the second avatar corresponding to himself or herself so as not to contact the first avatar, there is a risk that the second user may come into contact, in the real space, with the first user corresponding to the first avatar.
  • In contrast, since the first trigger event here is a predetermined action performed on the first ghost, the second user can intuitively understand that the action causes the first trigger event that shares the environment with the first user. As a result, it is possible to prevent the second user's sense of immersion in the virtual space from being hindered by the occurrence of the first trigger event.
  • Alternatively, in the virtual space experience system of the present invention, it is preferable that: the trigger event recognition unit recognizes the occurrence of a second trigger event;
  • the avatar generation unit generates a second ghost, which is the ghost corresponding to the second user, in the virtual space when the occurrence of the second trigger event is recognized;
  • the avatar coordinate determination unit determines the coordinates of the first avatar based on coordinates to which the predetermined deviation from the coordinates of the first user has been applied;
  • the virtual space image determination unit includes, in the image of the virtual space to be recognized by the first user, an image of the second ghost to which information indicating that it corresponds to the second user has been added;
  • and the first trigger event is a predetermined action performed by the first avatar on the second ghost.
  • With this configuration, the coordinates of the first avatar are shifted from the coordinates of the first user by a predetermined amount. Therefore, even if the first user moves the first avatar corresponding to himself or herself so as not to contact the second avatar, there is a risk that the first user may come into contact, in the real space, with the second user corresponding to the second avatar.
  • In contrast, since the first trigger event here is a predetermined action performed on the second ghost, the first user can intuitively understand that the action causes the first trigger event that shares the environment with the second user. As a result, it is possible to prevent the first user's sense of immersion in the virtual space from being hindered by the occurrence of the first trigger event.
  • FIG. 1 is a schematic diagram showing the schematic configuration of a VR system according to a first embodiment.
  • FIG. 2 is a block diagram showing the configuration of the processing units of the VR system in FIG. 1.
  • FIG. 3 is a flowchart showing the processing executed by the VR system of FIG. 1 in normal use.
  • FIG. 4 is a schematic diagram showing the states of the real space and the virtual space in normal use of the VR system of FIG. 1.
  • FIG. 5 is a flowchart showing the processing executed when and after the VR system of FIG. 1 expands the virtual space.
  • FIG. 6 is a schematic diagram showing the states of the real space and the virtual space when the VR system of FIG. 1 expands the virtual space.
  • FIG. 7 is a schematic diagram showing the states of the real space and the virtual space after the VR system of FIG. 1 has expanded the virtual space.
  • FIG. 8 is a flowchart showing the processing in the first embodiment executed when and after the VR system of FIG. 1 synchronizes the environments.
  • FIG. 9 is a schematic diagram showing the states of the real space and the virtual space when the VR system of FIG. 1 synchronizes the environments.
  • FIG. 10 is a schematic diagram showing the states of the real space and the virtual space after the VR system of FIG. 1 has synchronized the environments.
  • FIG. 11 is a flowchart showing the processing executed when the VR system of the second embodiment synchronizes the environments.
  • FIG. 12 is a schematic diagram showing the states of the real space and the virtual space when the VR system of FIG. 11 synchronizes the environments.
  • FIG. 13 is a schematic diagram showing the states of the real space and the virtual space after the VR system of FIG. 11 has synchronized the environments.
  • FIG. 14 is a schematic diagram of the image of the virtual space as perceived by the second user before the VR system of FIG. 11 synchronizes the environments.
  • FIG. 15 is a schematic diagram of the image of the virtual space as perceived by the second user after the VR system of FIG. 11 has synchronized the environments.
  • A VR system S, which is a virtual space experience system according to the first embodiment, will be described below with reference to FIGS. 1 to 10.
  • In the VR system S, a first user U1 and a second user U2 (hereinafter collectively referred to as "users U") existing together in a predetermined area (for example, one room) of the real space RS are made to recognize, through a first avatar A1 corresponding to the first user U1 and a second avatar A2 corresponding to the second user U2, that they exist together in one virtual space VS corresponding to that area (see FIG. 4, etc.).
  • the number of users is assumed to be two in order to facilitate understanding.
  • the virtual space experience system of the present invention is not limited to such a configuration, and the number of users may be three or more.
  • As shown in FIG. 1, the VR system S includes a plurality of markers 1 attached to each user U existing in the real space RS, a camera 2 that captures the users U (strictly speaking, the markers 1 attached to the users U), a server 3 that determines the images and sounds of the virtual space VS (see FIG. 4, etc.), and head-mounted displays (hereinafter referred to as "HMDs 4") that allow the users to recognize the determined images and sounds.
  • The camera 2, the server 3, and the HMDs 4 can mutually transmit and receive information wirelessly, for example via the Internet, public lines, or short-range wireless communication. However, any of them may instead be configured to transmit and receive information to and from the others by wire.
  • The plurality of markers 1 are attached to each user U's head, both hands, and both feet via the HMD 4, gloves, and shoes worn by the user U. As will be described later, the plurality of markers 1 are used to recognize the coordinates and posture of the user U in the real space RS (and, by extension, the user's actions, such as movement of coordinates and changes of posture including orientation). Therefore, the mounting positions of the markers 1 may be changed as appropriate depending on the other devices constituting the VR system S.
  • The camera 2 is installed so that the operable range of the user U in the real space RS (that is, the range in which the user U can move, change posture, and so on) can be photographed from multiple directions.
  • The server 3 recognizes the markers 1 from the images captured by the camera 2, and recognizes the coordinates and posture of the user U based on the positions of the recognized markers 1 in the real space RS. The server 3 also determines the images and sounds to be recognized by the user U based on those coordinates and that posture.
  • the HMD 4 is worn on the user U's head.
  • The HMD 4 includes a monitor 41 (virtual space image display) for allowing the user U to recognize the images of the virtual space VS determined by the server 3, and a speaker 42 (virtual space sound generator) for allowing the user U to recognize the sounds of the virtual space VS determined by the server 3 (see FIG. 2).
  • When playing a game or the like using the VR system S, the user U recognizes only the images and sounds of the virtual space VS and is made to recognize that he or she exists in the virtual space. That is, the VR system S is configured as a so-called immersive system.
  • However, the virtual space experience system of the present invention is not limited to such an immersive system. For example, the configuration of the present invention may also be applied to a system that superimposes an image of the virtual space on an image of the real space and allows the user to recognize an augmented real space (a so-called AR system).
  • The VR system S is equipped with a so-called motion capture device, composed of the markers 1, the camera 2, and the server 3, as the system for recognizing the coordinates of the user U in the real space RS.
  • the virtual space experience system of the present invention is not limited to such a configuration.
  • For example, as the motion capture device, a configuration in which the numbers of markers and cameras differ from those described above (for example, one of each) may be used.
  • Alternatively, a device that recognizes only the user's coordinates may be used instead of the motion capture device.
  • For example, a sensor such as a GPS sensor may be installed in the HMD, and the user's coordinates, posture, and so on may be recognized based on the output of that sensor. Such a sensor may also be used in combination with the motion capture device described above.
  • The server 3 is composed of one or more electronic circuit units including a CPU, RAM, ROM, interface circuits, and the like. As shown in FIG. 2, the server 3 includes, as functions (processing units) implemented by its hardware configuration or installed programs, a display image generation unit 31, a user information recognition unit 32, a trigger event recognition unit 33, an avatar coordinate determination unit 34, a virtual space image determination unit 35, and a virtual space sound determination unit 36.
  • The display image generation unit 31 generates the images to be recognized by the user U through the monitor 41 of the HMD 4.
  • The display image generation unit 31 has a virtual space generation unit 31a, an avatar generation unit 31b, and a moving object generation unit 31c.
  • The virtual space generation unit 31a generates the image that serves as the background of the virtual space VS and the images of the objects existing in the virtual space VS.
  • The avatar generation unit 31b generates, in the virtual space VS, the first avatar A1 corresponding to the first user U1 and a first ghost G1, which is an avatar independent of the first avatar A1 (see FIG. 6, etc.). The avatar generation unit 31b also generates, corresponding to the second user U2, the second avatar A2 and a second ghost G2, which is an avatar independent of the second avatar A2 (see FIG. 6, etc.).
  • The first avatar A1 and the second avatar A2 (hereinafter collectively referred to as "avatars A"), and the first ghost G1 and the second ghost G2 (hereinafter collectively referred to as "ghosts G"), operate in the virtual space VS in response to the actions of the corresponding user U in the real space RS (that is, movement of coordinates and changes of posture).
  • The moving object generation unit 31c generates, in the virtual space VS, a moving object that has no corresponding object in the real space RS and that can be connected to an avatar in the virtual space VS.
  • Here, the "moving object" may be any object that leads the user U to predict (whether consciously or unconsciously), when the avatar connects to it, a movement of the avatar that differs from the user's own actual movement.
  • For example, objects used for movement in the real world, such as elevators, logs flowing down a river that can be jumped on, floors that are likely to collapse when stood on, jumping platforms, and wings that assist jumping, correspond to moving objects.
  • Characters, patterns, and the like drawn on the ground or wall surfaces of the virtual space also correspond to moving objects.
  • Here, "connection" between the avatar and the moving object means a state in which the user can predict that a movement of the moving object, a change in its shape, or the like will affect the coordinates of the avatar.
  • For example, an avatar getting into an elevator, an avatar riding a log flowing down a river, an avatar standing on a crumbling floor, an avatar standing on a jumping platform, and an avatar wearing wings that assist jumping all correspond to connection.
  • A case where an avatar touches or approaches characters, patterns, or the like drawn on the ground or wall surfaces of the virtual space also corresponds to connection.
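  • As a hedged illustration of "connection" (the axis-aligned bounds and all names are assumptions): an avatar standing within a moving object's bounds can be treated as connected, so that the object's displacement is also applied to the avatar's coordinates.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounds of a moving object such as the elevator VS1."""
    min_x: float
    max_x: float
    min_y: float
    max_y: float
    min_z: float
    max_z: float

    def contains(self, p):
        x, y, z = p
        return (self.min_x <= x <= self.max_x
                and self.min_y <= y <= self.max_y
                and self.min_z <= z <= self.max_z)

def apply_moving_object(avatar_pos, elevator, delta):
    """If the avatar is connected (inside the elevator), move it together
    with the elevator; otherwise leave its coordinates unchanged."""
    if elevator.contains(avatar_pos):
        return tuple(a + d for a, d in zip(avatar_pos, delta))
    return avatar_pos

elevator = Box(0, 2, 0, 2, 0, 3)
print(apply_moving_object((1, 1, 0), elevator, (0, 0, 3)))  # rides up: (1, 1, 3)
print(apply_moving_object((5, 5, 0), elevator, (0, 0, 3)))  # unaffected
```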
  • Image data of the users U, including the markers 1, captured by the camera 2 is input to the user information recognition unit 32.
  • The user information recognition unit 32 has a user posture recognition unit 32a and a user coordinate recognition unit 32b.
  • the user posture recognition unit 32a extracts the marker 1 from the input image data of the user U, and recognizes the posture of the user U based on the extraction result.
  • the user coordinate recognition unit 32b extracts the marker 1 from the input image data of the user U, and recognizes the coordinates of the user U based on the extraction result.
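  • As a rough sketch of how the user coordinate recognition unit 32b might derive coordinates from the extracted markers (the centroid rule and all names are illustrative assumptions; real motion capture is considerably more involved):

```python
def recognize_user_coords(marker_positions):
    """marker_positions: body part -> (x, y, z) extracted from the camera
    images. Here the user's coordinates are taken as the marker centroid."""
    pts = list(marker_positions.values())
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

markers = {
    "head":   (0.0, 0.0, 1.7),
    "l_hand": (-0.4, 0.2, 1.1),
    "r_hand": (0.4, 0.2, 1.1),
    "l_foot": (-0.2, 0.0, 0.0),
    "r_foot": (0.2, 0.0, 0.0),
}
print(recognize_user_coords(markers))  # approximately (0.0, 0.08, 0.78)
```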
  • The trigger event recognition unit 33 recognizes that a predetermined trigger event has occurred, or that the trigger event has been cancelled, when a condition predetermined by the system designer is satisfied.
  • The trigger event may be one that the user is not aware of. Accordingly, trigger events include, for example, events caused by user actions, such as the user performing a predetermined action in the real space (that is, the avatar corresponding to the user performing a predetermined action in the virtual space), as well as events not caused by the user's actions, such as the passage of a predetermined period of time.
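  • A minimal sketch of the trigger event recognition unit 33 as a table of designer-defined conditions (the condition functions and event names are invented for illustration): each condition is a predicate over the current state, and any condition that holds is reported as an occurred trigger event.

```python
# The state is a plain dict here, e.g. {"a1_on_elevator": True, "elapsed_s": 42.0}.
TRIGGER_CONDITIONS = {
    "expand_space": lambda s: s.get("a1_on_elevator") and s.get("a2_touched_switch"),
    "sync_environments": lambda s: s.get("a2_touched_first_ghost"),
    "timeout": lambda s: s.get("elapsed_s", 0.0) > 300.0,  # not caused by the user
}

def recognize_trigger_events(state):
    """Return the names of all trigger events whose condition holds."""
    return [name for name, cond in TRIGGER_CONDITIONS.items() if cond(state)]

print(recognize_trigger_events({"a2_touched_first_ghost": True}))
# ['sync_environments']
```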
  • the avatar coordinate determination unit 34 determines the coordinates of the avatar corresponding to the user U in the virtual space VS based on the coordinates of the user U in the physical space RS recognized by the user coordinate recognition unit 32b.
  • Specifically, when the trigger event recognition unit 33 recognizes the occurrence of a predetermined trigger event, the avatar coordinate determination unit 34 moves the coordinates of the avatar corresponding to the user independently of the coordinates of the user, during a predetermined period, within a predetermined range, or both.
  • The avatar coordinate determination unit 34 also determines the coordinates of the ghost G based on coordinates obtained by shifting the coordinates of the avatar A corresponding to the user U during a predetermined period, within a predetermined range, or both.
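  • Restating the two rules above as a sketch (the offsets and helper names are assumptions, not the patent's code): the avatar's coordinates follow the user's coordinates plus whatever deviation is currently active, and the ghost's coordinates are the avatar's coordinates shifted by a predetermined offset.

```python
def add(p, q):
    return tuple(a + b for a, b in zip(p, q))

def determine_avatar_coords(user_coords, active_deviation=(0.0, 0.0, 0.0)):
    """Avatar coordinates follow the user's coordinates, corrected by any
    deviation currently applied by a trigger event (e.g. one floor up)."""
    return add(user_coords, active_deviation)

def determine_ghost_coords(avatar_coords, ghost_offset):
    """Ghost coordinates are the avatar's coordinates shifted by a
    predetermined offset (e.g. undoing the avatar's deviation)."""
    return add(avatar_coords, ghost_offset)

FLOOR_UP = (0.0, 0.0, 3.0)  # assumed height of floor F2 above floor F1
user1 = (1.0, 2.0, 0.0)
avatar1 = determine_avatar_coords(user1, FLOOR_UP)          # riding the elevator
ghost1 = determine_ghost_coords(avatar1, (0.0, 0.0, -3.0))  # back at user height
print(avatar1, ghost1)  # (1.0, 2.0, 3.0) (1.0, 2.0, 0.0)
```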
  • the virtual space image determination unit 35 determines the image of the virtual space to be recognized by the user U corresponding to the avatar via the monitor 41 of the HMD 4.
  • the "image of the virtual space” includes images of other avatars, images of ghosts, images of objects that exist only in the virtual space, and images in the virtual space corresponding to the real space. Images of existing objects, etc. are included.
  • the image of the ghost G in this embodiment is added with information indicating that it corresponds to the user U corresponding to the ghost G.
  • the ⁇ information indicating that it is compatible'' to the user means, for example, direct information such as a message that is displayed at all times or at the request of the user, as well as a translucent shape of the corresponding avatar. It also includes indirect information indicated by adopting things as ghost shapes.
  • the virtual space audio determining unit 36 determines the audio to be recognized by the user U corresponding to the avatar via the speaker 42 of the HMD 4.
  • each processing unit that constitutes the virtual space experience system of the present invention is not limited to the configuration as described above.
  • part of the processing unit provided in the server 3 in the above embodiment may be provided in the HMD 4.
  • a plurality of servers may be used, or the servers may be omitted and the CPUs mounted on the HMDs may cooperate with each other.
  • a speaker other than the speaker mounted on the HMD may be provided.
  • In addition to devices that affect sight and hearing, devices that affect smell and touch, for example by generating smells or wind according to the virtual space, may also be included.
  • the display image generation unit 31 of the server 3 generates the virtual space VS, the first avatar A1 and the second avatar A2, and the moving object (FIG. 3/STEP 100).
  • Specifically, the virtual space generation unit 31a of the display image generation unit 31 generates the virtual space VS and the various objects existing in the virtual space VS. The avatar generation unit 31b generates the first avatar A1 corresponding to the first user U1 and the second avatar A2 corresponding to the second user U2. The moving object generation unit 31c generates moving objects such as the elevator VS1, which will be described later.
  • In the generated virtual space VS, the first avatar A1, the second avatar A2, the elevator VS1, which is a moving object, and objects related to trigger events, such as the switch VS2 generated at a position corresponding to a whiteboard RS1 in the real space RS, are placed.
  • Next, the avatar coordinate determination unit 34 of the server 3 determines the coordinates and posture of the first avatar A1 in the virtual space VS based on the coordinates and posture of the first user U1 in the real space RS, and determines the coordinates and posture of the second avatar A2 in the virtual space VS based on the coordinates and posture of the second user U2 in the real space RS (FIG. 3/STEP 101).
  • Here, the coordinates and postures of the first user U1 and the second user U2 used in the processing from STEP 101 onward are those recognized by the user information recognition unit 32 of the server 3 based on the image data captured by the camera 2.
  • Next, the virtual space image determination unit 35 and the virtual space sound determination unit 36 of the server 3 determine the image and sound to be recognized by the first user U1 based on the coordinates and posture of the first avatar A1 in the virtual space VS, and determine the image and sound to be recognized by the second user U2 based on the coordinates and posture of the second avatar A2 in the virtual space VS (FIG. 3/STEP 102).
  • Next, the HMD 4 worn by each user U displays the determined image on the monitor 41 mounted on the HMD 4 and generates the determined sound from the speaker 42 mounted on the HMD 4 (FIG. 3/STEP 103).
  • Next, the user information recognition unit 32 of the server 3 determines whether or not a movement of the coordinates or a change of the posture of the first user U1 or the second user U2 in the real space RS has been recognized (FIG. 3/STEP 104).
  • When no such movement or change is recognized, the server 3 determines whether or not a signal instructing the end of processing has been recognized (FIG. 3/STEP 105).
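  • Putting STEPs 100 to 105 together, the normal-use processing reduces to the loop sketched below; the stub functions are placeholders for the processing units described above, not the patent's actual code.

```python
# Stub units standing in for the server's processing units (illustrative only).
def generate_space():          print("STEP 100: generate space, avatars, objects")
def determine_avatar_coords(): print("STEP 101: users -> avatar coords/postures")
def determine_images_sounds(): print("STEP 102: per-avatar image and sound")
def present_on_hmds():         print("STEP 103: display image, play sound")
def motion_recognized():       return False  # pretend the users stopped moving
def end_signal_recognized():   return True   # stop after one pass in this demo

def run_normal_use():
    """Loop skeleton mirroring FIG. 3 / STEPs 100-105."""
    generate_space()                  # STEP 100 (once)
    while True:
        determine_avatar_coords()     # STEP 101
        determine_images_sounds()     # STEP 102
        present_on_hmds()             # STEP 103
        if motion_recognized():       # STEP 104: new motion -> back to STEP 101
            continue
        if end_signal_recognized():   # STEP 105: end when instructed
            break

run_normal_use()
```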
  • Through the above processing, a plurality of objects are placed in the virtual space VS, including the first avatar A1 corresponding to the first user U1, the second avatar A2 corresponding to the second user U2, the elevator VS1, which is a moving object, and the switch VS2, which relates to a trigger event.
  • Then, through the images displayed and the sounds generated by the HMDs 4 they wear, the first user U1 and the second user U2 come to recognize that they themselves exist in the virtual space VS via the corresponding first avatar A1 and second avatar A2 and can move freely there.
  • In predetermined situations, the VR system S is configured to perform a process of causing a deviation in the correspondence relationship between the coordinates of the first user U1 and the coordinates of the first avatar A1, or between the coordinates of the second user U2 and the coordinates of the second avatar A2.
  • In this embodiment, when a predetermined trigger event occurs, the coordinates of the first avatar A1 corresponding to the first user U1 move upward regardless of the movement of the coordinates of the first user U1. Specifically, the first avatar A1 moves by the elevator VS1 from the first floor F1 defined in the virtual space VS to the second floor F2.
  • Thereby, the first user U1, whose corresponding first avatar A1 has been moved, and the second user U2, who recognizes the first avatar A1, can recognize a virtual space VS that is extended in the vertical direction relative to the real space RS.
  • However, when such an expansion process is performed, the first user U1 and the second user U2 may no longer be able to properly grasp their mutual positional relationship in the real space RS. As a result, the first user U1 and the second user U2 may unintentionally come into contact with each other.
  • For example, the second user U2 may mistakenly think that, like the first avatar A1, the first user U1 has also moved upward in the real space RS. Then, as shown in FIG. 7, the second user U2 may misunderstand that the second avatar A2 corresponding to himself or herself can move into the area below the elevator VS1 in the virtual space VS, and may try to move so as to bring the second avatar A2 into that area.
  • However, the first user U1 corresponding to the first avatar A1 actually exists at the same height as the second user U2. Therefore, in that case, the first user U1 and the second user U2 come into contact in the real space RS. As a result, the sense of immersion of the first user U1 and the second user U2 in the virtual space VS may be hindered.
  • Therefore, when the VR system S executes the process of expanding the virtual space VS so as to be wider than the corresponding real space RS, it also executes a process for contact avoidance, as described below, to prevent unintended contact between the first user U1 and the second user U2.
  • In this process, first, the trigger event recognition unit 33 of the server 3 determines whether or not the occurrence of a trigger event has been recognized (FIG. 5/STEP 200).
  • Specifically, the trigger event recognition unit 33 determines whether or not the coordinates of the first user U1 have moved in the real space RS in such a way that, in the virtual space VS, the first avatar A1 corresponding to the first user U1 gets on the elevator VS1, which is a moving object.
  • The trigger event recognition unit 33 further determines whether or not the second user U2 has assumed a predetermined posture such that, in the virtual space VS, the second avatar A2 corresponding to the second user U2 touches the switch VS2 from a position near the switch VS2 (see FIG. 6). When these conditions are satisfied, the trigger event recognition unit 33 recognizes that a trigger event has occurred.
  • When the trigger event is recognized, the avatar coordinate determination unit 34 of the server 3 moves the coordinates of the first avatar A1 (FIG. 5/STEP 201).
  • Specifically, the avatar coordinate determination unit 34 moves the coordinates of the first avatar A1, which are based on the coordinates of the first user U1, according to the content of the deviation (that is, the correction direction and correction amount) predetermined for the type of the recognized trigger event. This movement is performed independently of the movement of the coordinates of the first avatar A1 based on the movement of the coordinates of the first user U1. As a result, a deviation arises in the correspondence relationship between the coordinates of the first user U1 and the coordinates of the first avatar A1.
  • In this embodiment, the first avatar A1 moves upward, integrally with the elevator VS1, from the first floor F1 to the second floor F2, so that the state shown in FIG. 4 changes to the state shown in FIG. 6.
  • the display image generator 31 of the server 3 generates the first ghost G1 and the second ghost G2 (FIG. 5/STEP 202).
  • Specifically, triggered by the occurrence of the trigger event, the avatar generation unit 31b of the display image generation unit 31 generates, in the virtual space VS, the first ghost G1 corresponding to the first user U1 and the second ghost G2 corresponding to the second user U2.
  • The first ghost G1 is configured as a translucent avatar having the same shape as the first avatar A1 in order to indicate that it corresponds to the first user U1. In addition, a first information board G1a, which will be described later, is added to the first ghost G1.
  • Likewise, the second ghost G2 is configured as a translucent avatar having the same shape as the second avatar A2 in order to indicate that it corresponds to the second user U2, and a second information board G2a is added to it.
  • The coordinates of the generated first ghost G1 are set to match the coordinates of the first avatar A1 immediately before the process of STEP 201 was executed. The coordinates of the generated second ghost G2 are set so as to be independent of the coordinates of the second user U2.
  • Next, the avatar coordinate determination unit 34 of the server 3 determines the coordinates and postures, in the virtual space VS, of the first avatar A1 and the first ghost G1 based on the coordinates and posture of the first user U1 in the real space RS recognized by the user information recognition unit 32 (FIG. 5/STEP 203).
  • Specifically, the avatar coordinate determination unit 34 determines the coordinates and posture of the first avatar A1 after the occurrence of the trigger event is recognized by correcting the coordinates and posture of the first avatar A1, as determined by the processing used up to the time the occurrence of the trigger event was recognized (that is, the processing before STEP 201 was executed), according to the content of the deviation (that is, the correction direction and correction amount) predetermined for the type of the recognized trigger event.
  • In this embodiment, the avatar coordinate determination unit 34 determines, as the coordinates of the first avatar A1 after the occurrence of the trigger event is recognized, the coordinates obtained by moving the coordinates based on the coordinates of the first user U1 upward by the height of the second floor F2 relative to the first floor F1. The avatar coordinate determination unit 34 determines the posture of the first avatar A1 based on the posture of the first user U1, as before, as the posture of the first avatar A1 after the occurrence of the trigger event is recognized.
  • On the other hand, the avatar coordinate determination unit 34 determines the coordinates and posture of the first ghost G1 based on the coordinates and posture of the first user U1, using the same processing as was used to determine the coordinates and posture of the first avatar A1 before the occurrence of the trigger event was recognized.
  • Next, the virtual space image determination unit 35 and the virtual space sound determination unit 36 of the server 3 determine the image and sound to be recognized by the first user U1 based on the coordinates and posture of the first avatar A1 in the virtual space VS (FIG. 5/STEP 204).
  • Next, the avatar coordinate determination unit 34 of the server 3 determines the coordinates and postures, in the virtual space VS, of the second avatar A2 and the second ghost G2 based on the coordinates and posture of the second user U2 in the real space RS recognized by the user information recognition unit 32 (FIG. 5/STEP 205).
  • Specifically, the avatar coordinate determination unit 34 determines the coordinates and posture of the second avatar A2 based on the coordinates and posture of the second user U2, using the same processing as before.
  • On the other hand, the avatar coordinate determination unit 34 determines the coordinates and posture of the second ghost G2 by correcting the coordinates and posture of the second avatar A2, as determined by the processing used up to the time the occurrence of the trigger event was recognized, according to the content of the deviation (that is, the correction direction and correction amount) predetermined for the type of the recognized trigger event.
  • In this embodiment, the avatar coordinate determination unit 34 determines, as the coordinates of the second ghost G2 after the occurrence of the trigger event is recognized, the coordinates obtained by moving the coordinates of the second avatar A2, which are based on the coordinates of the second user U2, upward by the height of the second floor F2 relative to the first floor F1.
  • The avatar coordinate determination unit 34 determines the posture of the second avatar A2 based on the posture of the second user U2, as it is, as the posture of the second ghost G2 after the occurrence of the trigger event is recognized.
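  • In code form, the coordinate rules of STEPs 203 and 205 reduce to the sketch below (the floor height and all helper names are assumptions for illustration): the first avatar is shifted one floor up while its ghost stays at the first user's height, and the second avatar stays unshifted while its ghost is projected one floor up.

```python
def add(p, q):
    return tuple(a + b for a, b in zip(p, q))

FLOOR_UP = (0.0, 0.0, 3.0)  # assumed offset from floor F1 up to floor F2

def expanded_coords(user1, user2):
    """Coordinate rules after the expansion trigger event is recognized."""
    avatar1 = add(user1, FLOOR_UP)  # STEP 203: first avatar, shifted up
    ghost1 = user1                  # STEP 203: first ghost, unshifted
    avatar2 = user2                 # STEP 205: second avatar, unshifted
    ghost2 = add(user2, FLOOR_UP)   # STEP 205: second ghost, shifted up
    return avatar1, ghost1, avatar2, ghost2

print(expanded_coords((1.0, 1.0, 0.0), (4.0, 2.0, 0.0)))
```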
  • the virtual space image determining unit 35 and the virtual space audio determining unit 36 of the server 3 determine the image and audio to be recognized by the second user U2 based on the coordinates and posture of the second avatar A2 in the virtual space VS. (FIG. 5/STEP 206).
  • the HMD 4 worn by the user U displays the determined image on the monitor 41 and emits sound from the speaker 42 (FIG. 5/STEP 207).
  • Next, the user information recognition unit 32 of the server 3 determines whether or not a movement of the coordinates or a change of the posture of the first user U1 or the second user U2 in the real space RS has been recognized (FIG. 5/STEP 208).
  • When no such movement or change is recognized, the server 3 determines whether or not a signal instructing the end of processing has been recognized (FIG. 5/STEP 209).
  • the image and sound of the virtual space VS recognized by the first user U1 include the image and sound of the second ghost G2.
  • the image and sound of the virtual space VS recognized by the second user U2 include the image and sound of the first ghost G1.
  • As a result, when performing some action, the first user U1 naturally operates while avoiding contact between the first avatar A1, which corresponds to his or her own corrected coordinates, and the second ghost G2, which corresponds to the corrected coordinates of the second user U2.
  • Similarly, the second user U2 naturally acts so as to avoid contact between the second avatar A2, which corresponds to his or her own coordinates, and the first ghost G1, which corresponds to the coordinates of the first user U1.
  • In this embodiment, the first ghost G1 is configured as a translucent avatar having the same shape as the first avatar A1, and the second ghost G2 is configured as a translucent avatar having the same shape as the second avatar A2. In addition, the first information board G1a is added to the first ghost G1, and the second information board G2a is added to the second ghost G2.
  • However, in the virtual space experience system of the present invention, the method of displaying information indicating which user an avatar corresponds to, and the method of displaying information related to ghosts, are not limited to such configurations and may be set as appropriate.
  • For example, the information boards may be omitted, with the shapes of the first ghost G1 and the second ghost G2 simply matching the shapes of the first avatar A1 and the second avatar A2. Conversely, only the information boards may be displayed, without displaying shapes corresponding to the shapes of the first avatar A1 and the second avatar A2.
  • When the VR system S recognizes the occurrence of a predetermined trigger event (the first trigger event in the present invention), it performs a process of synchronizing the environment of one of the first user U1 and the second user U2 with the environment of the other.
  • Here, the "environment" refers to at least one of the image and the sound recognized by the user corresponding to an avatar located at given coordinates of the virtual space.
  • "Synchronizing the environments" means that a plurality of users corresponding to a plurality of avatars come to recognize similar environments.
  • the second avatar A2 touching the first ghost G1 corresponding to the first user U1 is set as the trigger event.
  • When the occurrence of this trigger event is recognized, the coordinates of the second avatar A2 corresponding to the second user U2 are moved to coordinates adjacent to the coordinates of the first avatar A1.
  • Specifically, the second avatar A2 moves from the first floor F1 to coordinates adjacent to the first avatar A1 on the second floor F2.
  • Thereby, the environment of the first user U1 and the environment of the second user U2 are synchronized, and the first user U1 and the second user U2 can experience the same environment.
  • In the following, it is assumed that the first user U1 corresponding to the first avatar A1 wishes to view, together with the second user U2, a rainbow VS3 that appears above the second floor F2 of the virtual space VS, and has notified the second user U2 of this via a message function or the like. The processing when the second user U2 then tries to synchronize his or her environment with that of the first user U1 (that is, tries to look at the rainbow VS3 himself or herself) will be described.
  • the trigger event recognition unit 33 of the server 3 determines whether or not the occurrence of the trigger event has been recognized (FIG. 8/STEP 300).
  • Specifically, the trigger event recognition unit 33 determines whether or not, in the virtual space VS, the second avatar A2 corresponding to the second user U2 is in a posture of touching the first ghost G1 corresponding to the first user U1. When the second avatar A2 assumes such a posture, the trigger event recognition unit 33 recognizes that a trigger event has occurred.
  • When the trigger event is recognized, the avatar coordinate determination unit 34 of the server 3 moves the coordinates of the second avatar A2 to coordinates adjacent to the coordinates of the first avatar A1 (FIG. 8/STEP 301).
  • Specifically, the avatar coordinate determination unit 34 refers not only to the coordinates of the first avatar A1 but also to its posture, recognizes coordinates located to the side, front, or rear of the first avatar A1 (that is, adjacent coordinates), and moves the second avatar A2 to those coordinates.
  • This movement is performed independently of the movement of the coordinates of the second avatar A2 based on the movement of the coordinates of the second user U2. Note that this causes a deviation in the correspondence relationship between the coordinates of the second user U2 and the coordinates of the second avatar A2.
  • In this embodiment, the second avatar A2 located on the first floor F1 moves momentarily so as to be located on the second floor F2, next to the first avatar A1 on its left, as shown in FIG. 10.
  • However, the manner of the movement (for example, the time it takes, the route taken during the movement, and so on) may be set as appropriate.
  • For example, the movement speed of the second avatar A2 may be slowed to some extent so that its movement route can be recognized. With such a configuration, when a third user is present, that third user can recognize the movement of the coordinates that takes place when the occurrence of the trigger event is recognized.
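  • The following is a hedged sketch of the STEP 301 movement (the interpolation scheme, step count, and adjacency offset are assumptions): the second avatar is carried along a visible path to coordinates adjacent to the first avatar rather than teleported, so that a third user could follow the movement.

```python
def lerp(p, q, t):
    """Linear interpolation between points p and q, with t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(p, q))

def sync_move_path(start, target_avatar, adjacent=(-0.5, 0.0, 0.0), steps=5):
    """Yield intermediate coordinates from `start` to a point adjacent to
    (here: to the left of) the first avatar, slow enough to be seen."""
    goal = tuple(a + b for a, b in zip(target_avatar, adjacent))
    for i in range(1, steps + 1):
        yield lerp(start, goal, i / steps)

# The second avatar on floor F1 moves next to the first avatar on floor F2.
for p in sync_move_path((4.0, 2.0, 0.0), (1.0, 1.0, 3.0)):
    print(p)
```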
  • the display image generator 31 of the server 3 erases the first ghost G1 and the second ghost G2 (FIG. 8/STEP 302).
  • This is because the movement of the second avatar A2 in STEP 301 eliminates the deviation, caused by the process of expanding the virtual space, in the correspondence relationship between the coordinates of the first avatar A1 and the coordinates of the second avatar A2. That is, after the movement, the correspondence relationship between the coordinates of the first avatar A1 and the coordinates of the second avatar A2 matches the correspondence relationship before the deviation occurred (that is, the correspondence relationship between the coordinates of the first user U1 and the coordinates of the second user U2).
  • Note that if, after this movement, the correspondence relationship between the coordinates of the first avatar A1 and the coordinates of the second avatar A2 does not match the correspondence relationship between the coordinates of the first user U1 and the coordinates of the second user U2, at least one of the first ghost G1 and the second ghost G2 should be kept in existence as required.
  • Next, the avatar coordinate determination unit 34 of the server 3 determines the coordinates and postures, in the virtual space VS, of the first avatar A1, the first ghost G1, the second avatar A2, and the second ghost G2 based on the coordinates and postures of the first user U1 and the second user U2 in the real space RS recognized by the user information recognition unit 32 (FIG. 8/STEP 303).
  • Specifically, the avatar coordinate determination unit 34 determines the coordinates and posture of the first avatar A1 by continuing to correct the coordinates and posture determined based on the coordinates and posture of the first user according to the content of the deviation (that is, the correction direction and correction amount) that arose in the process of expanding the virtual space VS.
  • The avatar coordinate determination unit 34 determines the coordinates and posture of the second avatar A2 by correcting the coordinates and posture determined based on the coordinates and posture of the second user according to the content of the deviation (that is, the correction direction and correction amount) predetermined for the type of the recognized trigger event.
  • In this embodiment, the avatar coordinate determination unit 34 determines, as the coordinates of the second avatar A2 after the occurrence of the trigger event is recognized, the coordinates obtained by moving the coordinates based on the coordinates of the second user U2 upward by the height of the second floor F2 relative to the first floor F1 and then horizontally, from the coordinates of the second avatar A2 at the time the occurrence of the trigger event was recognized, to the coordinates to the left of the first avatar A1.
  • The avatar coordinate determination unit 34 determines the posture of the second avatar A2 based on the posture of the second user U2, as before, as the posture of the second avatar A2 after the occurrence of the trigger event is recognized.
  • Next, the virtual space image determination unit 35 and the virtual space sound determination unit 36 of the server 3 determine the image and sound to be recognized by the first user U1 based on the coordinates and posture of the first avatar A1 in the virtual space VS, and determine the image and sound to be recognized by the second user U2 based on the coordinates and posture of the second avatar A2 in the virtual space VS (FIG. 8/STEP 304).
  • the HMD 4 worn by the user U displays the determined image on the monitor 41 and emits sound from the speaker 42 (FIG. 8/STEP 305).
  • Next, the user information recognition unit 32 of the server 3 determines whether or not a movement of the coordinates or a change of the posture of the first user U1 or the second user U2 in the real space RS has been recognized (FIG. 8/STEP 306).
  • When no such movement or change is recognized, the server 3 determines whether or not a signal instructing the end of processing has been recognized (FIG. 8/STEP 307).
  • As described above, in the VR system S, when the occurrence of the trigger event is recognized, the avatar coordinate determination unit 34 moves the coordinates of the second avatar A2 to coordinates adjacent to the coordinates of the first avatar A1.
  • Here, the images and sounds of the virtual space VS recognized by the first user U1 and the second user U2 are determined based on the coordinates of the first avatar A1 and the second avatar A2, respectively. Therefore, when the coordinates of the avatars match or are adjacent to each other, the images and sounds of the virtual space VS recognized by the users corresponding to those avatars are also matching or adjacent (that is, similar) images and sounds.
  • As a result, the images and sounds of the virtual space recognized by each user can also be made matching or adjacent.
  • Therefore, the first user U1 and the second user U2, who until then experienced different environments in the virtual space, can now experience substantially the same environment.
  • In this embodiment, the environment of the second user U2 is synchronized with the environment of the first user U1. However, the virtual space experience system of the present invention is not limited to such a configuration; any configuration in which the environment of one user is synchronized with the environment of the other may be used.
  • Also, in this embodiment, when the trigger event occurs, the coordinates of the second avatar A2 are moved to coordinates adjacent to the coordinates of the first avatar A1. However, instead of the second avatar A2 moving itself to coordinates matching or adjacent to the coordinates of the first avatar A1, the first avatar A1 may call the coordinates of the second avatar A2 to coordinates that match or are adjacent to its own coordinates.
  • For example, when the second avatar A2 touches the first ghost G1 corresponding to the first user U1, the first avatar A1 may itself move to coordinates matching or adjacent to the coordinates of the second avatar A2, or the first avatar A1 may call the coordinates of the second avatar A2 to coordinates matching or adjacent to the coordinates of the first avatar A1.
  • In a configuration in which both a process of moving one's own coordinates to coordinates matching or adjacent to the coordinates of another user and a process of calling the other user's coordinates to coordinates matching or adjacent to one's own are performed, the action serving as the trigger event may be made different depending on the process to be performed, so that each process is invoked by its own predetermined action.
  • In this embodiment, the process moves the coordinates of the second avatar A2 not to coordinates that match the coordinates of the first avatar A1 but to coordinates adjacent to them.
  • This is so that, when the coordinates of the second avatar A2 move, the avatar corresponding to the other user is easily included in the image recognized by the one user, making it easier for the one user to recognize that the other user has been brought close to himself or herself.
  • However, the present invention is not limited to such a configuration, and the coordinates to which one avatar is moved may be made to match the coordinates of the other avatar. With this configuration, the degree of environment synchronization can be further enhanced compared to moving to adjacent coordinates.
  • Further, in this embodiment, the second avatar A2 continues to be located on the second floor F2 after the movement. That is, the correspondence relationship between the coordinates of the second user U2 and the coordinates of the second avatar A2 remains deviated, based on the deviation caused by the movement performed when the occurrence of the trigger event was recognized.
  • However, the present invention is not limited to such a configuration, and may be configured to subsequently eliminate the deviation caused by that movement.
  • For example, the system may be configured to move the avatar's coordinates for a predetermined period and, after that period elapses, return the avatar's coordinates to the coordinates from before the occurrence of the trigger event was recognized.
  • Next, a VR system S, which is a virtual space experience system according to the second embodiment, will be described below with reference to FIGS. 2 and 11 to 15.
  • The VR system S of this embodiment has the same configuration as the VR system S of the first embodiment, except for the processing executed when synchronizing the environments and afterwards.
  • When the VR system S recognizes the occurrence of a predetermined trigger event (the first trigger event in the present invention), it performs a process of synchronizing the environment of one of the first user U1 and the second user U2 with the environment of the other.
  • Also in this embodiment, the second avatar A2 touching the first ghost G1 corresponding to the first user U1 is set as the trigger event.
  • When the occurrence of this trigger event is recognized, the environment recognized by the second user U2 is made to match the environment recognized by the first user U1, regardless of the coordinates of the second user U2. Specifically, the environment recognized by the second user U2 changes from one based on the coordinates of the second avatar A2 (see FIG. 14) to one based on the coordinates of the first avatar A1 (see FIG. 15).
  • Thereby, the environment of the first user U1 and the environment of the second user U2 are synchronized, and the first user U1 and the second user U2 can experience the same environment.
  • In the following, as in the first embodiment, it is assumed that the first user U1 corresponding to the first avatar A1 wishes to view, together with the second user U2, the rainbow VS3 that appears above the second floor F2 of the virtual space VS, and has notified the second user U2 of this via a message function or the like. The processing when the second user U2 then tries to synchronize his or her environment with that of the first user U1 (that is, tries to look at the rainbow VS3 himself or herself) will be described.
  • It is also assumed that, at this time, the second user U2 recognizes an image in which the first avatar A1 on the second floor F2 of the virtual space VS is looking up.
  • the trigger event recognition unit 33 of the server 3 determines whether or not the occurrence of the trigger event has been recognized (FIG. 11/STEP 400).
  • the trigger event recognizing unit 33 recognizes that the second avatar A2 corresponding to the second user U2 is the first avatar A1 corresponding to the first user U1 in the virtual space VS. It is determined whether or not it is in a posture to touch the . Then, when the second avatar A2 assumes a posture of touching the first avatar A1, the trigger event recognition unit 33 recognizes that a trigger event has occurred.
  • When the occurrence of the trigger event is recognized, the avatar coordinate determination unit 34 of the server 3 fixes the coordinates of the second avatar A2 and the second ghost G2 (FIG. 11/STEP 401).
  • This is because, in the subsequent processing, the image and sound recognized by the second user U2 corresponding to the second avatar A2 and the second ghost G2 are determined based on the coordinates of the first avatar A1.
  • If the second avatar A2 could move while this processing is being executed, the coordinates of the second avatar A2 would have moved unintentionally by the time the processing for synchronizing the environments ends. As a result, after the processing ends, the sense of immersion of the second user U2, who once again recognizes images and sounds based on the coordinates of the second avatar A2, could be disturbed.
  • Likewise, the coordinates of the second ghost G2 could move unintentionally by the time the processing for synchronizing the environments ends. As a result, after the processing ends, the position of the second ghost G2 would change suddenly in the image recognized by the first user U1, which could hinder the first user U1's sense of immersion. The sketch below illustrates such a fixation.
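As a rough illustration, assuming a simple per-frame tracking loop (the `frozen` flag and class are our assumptions):

```python
class TrackedBody:
    """Stands in for the second avatar A2 or the second ghost G2."""
    def __init__(self, coords):
        self.coords = coords
        self.frozen = False  # set True at STEP 401, False again at STEP 406

    def apply_user_motion(self, user_coords):
        """Called each frame with the coordinates recognized from the user."""
        if not self.frozen:
            self.coords = user_coords  # normal tracking
        # while frozen, the user's real-world movement is deliberately ignored
```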
  • At this time, the posture of the second avatar A2 may also be made to match the posture of the first avatar A1. With this configuration, the image and sound recognized by the second user U2 agree more closely with the image and sound recognized by the first user U1, so the second user U2 can experience an environment synchronized to a greater degree.
  • Next, the avatar coordinate determination unit 34 of the server 3 determines the coordinates and postures of the first avatar A1 and the first ghost G1 in the virtual space VS based on the coordinates and posture of the first user U1 in the physical space RS recognized by the user information recognition unit 32 (FIG. 11/STEP 402).
  • Next, the virtual space image determination unit 35 and the virtual space audio determination unit 36 of the server 3 determine the image and audio to be recognized by the first user U1 and the second user U2 based on the coordinates and posture of the first avatar A1 in the virtual space VS (FIG. 11/STEP 403).
  • Specifically, the virtual space image determination unit 35 and the virtual space audio determination unit 36 determine the image and audio to be recognized by the first user U1 based on the coordinates and posture of the first avatar A1, and determine the image and audio to be recognized by the second user U2 based on the coordinates of the first avatar A1 and the posture of the second avatar A2. A sketch of this reference selection is given below.
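As a rough illustration, the reference selection in STEP 403 could look like the following; the data model is assumed, and `synchronized` stands for the state between STEP 400 and STEP 405:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    coords: tuple   # (x, y, z) position in the virtual space VS
    posture: tuple  # orientation, e.g. (yaw, pitch, roll)

def view_reference(user_id, a1: Pose, a2: Pose, synchronized: bool):
    """Return the (coords, posture) pair used to render a user's view."""
    if user_id == "U1":
        return a1.coords, a1.posture  # U1 always follows the first avatar A1
    if synchronized:
        return a1.coords, a2.posture  # U2: A1's position, U2's own posture
    return a2.coords, a2.posture      # normal (unsynchronized) case
```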
  • The image and sound for the first user U1 determined in this manner are based on the coordinates and posture of the first avatar A1: a view of the rainbow VS3 in the sky above, seen from the second floor F2 of the virtual space VS.
  • The image and sound for the second user U2 determined in this manner are those recognized when positioned at coordinates behind the coordinates of the first avatar A1, as viewed from the first avatar A1. Specifically, as shown in FIG. 15, they show the rainbow VS3 in the sky viewed from behind the first avatar A1 on the second floor F2 of the virtual space VS.
  • In that image, the first avatar A1 (and thus the first user U1) can be recognized during this processing.
  • This is because the coordinates serving as the reference for the image and sound recognized by the second user U2 are the coordinates behind the coordinates of the first avatar A1 (that is, coordinates adjacent to them).
  • However, if the second user U2 wants to experience an environment even closer to the one experienced by the first user U1, it is preferable to match the reference coordinates with the coordinates of the first avatar A1. Furthermore, when the reference posture is also matched with that of the first avatar A1, the second user U2 can experience the very environment that the first user U1 is experiencing.
  • Next, the HMD 4 worn by each user U displays the determined image on the monitor 41 mounted on the HMD 4 and emits the determined sound from the speaker 42 mounted on the HMD 4 (FIG. 11/STEP 404).
  • Next, the trigger event recognition unit 33 of the server 3 determines whether or not it has recognized the release of the trigger event (FIG. 11/STEP 405).
  • Specifically, the trigger event recognition unit 33 determines whether or not the posture in which the second avatar A2 corresponding to the second user U2 touches the first avatar A1 corresponding to the first user U1 in the virtual space VS (see FIG. 12) has been released. When the second avatar A2 releases the posture of touching the first avatar A1, the trigger event recognition unit 33 recognizes that the trigger event has been released.
  • The release of the trigger event may instead be configured to be recognized automatically, for example when a predetermined period of time has passed since the occurrence of the trigger event, or to be recognized when another trigger event occurs (for example, when the second avatar A2 is no longer touching the first ghost G1). A sketch of such release conditions is given below.
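As a rough illustration of these alternatives, with the timeout value being an assumption:

```python
import time

RELEASE_TIMEOUT_S = 60.0  # assumed automatic-release period

def trigger_event_released(still_touching, started_at, now=None):
    """STEP 405 sketch: released when the touching posture ends, or
    automatically once a predetermined period has elapsed."""
    now = time.monotonic() if now is None else now
    if not still_touching:                        # posture released (see FIG. 12)
        return True
    return now - started_at >= RELEASE_TIMEOUT_S  # optional automatic release
```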
  • When the release of the trigger event is recognized, the avatar coordinate determination unit 34 of the server 3 releases the fixation of the coordinates of the second avatar A2 and the second ghost G2 (FIG. 11/STEP 406).
  • Next, the avatar coordinate determination unit 34 of the server 3 determines the coordinates and postures of the first avatar A1 and the first ghost G1 in the virtual space VS based on the coordinates and posture of the first user U1 in the physical space RS recognized by the user information recognition unit 32 (FIG. 11/STEP 407).
  • Next, the virtual space image determination unit 35 and the virtual space audio determination unit 36 of the server 3 determine the image and audio to be recognized by the first user U1 based on the coordinates and posture of the first avatar A1 in the virtual space VS (FIG. 11/STEP 408).
  • Next, the avatar coordinate determination unit 34 of the server 3 determines the coordinates and postures of the second avatar A2 and the second ghost G2 in the virtual space VS based on the coordinates and posture of the second user U2 in the physical space RS recognized by the user information recognition unit 32 (FIG. 11/STEP 409).
  • Next, the virtual space image determination unit 35 and the virtual space audio determination unit 36 of the server 3 determine the image and audio to be recognized by the second user U2 based on the coordinates and posture of the second avatar A2 in the virtual space VS (FIG. 11/STEP 410).
  • Next, the HMD 4 worn by each user U displays the determined image on the monitor 41 and emits the determined sound from the speaker 42 (FIG. 11/STEP 411). A sketch of this restored per-user processing is given below.
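As a rough illustration of STEPs 406 to 411 after the release, where `render_for` is a stub standing in for the image/audio determination units 35 and 36 and the container names are assumptions:

```python
def render_for(coords, posture):
    """Stub standing in for the image/audio determination units 35 and 36."""
    return {"view_from": coords, "facing": posture}, b"audio"

def restore_normal_processing(a2, g2, avatars, hmds):
    a2.frozen = g2.frozen = False               # STEP 406: release the fixation
    for user_id, avatar in avatars.items():     # STEPs 407/409: per-user tracking
        image, audio = render_for(avatar.coords, avatar.posture)  # STEPs 408/410
        hmds[user_id].show(image, audio)        # STEP 411: monitor 41 / speaker 42
```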
  • the server 3 determines whether or not it has recognized a signal instructing the end of processing (FIG. 11/STEP 412).
  • As described above, in the VR system S, when the occurrence of the trigger event is recognized, the virtual space image determination unit 35 and the virtual space audio determination unit 36 make the image and sound of the virtual space VS to be recognized by the second user U2 the image and sound that would be obtained if the coordinates of the second avatar A2 moved to coordinates adjacent to the coordinates of the first avatar A1.
  • That is, the coordinates serving as the reference for determining the image and sound of the virtual space VS to be recognized by the second user U2 are set, regardless of the position of the coordinates of the second avatar A2 corresponding to the second user U2, to coordinates adjacent to the coordinates of the first avatar A1 corresponding to the first user U1.
  • As a result, the second user U2, together with the first user U1, recognizes images and sounds of the virtual space VS determined based on coordinates adjacent to the coordinates of the first avatar A1 corresponding to the first user U1.
  • Therefore, according to the VR system S, since the second user U2 recognizes images and sounds of the virtual space VS determined based on coordinates adjacent to the coordinates of the first avatar A1 corresponding to the first user U1, the first user and the second user, who had been experiencing different environments in the virtual space VS, can now experience substantially the same environment.
  • However, the virtual space experience system of the present invention is not limited to such a configuration; any configuration that synchronizes the environment of one of the first user and the second user with the environment of the other may be used.
  • For example, the image and sound recognized by the first user U1 may be determined based on coordinates matching the coordinates of the second avatar A2 or coordinates adjacent to them.
  • That is, the system may be configured so that the environment of the first user U1, who is another user, is synchronized with the environment of the second user U2.
  • Also, for example, the system may be configured so that when the first avatar A1 touches the second ghost G2 corresponding to the second user U2, the image and sound recognized by the second user U2 are determined with the coordinates of the first avatar A1 as their reference, and so that when the second avatar A2 touches the first ghost G1 corresponding to the first user U1, the image and sound recognized by the first user U1 are determined with coordinates matching the coordinates of the second avatar A2, or coordinates adjacent to them, as their reference.
  • Furthermore, for example, it is preferable to configure the system so that when a user touches an avatar corresponding to another user with the right hand of the avatar corresponding to the user, processing that synchronizes the user's own environment with the other user's environment is executed, and so that when the user touches the avatar corresponding to the other user with the avatar's left hand, processing that synchronizes the other user's environment with the user's own environment is executed.
  • Furthermore, a configuration that synchronizes the environments by moving the coordinates as in the first embodiment may be combined with a configuration that synchronizes the environments by changing the reference coordinates as in the present embodiment.
  • In that case, the action serving as the trigger event should differ according to the process to be executed. For example, it may be configured so that when the avatar corresponding to the user touches the hand of the avatar corresponding to another user, the environments are synchronized by moving the coordinates, and so that when it touches the shoulder of the avatar corresponding to the other user, the environments are synchronized by changing the reference coordinates. A sketch of such a dispatch is given below.
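As a rough illustration, the body part touched could select which synchronization process runs; the function bodies are placeholders for the first- and second-embodiment logic, and all names are assumptions:

```python
def move_coordinates(self_avatar, other_avatar):
    """First-embodiment style: actually move the avatar's coordinates."""
    self_avatar.coords = other_avatar.coords

def change_reference_coordinates(self_avatar, other_avatar):
    """Second-embodiment style: only the rendering reference changes."""
    self_avatar.view_reference = other_avatar.coords

def on_avatar_touch(touched_part, self_avatar, other_avatar):
    """Dispatch the synchronization process by the body part touched."""
    if touched_part == "hand":
        move_coordinates(self_avatar, other_avatar)
    elif touched_part == "shoulder":
        change_reference_coordinates(self_avatar, other_avatar)
```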
  • In the embodiments described above, the VR system S, which is a virtual space experience system, is configured so that a first user U1 and a second user U2 who are present together in one room in the real space RS recognize each other, in a single virtual space VS corresponding to that room, through a first avatar A1 corresponding to the first user U1 and a second avatar A2 corresponding to the second user U2.
  • the virtual space experience system of the present invention is not limited to such a configuration, as long as avatars corresponding to multiple users can exist in the virtual space at the same time.
  • the first user and the second user may exist in different areas of the physical space (eg, their own rooms).
  • Also, the virtual space experience system of the present invention is not limited to a configuration in which the processing for synchronizing the environments is executed only when the processing for expanding the virtual space is executed.
  • It is sufficient that each user can recognize that they are experiencing some kind of virtual space. Therefore, for example, the avatars corresponding to the respective users do not have to exist in the same virtual space.
  • the virtual space in which the concierge counter exists may be the virtual space in which the store exists, or may be a virtual space different from the virtual space in which the store exists.
  • In this case, the second avatar moves from the concierge counter to the coordinates of the first avatar corresponding to the first user who caused the first trigger event among the plurality of first users.
  • Alternatively, the viewpoint of one avatar may be configured to move to coordinates that match or are adjacent to the coordinates of the other avatar, as in the second embodiment.
  • That is, the image displayed to the second user corresponding to the second avatar changes from the image of the concierge counter to an image similar to the image displayed to the first user who caused the first trigger event among the plurality of first users.
  • In such a case, the first trigger event may be, for example, an action in which the first avatar presses a call button, which is an object generated in the virtual space, or an action in which the second avatar presses a move button.
  • In the embodiments described above, the action in which the second avatar A2 corresponding to the second user U2 touches the first ghost G1, which corresponds to the first user U1 and is located at coordinates whose correspondence with the coordinates of the first user U1 has been shifted, is taken as the first trigger event.
  • An action in which the first avatar A1 corresponding to the first user U1 touches the second ghost G2, which corresponds to the second user U2 and is located at coordinates shifted from the coordinates of the second user U2, can also be adopted as the first trigger event.
  • However, the first trigger event in the present invention is not limited to such configurations, and may be set as appropriate by a system designer or the like.
  • For example, the first trigger event may be generated by touching an object (e.g., a movement switch, a call switch, etc.) generated in the virtual space.
  • Also, for example, the first trigger event may be generated when the avatar performs a specific action (e.g., a beckoning action).
  • the trigger event is the action of the avatar corresponding to one user touching the ghost corresponding to the other user.
  • However, the first trigger event in the present invention is not limited to such a configuration, and may be any predetermined action performed by the avatar corresponding to one user on the ghost corresponding to the other user.
  • Here, the "predetermined action" may be any action that the avatar performs with the ghost as its reference.
  • Examples include an action in which the avatar contacts the ghost as in the embodiments, an action in which the avatar moves within a predetermined range around the ghost, and an operation in which, when the avatar operates an object existing in the virtual space, the ghost is selected as its target (for example, an operation of photographing the ghost with a camera-type object). A sketch of these variants is given below.
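As a rough illustration of these variants, with both distance thresholds being assumptions:

```python
import math

CONTACT_RADIUS = 0.15  # "contact with the ghost" threshold, assumed
NEAR_RADIUS = 1.0      # "within a predetermined range" threshold, assumed

def is_predetermined_action(avatar_pos, ghost_pos, camera_target=None):
    d = math.dist(avatar_pos, ghost_pos)
    if d <= CONTACT_RADIUS:
        return True                    # the avatar contacts the ghost
    if d <= NEAR_RADIUS:
        return True                    # the avatar moved within range of the ghost
    return camera_target == ghost_pos  # the ghost was selected (e.g. photographed)
```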
  • the ghost generated during the process of expanding the virtual space is used as a key for generating the trigger event.
  • However, the ghost in the present invention is not limited to such a configuration, and may be any avatar that corresponds to a user but does not serve as the reference for the images and sounds recognized by that user.
  • For example, in a system for giving dance lessons in a virtual space, an avatar corresponding to the instructor, an avatar corresponding to a student, and a miniature avatar corresponding to the student that is present at the hands of the instructor's avatar may be generated, and the miniature avatar, which is independent of the student's own avatar, may be adopted as the ghost that the instructor uses to check the student's condition.
  • With such a configuration, the instructor can observe the student's condition from his or her own objective point of view and, when necessary, from the student's subjective point of view.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Provided is a virtual space experience system that enables a plurality of users who have experienced different environments in a virtual space to experience the same environment. When the occurrence of a trigger event is recognized, an avatar coordinate determination unit 34 of a VR system S moves the coordinates of one among a first avatar A1 and a second avatar A2 to coordinates that match or are adjacent to the coordinates of the other among the first avatar A1 and the second avatar A2.

Description

Virtual space experience system

The present invention relates to a virtual space experience system for making a user recognize that he or she exists in a virtual space displayed as images.

Conventionally, there are virtual space experience systems in which a virtual space is generated by a server or the like, the user is made to recognize images of that virtual space through a head-mounted display (hereinafter sometimes referred to as "HMD"), and the user is thereby made to recognize that he or she exists in the virtual space.

Some systems of this type recognize the user's motion in the real space (for example, movement of coordinates, changes of posture including orientation, etc.) with a motion capture device or the like and, in accordance with the recognized motion, move an avatar corresponding to the user in the virtual space (see, for example, Patent Document 1).

In the virtual space experience system described in Patent Document 1, a plurality of avatars are generated in the same virtual space so as to correspond to each of a plurality of users. In this system, the image of the virtual space that each user is made to recognize is determined based on the coordinates of the avatar corresponding to that user. That is, even when users are experiencing the virtual space at the same time, the image of the virtual space being viewed (and hence the environment of the virtual space being experienced) differs for each user.

Patent Document 1: Japanese Unexamined Patent Application Publication No. 2010-157461

When a plurality of users are experiencing the virtual space at the same time and the environment of the virtual space experienced differs for each user, one user may want the other user to also experience the environment that he or she is experiencing. Conversely, one user may want to experience the environment that the other user is experiencing.

The present invention has been made in view of the above points, and an object thereof is to provide a virtual space experience system that enables a plurality of users who have been experiencing different environments in a virtual space to experience the same environment.
 本発明の仮想空間体感システムは、
 第1ユーザ及び第2ユーザが存在する現実空間に対応する仮想空間を生成する仮想空間生成部と、
 前記第1ユーザに対応する第1アバター、及び、前記第2ユーザに対応する第2アバターを、前記仮想空間に生成するアバター生成部と、
 前記現実空間における前記第1ユーザの座標及び前記第2ユーザの座標を認識するユーザ座標認識部と、
 前記第1ユーザの座標に基づいて、前記仮想空間における前記第1アバターの座標を決定し、前記第2ユーザの座標に基づいて、前記仮想空間における前記第2アバターの座標を決定するアバター座標決定部と、
 前記第1アバターの座標及び前記第2アバターの座標に基づいて、前記第1ユーザ及び前記第2ユーザに認識させる前記仮想空間の画像を決定する仮想空間画像決定部と、
 第1トリガーイベントの発生を認識するトリガーイベント認識部と、
 前記第1ユーザ及び前記第2ユーザに、前記仮想空間の画像を認識させる仮想空間画像表示器とを備えている仮想空間体感システムにおいて、
 前記アバター座標決定部は、前記第1トリガーイベントの発生が認識された際に、前記第1アバター及び前記第2アバターの一方の座標を、前記第1アバター及び前記第2アバターの他方の座標に一致する座標又は隣接する座標に移動させることを特徴とする。
The virtual space experience system of the present invention is
a virtual space generation unit that generates a virtual space corresponding to the real space in which the first user and the second user exist;
an avatar generation unit that generates a first avatar corresponding to the first user and a second avatar corresponding to the second user in the virtual space;
a user coordinate recognition unit that recognizes the coordinates of the first user and the coordinates of the second user in the physical space;
Determining the coordinates of the first avatar in the virtual space based on the coordinates of the first user, and determining the coordinates of the second avatar in the virtual space based on the coordinates of the second user. Department and
a virtual space image determination unit that determines an image of the virtual space to be recognized by the first user and the second user based on the coordinates of the first avatar and the coordinates of the second avatar;
a trigger event recognition unit that recognizes the occurrence of the first trigger event;
A virtual space experience system comprising a virtual space image display that allows the first user and the second user to recognize an image of the virtual space,
The avatar coordinate determination unit converts the coordinates of one of the first avatar and the second avatar to the coordinates of the other of the first avatar and the second avatar when occurrence of the first trigger event is recognized. It is characterized by moving to matching coordinates or adjacent coordinates.
 ここで、「仮想空間の画像」には、仮想空間の背景の画像の他、他のアバターの画像、仮想空間にのみ存在するオブジェクトの画像、現実空間に対応して仮想空間に存在するオブジェクトの画像等が含まれる。 Here, the "image of the virtual space" includes images of the background of the virtual space, images of other avatars, images of objects that exist only in the virtual space, and images of objects that exist in the virtual space corresponding to the real space. Images, etc. are included.
 このように、本発明の仮想空間体感システムでは、第1トリガーイベントの発生が認識された際には、アバター座標決定部は、第1アバター及び第2アバターの一方の座標を、第1アバター及び第2アバターの他方の座標に一致する座標又は隣接する座標に移動させている。 As described above, in the virtual space experience system of the present invention, when the occurrence of the first trigger event is recognized, the avatar coordinate determination unit sets the coordinates of one of the first avatar and the second avatar to It is moved to coordinates that match or are adjacent to the coordinates of the other of the second avatar.
 ここで、第1ユーザ及び第2ユーザに認識させる仮想空間の画像は、第1アバター及び第2アバターの座標に基づいて決定される。そのため、アバター同士の座標が一致又は隣接している場合には、それらのアバターに対応する各々のユーザの認識する仮想空間の画像も、一致又は隣接した画像(すなわち、同様の画像)になる。 Here, the images of the virtual space recognized by the first user and the second user are determined based on the coordinates of the first avatar and the second avatar. Therefore, when the coordinates of the avatars match or are adjacent to each other, the images of the virtual space recognized by each user corresponding to those avatars are also matching or adjacent images (that is, similar images).
 したがって、本発明の仮想空間体感システムによれば、第1トリガーイベントの発生が認識された際には、各々のユーザの認識する仮想空間の画像も一致又は隣接した画像にすることができるので、それまで仮想空間で異なる環境を体感していた第1ユーザ及び第2ユーザが、ほぼ同じ環境を体感することができるようになる。 Therefore, according to the virtual space experience system of the present invention, when the occurrence of the first trigger event is recognized, the images of the virtual space recognized by each user can also be matched or adjacent images. The first user and the second user, who have experienced different environments in the virtual space until then, can now experience substantially the same environment.
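As a rough illustration only, a minimal sketch of the claimed coordinate move, assuming a simple tuple-based coordinate model and an illustrative "adjacent" offset:

```python
from dataclasses import dataclass

@dataclass
class Avatar:
    coords: tuple  # (x, y, z) in the virtual space

ADJACENT_OFFSET = (0.0, 0.0, -0.5)  # "adjacent" offset, assumed value

def synchronize_coordinates(moving: Avatar, reference: Avatar, adjacent=True):
    """On the first trigger event, move one avatar to coordinates that match,
    or are adjacent to, the other avatar's coordinates."""
    if adjacent:
        moving.coords = tuple(
            r + o for r, o in zip(reference.coords, ADJACENT_OFFSET)
        )
    else:
        moving.coords = reference.coords  # exactly matching coordinates
```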
Further, the virtual space experience system of the present invention comprises:
a virtual space generation unit that generates a virtual space corresponding to the real space in which a first user and a second user exist;
an avatar generation unit that generates, in the virtual space, a first avatar corresponding to the first user and a second avatar corresponding to the second user;
a user coordinate recognition unit that recognizes the coordinates of the first user and the coordinates of the second user in the real space;
an avatar coordinate determination unit that determines the coordinates of the first avatar in the virtual space based on the coordinates of the first user, and determines the coordinates of the second avatar in the virtual space based on the coordinates of the second user;
a virtual space image determination unit that determines the image of the virtual space to be recognized by the first user and the second user based on the coordinates of the first avatar and the coordinates of the second avatar;
a trigger event recognition unit that recognizes the occurrence of a first trigger event; and
a virtual space image display that makes the first user and the second user recognize the image of the virtual space,
wherein the virtual space image determination unit, when the occurrence of the first trigger event is recognized, makes the image of the virtual space to be recognized by one of the first user and the second user the image that would be obtained if the coordinates of one of the first avatar and the second avatar moved to coordinates that match or are adjacent to the coordinates of the other of the first avatar and the second avatar.

In this configuration, when the occurrence of the first trigger event is recognized, the coordinates serving as the reference for determining the image of the virtual space to be recognized by one user are set, regardless of the position of the coordinates of the avatar corresponding to that user, to coordinates that match or are adjacent to the coordinates of the avatar corresponding to the other user.

As a result, when the occurrence of the first trigger event is recognized, one user, together with the other user, recognizes the image of the virtual space determined based on coordinates that match or are adjacent to the coordinates of the avatar corresponding to the other user.

Therefore, according to this virtual space experience system, after the occurrence of the first trigger event is recognized, one user recognizes the image of the virtual space determined based on coordinates that match or are adjacent to the coordinates of the avatar corresponding to the other user, so the first user and the second user, who had been experiencing different environments in the virtual space, can now experience substantially the same environment.
Further, in the virtual space experience system of the present invention, it is preferable that:
the avatar generation unit generates, in the virtual space, a ghost that corresponds to one of the first user and the second user and is an avatar independent of the first avatar and the second avatar;
the avatar coordinate determination unit determines the coordinates of the ghost based on coordinates obtained by applying a predetermined shift to the coordinates of the first avatar or the second avatar corresponding to that one of the first user and the second user;
the virtual space image determination unit includes, in the image of the virtual space to be recognized by the other of the first user and the second user, an image of the ghost to which information indicating that it corresponds to the one of the first user and the second user has been added; and
the first trigger event is a predetermined action that the first avatar or the second avatar corresponding to the other of the first user and the second user performs on the ghost.

Here, the "information indicating that it corresponds" to a user includes direct information such as a message displayed at all times or at the user's request, as well as indirect information conveyed, for example, by adopting a semi-transparent version of the corresponding avatar's shape as the shape of the ghost.

When avatars corresponding to a user are generated in the virtual space, it is not necessarily the case that only one avatar is generated per user. Thus, in addition to the avatar that corresponds to the user and serves as the reference when determining the images that the user is made to recognize, a ghost, an avatar that also corresponds to the user but is independent of that reference avatar (that is, an avatar that does not serve as that reference), may be generated.

Here, in the virtual space experience system of the present invention, information indicating that the ghost corresponds to a given user is added to the ghost. Therefore, other users can easily grasp which user the ghost corresponds to.

Moreover, if a predetermined action performed by an avatar on that ghost is set as the first trigger event, the user corresponding to the avatar can intuitively understand that, as a result of performing that action, he or she will be able to experience the environment of the user corresponding to the ghost.

Here, the "predetermined action" may be any action that the avatar performs with the ghost as its reference. Examples include an action in which the avatar contacts the ghost, an action in which the avatar moves within a predetermined range around the ghost, and an operation in which, when the avatar operates an object existing in the virtual space, the ghost is selected as its target (for example, an operation of photographing the ghost with a camera-type object).
Further, in the virtual space experience system of the present invention, in a configuration that generates a ghost, it is preferable that:
the trigger event recognition unit recognizes the occurrence of a second trigger event;
the avatar generation unit generates a first ghost, which is the ghost corresponding to the first user, in the virtual space when the occurrence of the second trigger event is recognized;
after the occurrence of the second trigger event is recognized, the avatar coordinate determination unit determines the coordinates of the first avatar based on coordinates obtained by applying the predetermined shift to the coordinates of the first user, and determines the coordinates of the first ghost in the virtual space based on the coordinates of the first user;
after the occurrence of the second trigger event is recognized, the virtual space image determination unit includes, in the image of the virtual space to be recognized by the second user, an image of the first ghost to which information indicating that it corresponds to the first user has been added; and
the first trigger event is a predetermined action that the second avatar performs on the first ghost.

For example, when the first user and the second user exist in the same area of the real space (for example, the same room), if the coordinates of the first avatar are determined based on coordinates obtained by applying a predetermined shift to the coordinates of the first user, the second user may come into contact with the first user corresponding to the first avatar even when the second user moves the second avatar corresponding to himself or herself so as not to contact the first avatar.

To avoid such contact, a technique is sometimes adopted in which the coordinates of the first ghost corresponding to the first user are set to coordinates based on the coordinates of the first user (that is, the coordinates before the shift). As a result, the second avatar (that is, the second user) moves so as to avoid the first ghost, which prevents the second user from coming into contact with the first user.

When such a technique is adopted, it is preferable to adopt, as the first trigger event, an action that the second avatar performs on the first ghost in this way.

With such a configuration, the second user can intuitively understand from that action that the first trigger event, which shares the environment with the first user, will occur. This in turn prevents the second user's sense of immersion in the virtual space from being impaired in order to generate the first trigger event.
Further, in the virtual space experience system of the present invention, in a configuration that generates a ghost, it is preferable that:
the trigger event recognition unit recognizes the occurrence of a second trigger event;
the avatar generation unit generates a second ghost, which is the ghost corresponding to the second user, in the virtual space when the occurrence of the second trigger event is recognized;
after the occurrence of the second trigger event is recognized, the avatar coordinate determination unit determines the coordinates of the first avatar based on coordinates obtained by applying the predetermined shift to the coordinates of the first user, and determines the coordinates of the second ghost in the virtual space based on coordinates obtained by applying the predetermined shift to the coordinates of the second user;
after the occurrence of the second trigger event is recognized, the virtual space image determination unit includes, in the image of the virtual space to be recognized by the first user, an image of the second ghost to which information indicating that it corresponds to the second user has been added; and
the first trigger event is a predetermined action that the first avatar performs on the second ghost.

For example, when the first user and the second user exist in the same area of the real space (for example, the same room), if the coordinates of the first avatar are determined based on coordinates obtained by applying a predetermined shift to the coordinates of the first user, the first user may come into contact with the second user corresponding to the second avatar even when the first user moves the first avatar corresponding to himself or herself so as not to contact the second avatar.

To avoid such contact, a technique is sometimes adopted in which the coordinates of the second ghost corresponding to the second user are set to coordinates obtained by applying to the coordinates of the second avatar a shift similar to that of the first avatar. As a result, the first avatar (that is, the first user) moves so as to avoid the second ghost, which prevents the first user from coming into contact with the second user.

When such a technique is adopted, it is preferable to adopt, as the first trigger event, an action that the first avatar performs on the second ghost in this way.

With such a configuration, the first user can intuitively understand from that action that the first trigger event, which shares the environment with the second user, will occur. This in turn prevents the first user's sense of immersion in the virtual space from being impaired in order to generate the first trigger event.
[Brief Description of the Drawings]
FIG. 1 is a schematic diagram showing the schematic configuration of the VR system according to the first embodiment.
FIG. 2 is a block diagram showing the configuration of the processing units of the VR system in FIG. 1.
FIG. 3 is a flowchart showing the processing executed by the VR system of FIG. 1 in normal use.
FIG. 4 is a schematic diagram showing the states of the real space and the virtual space during normal use of the VR system of FIG. 1.
FIG. 5 is a flowchart showing the processing executed when the VR system of FIG. 1 expands the virtual space and afterwards.
FIG. 6 is a schematic diagram showing the states of the real space and the virtual space when the virtual space is expanded in the VR system of FIG. 1.
FIG. 7 is a schematic diagram showing the states of the real space and the virtual space after the virtual space is expanded in the VR system of FIG. 1.
FIG. 8 is a flowchart showing the processing in the first embodiment executed when the VR system of FIG. 1 synchronizes the environments and afterwards.
FIG. 9 is a schematic diagram showing the states of the real space and the virtual space when the VR system of FIG. 1 synchronizes the environments.
FIG. 10 is a schematic diagram showing the states of the real space and the virtual space after the VR system of FIG. 1 has synchronized the environments.
FIG. 11 is a flowchart showing the processing executed when the VR system of the second embodiment synchronizes the environments.
FIG. 12 is a schematic diagram showing the states of the real space and the virtual space when the VR system of FIG. 11 synchronizes the environments.
FIG. 13 is a schematic diagram showing the states of the real space and the virtual space after the VR system of FIG. 11 has synchronized the environments.
FIG. 14 is a schematic diagram of the image of the virtual space recognized by the second user before the VR system of FIG. 11 synchronizes the environments.
FIG. 15 is a schematic diagram of the image of the virtual space recognized by the second user after the VR system of FIG. 11 has synchronized the environments.
[First Embodiment]
A VR system S, which is a virtual space experience system according to the first embodiment, will be described below with reference to FIGS. 1 to 10.

The VR system S makes a first user U1 and a second user U2 (hereinafter collectively referred to as "users U") who are present together in a predetermined area (for example, one room) of the real space RS recognize that they exist together in one virtual space VS corresponding to that area, through a first avatar A1 corresponding to the first user U1 and a second avatar A2 corresponding to the second user U2 (see FIG. 4, etc.).

In this embodiment and the second embodiment described later, the number of users is set to two to facilitate understanding. However, the virtual space experience system of the present invention is not limited to such a configuration, and the number of users may be three or more.
[Schematic Configuration of the System]
First, the schematic configuration of the VR system S will be described with reference to FIG. 1.
As shown in FIG. 1, the VR system S includes a plurality of markers 1 attached to a user U present in the real space RS, a camera 2 that photographs the user U (strictly speaking, the markers 1 attached to the user U), a server 3 that determines the images and sounds of the virtual space VS (see FIG. 4, etc.), and a head-mounted display (hereinafter referred to as "HMD 4") that makes the user recognize the determined images and sounds.

In the VR system S, the camera 2, the server 3, and the HMD 4 can mutually transmit and receive information wirelessly via the Internet, public lines, short-range wireless communication, or the like. However, any of them may instead be configured to transmit and receive information to and from each other by wire.

The plurality of markers 1 are attached to the head, both hands, and both feet of the user U via the HMD 4, gloves, and shoes worn by the user U. As described later, the markers 1 are used to recognize the coordinates and posture of the user U in the real space RS (and hence the user's motion (for example, movement of coordinates, changes of posture including orientation, etc.)). Therefore, the attachment positions of the markers 1 may be changed as appropriate according to the other devices constituting the VR system S.

The camera 2 is installed so that the range in which the user U can act (that is, the range in which the user U can move, change posture, and so on) in the real space RS where the user U exists can be photographed from multiple directions.

The server 3 recognizes the markers 1 from the images captured by the camera 2 and recognizes the coordinates and posture of the user U based on the positions of the recognized markers 1 in the real space RS. The server 3 also determines the images and sounds to be recognized by the user U based on those coordinates and that posture. A sketch of deriving coordinates from marker positions is given below.
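As a rough illustration only, assuming each marker yields a 3-D position reconstructed from the camera images, the user's coordinates could be taken as the centroid of the detected markers (a simplification of what a motion capture device actually does):

```python
def user_coordinates(marker_positions):
    """Take the user's coordinates as the centroid of the detected markers.
    marker_positions is a list of (x, y, z) tuples reconstructed from the
    camera images; a real system would also derive posture from their layout."""
    n = len(marker_positions)
    return tuple(sum(p[i] for p in marker_positions) / n for i in range(3))

# Example: head, left-hand, and right-hand markers.
print(user_coordinates([(0.0, 0.0, 1.7), (-0.3, 0.2, 1.0), (0.3, 0.2, 1.0)]))
```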
The HMD 4 is worn on the head of the user U. The HMD 4 is provided with a monitor 41 (virtual space image display) for making the user U recognize the image of the virtual space VS determined by the server 3, and a speaker 42 (virtual space sound generator) for making the user U recognize the sound of the virtual space VS determined by the server 3 (see FIG. 2).

When playing a game or the like using the VR system S, the user U recognizes only the images and sounds of the virtual space VS and is made to recognize that the user U himself or herself exists in the virtual space. That is, the VR system S is configured as a so-called immersive system.

Note that the virtual space experience system of the present invention is not limited to such an immersive system. For example, the configuration of the virtual space experience system of the present invention may be applied to a system that superimposes and displays images of the real space and images of the virtual space to make the user recognize an augmented real space (a so-called AR system).

The VR system S includes, as a system for recognizing the coordinates of the user U in the real space RS, a so-called motion capture device composed of the markers 1, the camera 2, and the server 3.

Note that the virtual space experience system of the present invention is not limited to such a configuration. For example, when a motion capture device is used, a device in which the numbers of markers and cameras differ from the above configuration (for example, one of each) may be used instead.

Also, for example, a device that recognizes only the user's coordinates may be used instead of the motion capture device. Specifically, for example, a sensor such as a GPS may be mounted on the HMD, and the user's coordinates, posture, and so on may be recognized based on the output from that sensor. Such a sensor may also be used in combination with a motion capture device as described above.
[Configuration of the Processing Units]
Next, the configuration of the processing units included in the server 3 will be described in detail with reference to FIG. 2.
The server 3 is composed of one or more electronic circuit units including a CPU, RAM, ROM, interface circuits, and the like. As shown in FIG. 2, the server 3 includes, as functions (processing units) realized by the installed hardware configuration or programs, a display image generation unit 31, a user information recognition unit 32, a trigger event recognition unit 33, an avatar coordinate determination unit 34, a virtual space image determination unit 35, and a virtual space audio determination unit 36.

The display image generation unit 31 generates the images to be recognized by the user U via the monitor 41 of the HMD 4. The display image generation unit 31 has a virtual space generation unit 31a, an avatar generation unit 31b, and a moving body generation unit 31c.

The virtual space generation unit 31a generates the image serving as the background of the virtual space VS and the images of the objects existing in the virtual space VS.

The avatar generation unit 31b generates, in the virtual space VS, a first avatar A1 corresponding to the first user U1 and a first ghost G1, which is an avatar independent of the first avatar A1 (see FIG. 6, etc.). The avatar generation unit 31b also generates, corresponding to the second user U2, a second avatar A2 and a second ghost G2, which is an avatar independent of the second avatar A2 (see FIG. 6, etc.).

The first avatar A1 and the second avatar A2 (hereinafter collectively referred to as "avatars A") and the first ghost G1 and the second ghost G2 (hereinafter collectively referred to as "ghosts G") act in the virtual space VS in response to the actions of the corresponding user U in the real space RS (that is, movement of coordinates and changes of posture).

The moving body generation unit 31c generates, in the virtual space VS, a moving body that has no corresponding object in the real space RS and that can be connected to an avatar in the virtual space VS.

Here, a "moving body" may be anything that, when an avatar connects to it, leads the user U to anticipate (whether consciously or unconsciously) a movement of the avatar that differs from the user's actual movement.

For example, in addition to things used for movement in the real space such as elevators, moving bodies include a log floating down a river that can be jumped onto, a floor that seems about to collapse when stood upon, a jumping platform, wings that assist jumping, and the like. Characters, patterns, and the like drawn on the ground or walls of the virtual space also qualify as moving bodies.

"Connection" between an avatar and a moving body here refers to a state in which the user can anticipate that the movement of the moving body, changes in its shape, and so on will affect the coordinates of the avatar.

For example, an avatar getting into an elevator, an avatar riding a log floating down a river, an avatar standing on a floor that is about to collapse, standing on a jumping platform, an avatar putting on wings that assist jumping, and the like all qualify as connection. An avatar touching or approaching characters, patterns, or the like drawn on the ground or walls of the virtual space also qualifies as connection.

Image data of the user U, including the markers 1, captured by the camera 2 is input to the user information recognition unit 32. The user information recognition unit 32 has a user posture recognition unit 32a and a user coordinate recognition unit 32b.

The user posture recognition unit 32a extracts the markers 1 from the input image data of the user U and recognizes the posture of the user U based on the extraction result.

The user coordinate recognition unit 32b extracts the markers 1 from the input image data of the user U and recognizes the coordinates of the user U based on the extraction result.

The trigger event recognition unit 33 recognizes that a predetermined trigger event has occurred, or that a trigger event has been released, when conditions predetermined by the system designer are satisfied.

Here, a trigger event may be one whose occurrence the user is not aware of. Therefore, trigger events include those caused by the user's actions, for example the user performing a predetermined action in the real space (that is, the avatar corresponding to the user performing a predetermined action in the virtual space), as well as those not caused by the user's actions, such as the passage of a predetermined period of time.

The avatar coordinate determination unit 34 determines the coordinates in the virtual space VS of the avatar corresponding to the user U based on the coordinates of the user U in the real space RS recognized by the user coordinate recognition unit 32b.

When the trigger event recognition unit 33 recognizes the occurrence of a predetermined trigger event, the avatar coordinate determination unit 34 moves the coordinates of the avatar corresponding to the user, regardless of the user's coordinates, for a predetermined period, within a predetermined range, or both.

Also, when the trigger event recognition unit 33 recognizes the occurrence of a predetermined trigger event, the avatar coordinate determination unit 34 determines the coordinates of the ghost G, for a predetermined period, within a predetermined range, or both, based on coordinates obtained by applying a predetermined shift to the coordinates of the avatar A corresponding to the user U. A sketch of such a shift is given below.
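As a rough illustration only, the ghost's coordinates could be derived by applying a fixed shift to the corresponding avatar's coordinates; the offset value is an assumption:

```python
GHOST_OFFSET = (2.0, 0.0, 0.0)  # the "predetermined shift", assumed value

def ghost_coords(avatar_coords, offset=GHOST_OFFSET):
    """Derive the ghost G's coordinates by applying a fixed shift to the
    coordinates of the corresponding avatar A."""
    return tuple(a + o for a, o in zip(avatar_coords, offset))

# Example: an avatar at (1, 0, 0) yields a ghost at (3, 0, 0).
print(ghost_coords((1.0, 0.0, 0.0)))
```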
 仮想空間画像決定部35は、アバターの座標に基づいて、HMD4のモニタ41を介して、そのアバターに対応するユーザUに認識させる仮想空間の画像を決定する。 Based on the coordinates of the avatar, the virtual space image determination unit 35 determines the image of the virtual space to be recognized by the user U corresponding to the avatar via the monitor 41 of the HMD 4.
 ここで、「仮想空間の画像」には、仮想空間の背景の画像の他、他のアバターの画像、ゴーストの画像、仮想空間にのみ存在するオブジェクトの画像、現実空間に対応して仮想空間に存在するオブジェクトの画像等が含まれる。 Here, in addition to the image of the background of the virtual space, the "image of the virtual space" includes images of other avatars, images of ghosts, images of objects that exist only in the virtual space, and images in the virtual space corresponding to the real space. Images of existing objects, etc. are included.
In the present embodiment, information indicating that the ghost G corresponds to the user U is added to the image of the ghost G.
Here, the "information indicating the correspondence" with a user includes direct information, such as a message displayed at all times or in response to a user's request, as well as indirect information conveyed by, for example, adopting a translucent version of the shape of the corresponding avatar as the shape of the ghost.
Based on the coordinates of an avatar, the virtual space audio determination unit 36 determines the audio to be recognized, via the speaker 42 of the HMD 4, by the user U corresponding to that avatar.
Note that the processing units constituting the virtual space experience system of the present invention are not limited to the configurations described above.
For example, some of the processing units provided in the server 3 in the above embodiment may be provided in the HMD 4. The system may also be configured with a plurality of servers, or the servers may be omitted and the CPUs mounted on the HMDs may cooperate with one another. Speakers other than those mounted on the HMDs may also be provided. In addition to devices that affect sight and hearing, the system may include devices that affect smell and touch, for example by generating scents or wind according to the virtual space.
[Processing to be executed]
Next, with reference to FIGS. 2 to 10, the processing executed by the VR system S when the user U is made to experience the virtual space VS will be described.
[Processing in the normal use state]
First, with reference to FIGS. 2, 3, and 4, the processing executed by the VR system S in the normal use state (that is, in a state in which the occurrence of a trigger event, described later, has not been recognized) will be described.
In this processing, first, the display image generation unit 31 of the server 3 generates the virtual space VS, the first avatar A1 and the second avatar A2, and a moving object (FIG. 3/STEP 100).
Specifically, the virtual space generation unit 31a of the display image generation unit 31 generates the virtual space VS and the various objects existing in the virtual space VS. The avatar generation unit 31b of the display image generation unit 31 generates the first avatar A1 corresponding to the first user U1 and the second avatar A2 corresponding to the second user U2. Further, the moving object generation unit 31c of the display image generation unit 31 generates a moving object such as an elevator VS1, which will be described later.
As shown in FIG. 4, in the virtual space VS generated by the processing of STEP 100, in addition to the first avatar A1, the second avatar A2, and the elevator VS1 serving as a moving object, objects related to trigger events are placed, such as a switch VS2 generated at a position corresponding to a whiteboard RS1 installed in the real space RS.
Next, the avatar coordinate determination unit 34 of the server 3 determines the coordinates and posture of the first avatar A1 in the virtual space VS based on the coordinates and posture of the first user U1 in the real space RS, and determines the coordinates and posture of the second avatar A2 in the virtual space VS based on the coordinates and posture of the second user U2 in the real space RS (FIG. 3/STEP 101).
The coordinates and postures of the first user U1 and the second user U2 used in the processing from STEP 101 onward are those recognized by the user information recognition unit 32 of the server 3 based on the image data captured by the camera 2.
Next, the virtual space image determination unit 35 and the virtual space audio determination unit 36 of the server 3 determine the image and audio to be recognized by the first user U1 based on the coordinates and posture of the first avatar A1 in the virtual space VS, and determine the image and audio to be recognized by the second user U2 based on the coordinates and posture of the second avatar A2 in the virtual space VS (FIG. 3/STEP 102).
Next, the HMD 4 worn by each user U displays the determined image on the monitor 41 mounted on the HMD 4 and generates the determined audio from the speaker 42 mounted on the HMD 4 (FIG. 3/STEP 103).
Next, the user information recognition unit 32 of the server 3 determines whether a movement of the coordinates or a change in the posture of the first user U1 or the second user U2 in the real space RS has been recognized (FIG. 3/STEP 104).
If a movement of the coordinates or a change in the posture of the first user U1 or the second user U2 in the real space RS has been recognized (YES in STEP 104), the processing returns to STEP 101, and the processing from STEP 101 onward is executed again.
On the other hand, if no movement of the coordinates or change in the posture of the first user U1 or the second user U2 in the real space RS has been recognized (NO in STEP 104), the server 3 determines whether it has recognized a signal instructing the end of the processing (FIG. 3/STEP 105).
If the signal instructing the end has not been recognized (NO in STEP 105), the processing returns to STEP 104, and the processing from STEP 104 onward is executed again.
On the other hand, if the signal instructing the end has been recognized (YES in STEP 105), the VR system S ends the current processing.
Through the above processing, the first avatar A1 corresponding to the first user U1, the second avatar A2 corresponding to the second user U2, and a plurality of objects, including the elevator VS1 serving as a moving object and the switch VS2 related to the occurrence of a trigger event described later, are placed in the virtual space VS.
The first user U1 and the second user U2 then come to recognize, through the images displayed and the audio generated by the HMDs 4 they wear, that they themselves exist in the virtual space VS via the corresponding first avatar A1 and second avatar A2 and can move about freely.
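The loop of STEP 101 to STEP 105 could be organized as below. This is a sketch against an assumed server/HMD interface; recognize_user, set_avatar_pose, decide_output and the rest are hypothetical names, not taken from this disclosure.

```python
def run_normal_use(server, hmds, users):
    """Per-cycle update for STEP 101 to STEP 105 (STEP 100, world
    generation, is assumed to have been executed already)."""
    while True:
        for u in users:
            # STEP 101: map each user's real-space pose onto their avatar.
            coords, posture = server.recognize_user(u)
            server.set_avatar_pose(u, coords, posture)
        for u in users:
            # STEP 102/103: decide image and audio from the avatar's pose
            # and present them on that user's HMD.
            image, audio = server.decide_output(u)
            hmds[u].show(image)
            hmds[u].play(audio)
        # STEP 104/105: wait until some user moves, or end on request.
        while not server.any_user_moved():
            if server.end_requested():
                return
```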
[Processing when expanding the virtual space and thereafter]
Next, with reference to FIGS. 2 and 4 to 7, the processing executed by the VR system S when the virtual space VS is expanded so as to become wider than the real space RS corresponding to it, and thereafter, will be described.
Here, prior to describing the specific contents of the processing, an outline will be given of the processing for expanding the virtual space VS so as to be wider than the real space RS corresponding to it, and of the accompanying processing for avoiding contact.
When a predetermined trigger event (the second trigger event in the present invention) occurs, the VR system S is configured to perform processing that causes a deviation in the correspondence between the coordinates of the first user U1 and the coordinates of the first avatar A1, or between the coordinates of the second user U2 and the coordinates of the second avatar A2.
In the present embodiment, for example, the trigger event is set as follows: while the first avatar A1 is riding on (connected to) the elevator VS1, which is a moving object, as shown in FIG. 4, the second avatar A2 presses the switch VS2, as shown in FIG. 6.
When the occurrence of that trigger event is recognized, as shown in FIGS. 4 and 6, the coordinates of the first avatar A1 corresponding to the first user U1 move upward regardless of any movement of the coordinates of the first user U1. Specifically, the first avatar A1 is carried by the elevator VS1 from the first floor F1 defined in the virtual space VS to the second floor F2.
As a result, in the VR system S, the first user U1, whose corresponding first avatar A1 has been moved, and the second user U2, who recognizes the first avatar A1, can be made to recognize a virtual space VS that is expanded in the vertical direction beyond the real space RS.
Here, if such a deviation is caused, the first user U1 and the second user U2 may no longer be able to properly grasp their mutual positional relationship in the real space RS. As a result, the first user U1 and the second user U2 could come into contact with each other unintentionally.
For example, in the state shown in FIG. 6, the second user U2 may misunderstand that, like the first avatar A1, the first user U1 has also moved upward in the real space RS. Then, as shown in FIG. 7, the second user U2 may misunderstand that the second avatar A2 corresponding to him or her can move underneath the elevator VS1 in the virtual space VS, and may try to move so as to bring the second avatar A2 underneath the elevator VS1.
In the real space RS, however, the first user U1 corresponding to the first avatar A1 actually exists at the same height as the second user U2. In that case, therefore, the first user U1 and the second user U2 would come into contact in the real space RS. As a result, the sense of immersion of the first user U1 and the second user U2 in the virtual space VS could be impaired.
Therefore, in the VR system S, when the occurrence of the predetermined trigger event is recognized, unintended contact between the first user U1 and the second user U2 is prevented by executing the processing for expanding the virtual space VS so as to be wider than the real space RS corresponding to it, as described below, together with the processing for contact avoidance.
In this processing, first, the trigger event recognition unit 33 of the server 3 determines whether it has recognized the occurrence of the trigger event (FIG. 5/STEP 200).
Specifically, as shown in FIG. 4, the trigger event recognition unit 33 first determines whether the coordinates of the first user U1 have moved in the real space RS such that, in the virtual space VS, the coordinates of the first avatar A1 corresponding to the first user U1 ride on the elevator VS1, which is a moving object.
Thereafter, the trigger event recognition unit 33 further determines whether the second user U2 has moved and assumed a predetermined posture in the real space RS such that, in the virtual space VS, the second avatar A2 corresponding to the second user U2 assumes a posture of touching the switch VS2 at a position near the switch VS2 (see FIG. 6).
Then, when the second avatar A2 assumes the posture of touching the switch VS2 while the first avatar A1 is on the elevator VS1, the trigger event recognition unit 33 recognizes that the trigger event has occurred.
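As a sketch, the compound condition of STEP 200 is simply the conjunction of two tests; is_on and is_touching below are hypothetical helpers standing in for the overlap and posture checks the system would actually perform.

```python
def second_trigger_event_occurred(avatar1, avatar2, elevator, switch,
                                  is_on, is_touching):
    """True when A1 is riding the elevator and A2 is touching the switch;
    evaluated once per control cycle (STEP 200)."""
    return is_on(avatar1, elevator) and is_touching(avatar2, switch)
```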
If the occurrence of the trigger event has not been recognized (NO in STEP 200), the processing returns to STEP 200, and the determination is executed again. This processing is repeated at a predetermined control cycle.
On the other hand, if the occurrence of the trigger event has been recognized (YES in STEP 200), the avatar coordinate determination unit 34 of the server 3 moves the coordinates of the first avatar A1 (FIG. 5/STEP 201).
Specifically, the avatar coordinate determination unit 34 moves the coordinates of the first avatar A1, which are based on the coordinates of the first user U1, according to the content of the deviation (that is, the correction direction and correction amount) predetermined for the type of the recognized trigger event. This movement is performed independently of any movement of the coordinates of the first avatar A1 based on movement of the coordinates of the first user U1. As a result, a deviation arises in the correspondence between the coordinates of the first user U1 and the coordinates of the first avatar A1.
In the present embodiment, the first avatar A1 moves upward, integrally with the elevator VS1, from the first floor F1 to the second floor F2, so that the state shown in FIG. 4 changes to the state shown in FIG. 6.
Next, the display image generation unit 31 of the server 3 generates a first ghost G1 and a second ghost G2 (FIG. 5/STEP 202).
Specifically, with the occurrence of the trigger event as a trigger, the avatar generation unit 31b of the display image generation unit 31 generates, in the virtual space VS, the first ghost G1 corresponding to the first user U1 and the second ghost G2 corresponding to the second user U2.
In the present embodiment, as shown in FIG. 6, the first ghost G1 is configured as a translucent avatar having the same shape as the first avatar A1 in order to indicate that it corresponds to the first user U1, and a first information board G1a, described later, is added to it. Similarly, the second ghost G2 is configured as a translucent avatar having the same shape as the second avatar A2 in order to indicate that it corresponds to the second user U2, and a second information board G2a, described later, is added to it.
Also, in the present embodiment, the coordinates at which the first ghost G1 is generated are set so as to match the coordinates of the first avatar A1 immediately before the processing of STEP 201 is executed, while the coordinates at which the second ghost G2 is generated are set so as to be independent of the coordinates of the second user U2.
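A sketch of the ghost placement in STEP 202, using plain (x, y, z) tuples; floor_offset stands in for the predetermined deviation, here the height of the second floor F2 above the first floor F1. This is one reading of the placement described above, not a definitive implementation.

```python
def spawn_ghosts(a1_before_shift, a2_coords, floor_offset):
    """G1 appears where A1 stood just before STEP 201 moved it; G2 appears
    at A2's current coordinates shifted by the same deviation, so it mirrors
    A2 on the second floor independently of where U2 actually stands."""
    add = lambda p, q: tuple(a + b for a, b in zip(p, q))
    return a1_before_shift, add(a2_coords, floor_offset)

print(spawn_ghosts((1, 0, 0), (4, 0, 0), (0, 0, 3)))  # -> G1, G2
```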
Next, the avatar coordinate determination unit 34 of the server 3 determines the coordinates and postures of the first avatar A1 and the first ghost G1 in the virtual space VS based on the coordinates and posture of the first user U1 in the real space RS recognized by the user information recognition unit 32 (FIG. 5/STEP 203).
Specifically, the avatar coordinate determination unit 34 corrects the coordinates and posture of the first avatar A1 as determined by the processing used before the occurrence of the trigger event was recognized (that is, the processing before STEP 201 was executed), according to the content of the deviation (that is, the correction direction and correction amount) predetermined for the type of the recognized trigger event, and thereby determines the coordinates and posture of the first avatar A1 from the point at which the occurrence of the trigger event was recognized onward.
In the present embodiment, the avatar coordinate determination unit 34 determines, as the coordinates of the first avatar A1 after the occurrence of the trigger event was recognized, the coordinates of the first avatar A1 based on the coordinates of the first user U1 moved upward by the height of the second floor F2 relative to the first floor F1. The avatar coordinate determination unit 34 also adopts the posture of the first avatar A1 based on the posture of the first user U1, as it is, as the posture of the first avatar A1 after the occurrence of the trigger event was recognized.
The avatar coordinate determination unit 34 also determines the coordinates and posture of the first ghost G1 based on the coordinates and posture of the first user U1, using the same processing as that by which the coordinates and posture of the first avatar A1 were determined before the occurrence of the trigger event was recognized.
Next, the virtual space image determination unit 35 and the virtual space audio determination unit 36 of the server 3 determine the image and audio to be recognized by the first user U1 based on the coordinates and posture of the first avatar A1 in the virtual space VS (FIG. 5/STEP 204).
Next, the avatar coordinate determination unit 34 of the server 3 determines the coordinates and postures of the second avatar A2 and the second ghost G2 in the virtual space VS based on the coordinates and posture of the second user U2 in the real space RS recognized by the user information recognition unit 32 (FIG. 5/STEP 205).
Specifically, the avatar coordinate determination unit 34 determines the coordinates and posture of the second avatar A2 based on the coordinates and posture of the second user U2, using the same processing as that by which the coordinates and posture of the second avatar A2 have been determined so far.
The avatar coordinate determination unit 34 also corrects the coordinates and posture of the second avatar A2 as determined by the processing used before the occurrence of the trigger event was recognized, according to the content of the deviation (that is, the correction direction and correction amount) predetermined for the type of the recognized trigger event, and thereby determines the coordinates and posture of the second ghost G2.
In the present embodiment, the avatar coordinate determination unit 34 determines, as the coordinates of the second ghost G2 after the occurrence of the trigger event was recognized, the coordinates of the second avatar A2 based on the coordinates of the second user U2 moved upward by the height of the second floor F2 relative to the first floor F1.
The avatar coordinate determination unit 34 also adopts the posture of the second avatar A2 based on the posture of the second user U2, as it is, as the posture of the second ghost G2 after the occurrence of the trigger event was recognized.
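Taken together, STEP 203 to STEP 205 apply the deviation asymmetrically: the shifted user's avatar is corrected while that user's ghost is not, and the reverse holds for the other user. A sketch with illustrative names:

```python
def determine_coords(u1, u2, deviation):
    """Returns (A1, G1, A2, G2) coordinates, as (x, y, z) tuples, after the
    second trigger event; postures (not shown) pass through unchanged."""
    add = lambda p, q: tuple(a + b for a, b in zip(p, q))
    a1 = add(u1, deviation)  # STEP 203: A1 lifted to the second floor
    g1 = u1                  #           G1 stays where U1 really stands
    a2 = u2                  # STEP 205: A2 keeps tracking U2 directly
    g2 = add(u2, deviation)  #           G2 mirrors A2 on the second floor
    return a1, g1, a2, g2
```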
Next, the virtual space image determination unit 35 and the virtual space audio determination unit 36 of the server 3 determine the image and audio to be recognized by the second user U2 based on the coordinates and posture of the second avatar A2 in the virtual space VS (FIG. 5/STEP 206).
Next, the HMD 4 worn by each user U displays the determined image on the monitor 41 and generates the audio from the speaker 42 (FIG. 5/STEP 207).
Next, the user information recognition unit 32 of the server 3 determines whether a movement of the coordinates or a change in the posture of the first user U1 or the second user U2 in the real space RS has been recognized (FIG. 5/STEP 208).
If a movement of the coordinates or a change in the posture of the first user U1 or the second user U2 in the real space RS has been recognized (YES in STEP 208), the processing returns to STEP 203, and the processing from STEP 203 onward is executed again.
On the other hand, if no movement of the coordinates or change in the posture of the first user U1 or the second user U2 in the real space RS has been recognized (NO in STEP 208), the server 3 determines whether it has recognized a signal instructing the end of the processing (FIG. 5/STEP 209).
If the signal instructing the end has not been recognized (NO in STEP 209), the processing returns to STEP 208, and the processing from STEP 208 onward is executed again.
On the other hand, if the signal instructing the end has been recognized (YES in STEP 209), the VR system S ends the current processing.
As described above, in the VR system S, when a trigger event occurs and a deviation arises in the correspondence between the coordinates of the first user U1 and the coordinates of the first avatar A1 corresponding to the first user U1, the first ghost G1 is generated at coordinates corresponding to the coordinates of the first user U1, and the second ghost G2 is generated at coordinates obtained by correcting the coordinates corresponding to the coordinates of the second user U2 so as to reflect that deviation.
The image and audio of the virtual space VS recognized by the first user U1 then come to include the image and audio of the second ghost G2, and the image and audio of the virtual space VS recognized by the second user U2 come to include the image and audio of the first ghost G1.
Therefore, after the occurrence of the trigger event has been recognized, when the first user U1 performs some action, the first user U1 naturally acts so as to avoid contact between the first avatar A1, which corresponds to his or her corrected coordinates, and the second ghost G2, which corresponds to the corrected coordinates of the second user U2. Likewise, when the second user U2 performs some action, the second user U2 naturally acts so as to avoid contact between the second avatar A2, which corresponds to his or her own coordinates, and the first ghost G1, which corresponds to the coordinates of the first user U1.
This makes it possible to prevent contact between the first user U1 and the second user U2 in the real space RS.
In the present embodiment, the first ghost G1 is configured as a translucent avatar having the same shape as the first avatar A1, and the second ghost G2 as a translucent avatar having the same shape as the second avatar A2. This is to indicate that the first ghost G1 corresponds to the first user U1 and that the second ghost G2 corresponds to the second user U2.
Also, in the present embodiment, the first information board G1a is added to the first ghost G1, and the second information board G2a is added to the second ghost G2. This is so that, when a user U causes the trigger event (the first trigger event in the present invention) in the processing for synchronizing environments described later, the user U can intuitively understand the processing that will be executed when the occurrence of that trigger event is recognized.
However, the method of displaying information indicating which user an avatar corresponds to, and the method of displaying information related to a ghost, in the virtual space experience system of the present invention are not limited to such configurations and may be set as appropriate.
For example, in the present embodiment, the shapes of the first ghost G1 and the second ghost G2 may simply be matched to the shapes of the first avatar A1 and the second avatar A2, with the information boards omitted. Conversely, only the information boards may be displayed, without displaying shapes corresponding to those of the first avatar A1 and the second avatar A2.
[Processing when synchronizing environments and thereafter]
Next, with reference to FIGS. 2 and 8 to 10, the processing executed when synchronizing environments and thereafter will be described.
Here, prior to describing the specific contents of the processing, an outline of the processing for synchronizing environments in the present embodiment will be given.
When the VR system S recognizes the occurrence of a predetermined trigger event (the first trigger event in the present invention), it is configured to perform processing that synchronizes the environment of one of the first user U1 and the second user U2 with the environment of the other of the first user U1 and the second user U2.
Here, the "environment" refers to at least one of the image and the audio that the user corresponding to an avatar located at certain coordinates in the virtual space recognizes based on those coordinates. "Synchronizing environments" means that a plurality of users corresponding to a plurality of avatars come to recognize similar environments.
In the present embodiment, for example, as shown in FIG. 9, the second avatar A2 touching the first ghost G1 corresponding to the first user U1 is set as that trigger event.
When the occurrence of that trigger event is recognized, as shown in FIG. 10, the coordinates of the second avatar A2 corresponding to the second user U2 move to coordinates adjacent to the coordinates of the first avatar A1, regardless of any movement of the coordinates of the second user U2. Specifically, the second avatar A2 moves from the first floor F1 to coordinates adjacent to the first avatar A1 on the second floor F2.
As a result, in the VR system S, from the point at which the occurrence of that trigger event is recognized onward, the environment of the first user U1 and the environment of the second user U2 are synchronized, and the first user U1 and the second user U2 can experience similar environments.
The following describes the processing performed when the first user U1 corresponding to the first avatar A1 has told the second user U2, via a message function or the like, that he or she wants to view, together with the second user U2, a rainbow VS3 that exists in the sky above the second floor F2 of the virtual space VS, and the second user U2 attempts to synchronize his or her environment with that of the first user U1 (that is, attempts to view the rainbow VS3 as well).
In this processing, first, the trigger event recognition unit 33 of the server 3 determines whether it has recognized the occurrence of the trigger event (FIG. 8/STEP 300).
Specifically, as shown in FIG. 9, the trigger event recognition unit 33 first determines whether, in the virtual space VS, the second avatar A2 corresponding to the second user U2 has assumed a posture of touching the first ghost G1 corresponding to the first user U1. When the second avatar A2 assumes that touching posture, the trigger event recognition unit 33 recognizes that the trigger event has occurred.
If the occurrence of the trigger event has not been recognized (NO in STEP 300), the processing returns to STEP 300, and the determination is executed again. This processing is repeated at a predetermined control cycle.
On the other hand, if the occurrence of the trigger event has been recognized (YES in STEP 300), the avatar coordinate determination unit 34 of the server 3 moves the coordinates of the second avatar A2 to coordinates adjacent to the coordinates of the first avatar A1 (FIG. 8/STEP 301).
Specifically, the avatar coordinate determination unit 34 refers not only to the coordinates of the first avatar A1 but also to its posture, recognizes coordinates located to the side of, in front of, or behind the first avatar A1 (that is, adjacent coordinates), and moves the second avatar A2 to those coordinates.
This movement is performed independently of any movement of the coordinates of the second avatar A2 based on movement of the coordinates of the second user U2. Note that this causes a deviation in the correspondence between the coordinates of the second user U2 and the coordinates of the second avatar A2.
In the present embodiment, the second avatar A2, located on the first floor F1 as shown in FIG. 9, moves instantaneously so as to be located on the second floor F2, immediately to the left of the first avatar A1, as shown in FIG. 10.
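The adjacency computation of STEP 301 might look as follows: an offset to A1's side (here the left, matching FIG. 10) is rotated by A1's heading into world coordinates. Representing the posture as a yaw angle is an assumption made for this sketch.

```python
import math

def adjacent_coords(a1_xy, a1_yaw_rad, side_offset=1.0):
    """Coordinates one step to the left of avatar A1, taking its facing
    direction (yaw, in radians) into account."""
    angle = a1_yaw_rad + math.pi / 2.0  # left is +90 degrees from heading
    return (a1_xy[0] + side_offset * math.cos(angle),
            a1_xy[1] + side_offset * math.sin(angle))

print(adjacent_coords((5.0, 2.0), 0.0))  # one unit to the left of A1
```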
The manner of the movement (for example, the time the movement takes, the route taken during the movement, and so on) may be set as appropriate. For example, when a third user different from the first user U1 and the second user U2 is present, setting the movement speed of the second avatar A2 to a somewhat slow speed, so that the movement route of the second avatar A2 remains recognizable, allows the third user to recognize the movement of the coordinates that occurs when the occurrence of the trigger event is recognized.
Next, the display image generation unit 31 of the server 3 erases the first ghost G1 and the second ghost G2 (FIG. 8/STEP 302).
In the present embodiment, the movement of the second avatar A2 in STEP 301 eliminates the deviation in the correspondence between the coordinates of the first avatar A1 and the coordinates of the second avatar A2 that was caused by the processing for expanding the virtual space described above. That is, after the movement, the correspondence between the coordinates of the first avatar A1 and the coordinates of the second avatar A2 returns to the correspondence before the deviation arose (that is, one matching the correspondence between the coordinates of the first user U1 and the coordinates of the second user U2).
Therefore, after the processing in STEP 302 has been executed, unintended contact is suppressed even if the first ghost G1 and the second ghost G2 do not exist. Rather, if the first ghost G1 and the second ghost G2 remained present, they could conversely cause unintended contact. For this reason, in the present embodiment, the processing for erasing the first ghost G1 and the second ghost G2, which have become unnecessary, is executed.
Note that if, even after the execution of the processing in STEP 302, the correspondence between the coordinates of the first avatar A1 and the coordinates of the second avatar A2 does not return to one matching the correspondence between the coordinates of the first user U1 and the coordinates of the second user U2, at least one of the first ghost G1 and the second ghost G2 may be kept in existence as necessary.
Next, the avatar coordinate determination unit 34 of the server 3 determines the coordinates and postures of the first avatar A1 and the first ghost G1, as well as the second avatar A2 and the second ghost G2, in the virtual space VS based on the coordinates and postures of the first user U1 and the second user U2 in the real space RS recognized by the user information recognition unit 32 (FIG. 8/STEP 303).
Specifically, the avatar coordinate determination unit 34 continues to determine the coordinates and posture of the first avatar A1 by correcting the coordinates and posture determined based on the coordinates and posture of the first user U1, according to the content of the deviation (that is, the correction direction and correction amount) that arose in the processing for expanding the virtual space VS described above.
The avatar coordinate determination unit 34 also determines the coordinates and posture of the second avatar A2 by correcting the coordinates and posture determined based on the coordinates and posture of the second user U2, according to the content of the deviation (that is, the correction direction and correction amount) predetermined for the type of the recognized trigger event.
In the present embodiment, the avatar coordinate determination unit 34 determines, as the coordinates of the second avatar A2 from the point at which the occurrence of the trigger event was recognized onward, the coordinates of the second avatar A2 based on the coordinates of the second user U2 moved upward by the height of the second floor F2 relative to the first floor F1 and moved horizontally by the displacement from the coordinates of the second avatar A2 at the point at which the occurrence of the trigger event was recognized to the coordinates immediately to the left of the first avatar A1.
The avatar coordinate determination unit 34 also adopts the posture of the second avatar A2 based on the posture of the second user U2, as it is, as the posture of the second avatar A2 after the occurrence of the trigger event was recognized.
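In other words, from the trigger event onward the correction applied to U2's coordinates is the composition of the vertical floor offset and the horizontal jump recorded at the moment the event fired; a short sketch, with tuple coordinates and illustrative names:

```python
def second_avatar_coords(u2, floor_offset, horizontal_jump):
    """A2 = U2's coordinates plus the floor height plus the horizontal
    displacement fixed at trigger time; U2's subsequent walking still
    moves A2 one-to-one."""
    return tuple(a + b + c
                 for a, b, c in zip(u2, floor_offset, horizontal_jump))
```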
Next, the virtual space image determination unit 35 and the virtual space audio determination unit 36 of the server 3 determine the image and audio to be recognized by the first user U1 based on the coordinates and posture of the first avatar A1 in the virtual space VS, and determine the image and audio to be recognized by the second user U2 based on the coordinates and posture of the second avatar A2 in the virtual space VS (FIG. 8/STEP 304).
Next, the HMD 4 worn by each user U displays the determined image on the monitor 41 and generates the audio from the speaker 42 (FIG. 8/STEP 305).
Next, the user information recognition unit 32 of the server 3 determines whether a movement of the coordinates or a change in the posture of the first user U1 or the second user U2 in the real space RS has been recognized (FIG. 8/STEP 306).
If a movement of the coordinates or a change in the posture of the first user U1 or the second user U2 in the real space RS has been recognized (YES in STEP 306), the processing returns to STEP 303, and the processing from STEP 303 onward is executed again.
On the other hand, if no movement of the coordinates or change in the posture of the first user U1 or the second user U2 in the real space RS has been recognized (NO in STEP 306), the server 3 determines whether it has recognized a signal instructing the end of the processing (FIG. 8/STEP 307).
If the signal instructing the end has not been recognized (NO in STEP 307), the processing returns to STEP 306, and the processing from STEP 306 onward is executed again.
On the other hand, if the signal instructing the end has been recognized (YES in STEP 307), the VR system S ends the current processing.
As described above, in the VR system S, when the occurrence of the trigger event is recognized, the avatar coordinate determination unit 34 moves the coordinates of the second avatar A2 to coordinates adjacent to the coordinates of the first avatar A1.
Here, the images and audio of the virtual space VS to be recognized by the first user U1 and the second user U2 are determined based on the coordinates of the first avatar A1 and the second avatar A2. Therefore, when the coordinates of the avatars match or are adjacent, the images and audio of the virtual space VS recognized by the respective users corresponding to those avatars also become matching or similar images and audio (that is, comparable images and audio).
Therefore, according to the VR system S, when the occurrence of the trigger event is recognized, the images and audio of the virtual space recognized by the respective users can also be made matching or closely similar, so that the first user U1 and the second user U2, who until then had been experiencing different environments in the virtual space, can experience substantially the same environment.
In the present embodiment, the case has been described in which the second user U2 synchronizes his or her environment with that of the first user U1. However, the virtual space experience system of the present invention is not limited to such a configuration; it suffices that, when the occurrence of the trigger event is recognized, the environment of one of the first user and the second user is synchronized with the environment of the other of the first user and the second user.
Therefore, for example, in the present embodiment, the system may also be configured such that, when the first avatar A1 touches the second ghost G2 corresponding to the second user U2, the coordinates of the second avatar A2 are moved to coordinates adjacent to the coordinates of the first avatar A1. That is, instead of the second avatar A2 itself moving to coordinates matching or adjacent to the coordinates of the first avatar A1, the first avatar A1 may summon the coordinates of the second avatar A2 to coordinates matching or adjacent to its own coordinates.
Also, for example, in the present embodiment, the system may be configured such that, when the second avatar A2 touches the first ghost G1 corresponding to the first user U1, the first avatar A1 itself moves to coordinates matching or adjacent to the coordinates of the second avatar A2, or such that the first avatar A1 summons the coordinates of the second avatar A2 to coordinates matching or adjacent to its own coordinates.
Note that when a configuration in which one's own coordinates move to coordinates matching or adjacent to those of another user and a configuration in which another user's coordinates are summoned to coordinates matching or adjacent to one's own are implemented at the same time, the action serving as the trigger event may be made different according to the processing to be performed.
Specifically, for example, the system may be configured such that, when the avatar corresponding to oneself touches the avatar corresponding to another user with its right hand, the processing of moving one's own coordinates to coordinates matching or adjacent to the coordinates of that other user is executed, and when the avatar corresponding to oneself touches the avatar corresponding to another user with its left hand, the processing of summoning the coordinates of that other user to coordinates matching or adjacent to one's own coordinates is executed.
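Such a dispatch could be sketched as below; on_touch and move_adjacent are illustrative names standing in for the two processing variants just described.

```python
def on_touch(hand, my_avatar, other_avatar, move_adjacent):
    """Right hand: move my own coordinates next to the other user's avatar.
    Left hand: summon the other user's avatar next to mine."""
    if hand == "right":
        move_adjacent(mover=my_avatar, target=other_avatar)   # go to them
    elif hand == "left":
        move_adjacent(mover=other_avatar, target=my_avatar)   # bring them here
```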
Also, in the present embodiment, when the occurrence of the trigger event is recognized, the coordinates of the second avatar A2 are moved not to coordinates matching the coordinates of the first avatar A1 but to adjacent coordinates. This is to make it easier, when the coordinates of the second avatar A2 have moved, for the image recognized by one user to include the avatar corresponding to the other user, so that the one user can readily recognize that the other user has moved into his or her vicinity.
However, the present invention is not limited to such a configuration, and the coordinates to which one avatar is moved may be made to match the coordinates of the other avatar. With such a configuration, the degree of environment synchronization can be made even higher than when moving to adjacent coordinates.
Also, in the present embodiment, after the occurrence of the trigger event is recognized, the second avatar A2 continues to be located on the second floor F2. That is, the state in which the correspondence between the coordinates of the second user U2 and the coordinates of the second avatar A2 is deviated, based on the deviation caused by the movement at the time the occurrence of the trigger event was recognized, is maintained.
However, the present invention is not limited to such a configuration, and it may be configured such that the deviation caused by the movement at the time the occurrence of the trigger event was recognized is subsequently eliminated.
Therefore, for example, the coordinates of an avatar may be moved only while the trigger event continues (for example, in the above embodiment, only while the second avatar A2 is touching the first avatar A1), or only for a predetermined period after the occurrence of the trigger event is recognized, and after that period has elapsed, the coordinates of the avatar may be returned to the coordinates before the occurrence of the trigger event was recognized.
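A time-limited variant could keep the deviation only for a fixed window after the event, as in the following sketch; the class name and duration are illustrative assumptions.

```python
import time

class TemporaryDeviation:
    """Applies a coordinate deviation for `duration` seconds after the
    trigger event, then reverts to the pre-event correspondence."""
    def __init__(self, deviation, duration):
        self.deviation = deviation
        self.duration = duration
        self.started = None

    def on_trigger(self):
        self.started = time.monotonic()

    def current(self, zero):
        """Deviation to apply this cycle; `zero` is the null offset."""
        if self.started is None:
            return zero
        if time.monotonic() - self.started > self.duration:
            self.started = None  # period elapsed: deviation removed
            return zero
        return self.deviation
```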
[Second embodiment]
Hereinafter, a VR system S, which is a virtual space experience system according to the second embodiment, will be described with reference to FIGS. 2 and 11 to 15.
Note that the VR system S of the present embodiment has the same configuration as the VR system S of the first embodiment, except for the processing executed when synchronizing environments and thereafter.
Therefore, in the following description, only the processing executed when synchronizing environments and thereafter is described. Configurations identical or corresponding to those of the VR system S of the first embodiment are given the same reference signs, and detailed description thereof is omitted.
[Processing when synchronizing environments and thereafter]
Below, with reference to FIGS. 2 and 11 to 15, the processing executed when synchronizing environments and thereafter will be described.
Here, prior to describing the specific contents of the processing, an outline of the processing for synchronizing environments in the present embodiment will be given.
When the VR system S recognizes the occurrence of a predetermined trigger event (the first trigger event in the present invention), it is configured to perform processing that synchronizes the environment of one of the first user U1 and the second user U2 with the environment of the other.
In the present embodiment, for example, as shown in FIG. 12, the second avatar A2 touching the first ghost G1 corresponding to the first user U1 is set as that trigger event.
When the occurrence of that trigger event is recognized, as shown in FIG. 13, the environment recognized by the second user U2 becomes one that matches the environment recognized by the first user U1, regardless of the coordinates of the second user U2. Specifically, the environment recognized by the second user U2 changes from one based on the coordinates of the second avatar A2 (see FIG. 14) to one based on the coordinates of the first avatar A1 (see FIG. 15).
As a result, in the VR system S, the environment of the first user U1 and the environment of the second user U2 are synchronized, so that the first user U1 and the second user U2 can experience similar environments.
The following describes the processing performed when the first user U1 corresponding to the first avatar A1 has told the second user U2, via a message function or the like, that he or she wants to view, together with the second user U2, the rainbow VS3 that exists in the sky above the second floor F2 of the virtual space VS, and the second user U2 attempts to synchronize his or her environment with that of the first user U1 (that is, attempts to view the rainbow VS3 as well).
As shown in FIG. 14, in the state at the start of this processing, the second user U2 is assumed to be recognizing an image in which he or she looks up at the first avatar A1 on the second floor F2 of the virtual space VS.
In this processing, first, the trigger event recognition unit 33 of the server 3 determines whether it has recognized the occurrence of the trigger event (FIG. 11/STEP 400).
Specifically, as shown in FIG. 12, the trigger event recognition unit 33 first determines whether, in the virtual space VS, the second avatar A2 corresponding to the second user U2 has assumed a posture of touching the first ghost G1 corresponding to the first user U1. When the second avatar A2 assumes that touching posture, the trigger event recognition unit 33 recognizes that the trigger event has occurred.
When the occurrence of the trigger event has been recognized (YES in STEP 400), the avatar coordinate determination unit 34 of the server 3 fixes the coordinates of the second avatar A2 and the second ghost G2 (FIG. 11/STEP 401).
In the present embodiment, in the subsequently executed processing of STEP 402 to STEP 404 (that is, the processing for synchronizing environments), processing is executed that makes the image and audio recognized by the second user U2, who corresponds to the second avatar A2 and the second ghost G2, ones determined based on the coordinates of the first avatar A1.
 そのため、その処理の実行されている最中に第2アバターA2が移動できてしまうと、環境を同期させるための処理の終了後に、意図せず第2アバターA2の座標が移動している状態となる。ひいては、その処理の終了後に、その第2アバターA2の座標に基づいて画像及び音声を認識する第2ユーザU2の没入感を阻害してしまうおそれがある。 Therefore, if the second avatar A2 is able to move while the process is being executed, the coordinates of the second avatar A2 will unintentionally move after the process for synchronizing the environments. Become. As a result, after the process is finished, there is a possibility that the sense of immersion of the second user U2, who recognizes the image and sound based on the coordinates of the second avatar A2, is disturbed.
 また、その処理の実行されている最中に第2ゴーストG2が移動できてしまうと、環境を同期させるための処理の終了後に、意図せず第2ゴーストG2の座標が移動している状態となる。ひいては、その処理の終了後に、第1ユーザU1の認識している画像において、突然第2ゴーストG2の位置が変わってしまうことになり、第1ユーザU1の没入感を阻害してしまうおそれがある。 Also, if the second ghost G2 is able to move while the process is being executed, the coordinates of the second ghost G2 may unintentionally move after the process for synchronizing the environments is completed. Become. As a result, after the process ends, the position of the second ghost G2 suddenly changes in the image recognized by the first user U1, which may hinder the first user U1's sense of immersion. .
 そこで、VRシステムSでは、トリガーイベントの発生を認識した場合(すなわち、環境を同期させる処理の実行が開始される際には)、このSTEP401の処理を実行して、第2アバターA2及び第2ゴーストG2の座標を固定することによって、そのようにして没入感が阻害されてしまうことを抑制している。 Therefore, in the VR system S, when recognizing the occurrence of the trigger event (that is, when the execution of the processing for synchronizing the environment is started), the processing of STEP 401 is executed, and the second avatar A2 and the second avatar A2 By fixing the coordinates of the ghost G2, such impediment to the sense of immersion is suppressed.
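The coordinate fix of STEP 401 can be pictured, for example, as follows, assuming that avatar coordinates are otherwise refreshed from motion-capture data every frame. The lock flag and the method names are hypothetical, not terms from the specification.

    class AvatarCoordinateDeterminer:
        """Sketch of avatar coordinate determination unit 34."""

        def __init__(self):
            self.locked = False  # set while environment synchronization runs

        def lock(self):
            self.locked = True   # STEP 401: fix the coordinates of A2 and G2

        def unlock(self):
            self.locked = False  # STEP 406: release the fix

        def update(self, avatar, user):
            # Posture always tracks the user (as explained in the next
            # paragraph); coordinates stay frozen while the lock is held,
            # so they cannot drift during the synchronization processing.
            avatar.posture = user.posture
            if not self.locked:
                avatar.coords = user.coords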
However, as shown in FIGS. 12 and 13, even while this processing is being executed, the postures of the second avatar A2 and the second ghost G2 continue to change according to the posture of the second user U2.

This is because, if the postures were also fixed, a discrepancy could arise after the environment synchronization processing finishes between the posture of the second user U2 and the postures of the second avatar A2 and the second ghost G2, whereas a change in the posture of the second ghost G2 during the synchronization processing does not impair the first user U1's sense of immersion.

That said, the posture of the second avatar A2 may also be configured to match the posture of the first avatar A1. With this configuration, the image and sound recognized by the second user U2 match those recognized by the first user U1 even more closely, so the second user U2 can experience an environment that is even more fully synchronized with that of the first user U1.

If the second user U2 were to move while the coordinates of the second avatar A2 and the second ghost G2 are fixed in this way, an unintended discrepancy would arise between the coordinates of the second user U2 and the coordinates of the second avatar A2 and the second ghost G2. This in turn could induce unintended contact.

Therefore, as shown in FIG. 15, the VR system S displays, in the image recognized by the second user U2, an indication that the environment synchronization processing is being executed, together with a message warning the user not to move. The VR system S thereby suppresses the occurrence of such a discrepancy.
Returning to the description of the processing: after executing the processing of STEP 401, the avatar coordinate determination unit 34 of the server 3 determines the coordinates and postures of the first avatar A1 and the first ghost G1 in the virtual space VS based on the coordinates and posture of the first user U1 in the real space RS recognized by the user information recognition unit 32 (FIG. 11/STEP 402).

Next, the virtual space image determination unit 35 and the virtual space audio determination unit 36 of the server 3 determine the image and sound to be recognized by the first user U1 and the second user U2 based on the coordinates and posture of the first avatar A1 in the virtual space VS (FIG. 11/STEP 403).

Specifically, the virtual space image determination unit 35 and the virtual space audio determination unit 36 determine the image and sound to be recognized by the first user U1 based on the coordinates and posture of the first avatar A1, and determine the image and sound to be recognized by the second user U2 based on the coordinates of the first avatar A1 and the posture of the second avatar A2.
The image and sound thus determined for the first user U1 are, based on the coordinates and posture of the first avatar A1, those of viewing the rainbow VS3 in the sky from the second floor F2 of the virtual space VS.

The image and sound thus determined for the second user U2 are those that would be recognized from coordinates located behind the coordinates of the first avatar A1 as viewed from the first avatar A1. Specifically, as shown in FIG. 15, they are those of viewing the rainbow VS3 in the sky from behind the first avatar A1 on the second floor F2 of the virtual space VS.

In the present embodiment, consideration is given to the second user U2 enjoying the environment together with the first user U1. Accordingly, so that the second user U2 can recognize the first avatar A1 (and thus the first user U1), the reference coordinates for the image and sound recognized by the second user U2 in this processing are set to coordinates behind the coordinates of the first avatar A1 (that is, adjacent coordinates).

However, if the second user U2 wishes to experience an environment even closer to the one the first user U1 is experiencing, the reference coordinates may be made to coincide with the coordinates of the first avatar A1. Furthermore, if the reference posture is also made to coincide with that of the first avatar A1, the second user U2 can experience the same environment as the first user U1 is experiencing. A sketch of this reference selection is given below.
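As one illustrative way of choosing the reference coordinates in STEP 403, the following sketch covers both the default behind-the-avatar placement and the optional full-match variant. The behind_offset helper, the 0.8 offset distance, and the full_match flag are assumptions made for illustration; the specification requires only a coordinate that coincides with or adjoins that of the first avatar A1.

    def behind_offset(coords, facing, dist=0.8):
        # A point `dist` behind `coords`, opposite the facing direction (x, y).
        return (coords[0] - dist * facing[0],
                coords[1] - dist * facing[1],
                coords[2])

    def render_references(first_avatar, second_avatar, full_match=False):
        # First user U1: always the first avatar's own coordinates and posture.
        first_ref = (first_avatar.coords, first_avatar.posture)
        if full_match:
            # Variant: identical coordinates and posture, so the second user
            # experiences exactly the environment the first user experiences.
            second_ref = (first_avatar.coords, first_avatar.posture)
        else:
            # Default: a viewpoint just behind the first avatar, combined with
            # the second avatar's own posture, keeping A1 visible to U2.
            second_ref = (behind_offset(first_avatar.coords, first_avatar.facing),
                          second_avatar.posture)
        return first_ref, second_ref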
Next, the HMD 4 worn by each user U displays the determined image on the monitor 41 mounted on the HMD 4 and generates the determined sound from the speaker 42 mounted on the HMD 4 (FIG. 11/STEP 404).

Next, the trigger event recognition unit 33 of the server 3 determines whether it has recognized cancellation of the trigger event (FIG. 11/STEP 405).

Specifically, the trigger event recognition unit 33 first determines whether, in the virtual space VS, the second avatar A2 corresponding to the second user U2 has released the posture of touching the first avatar A1 corresponding to the first user U1 (see FIG. 12). When the second avatar A2 releases the posture of touching the first avatar A1, the trigger event recognition unit 33 recognizes that the trigger event has been cancelled.

Alternatively, cancellation of the trigger event may be configured to be recognized automatically, for example, when a predetermined period has elapsed since the occurrence of the trigger event, or when another trigger event occurs (for example, when the second avatar A2 stops touching the first ghost G1). Both variants appear in the sketch below.
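A sketch of the cancellation check of STEP 405, covering the touch-release condition and, as optional variants, the timeout just mentioned. It reuses TOUCH_RADIUS and the attribute names from the earlier sketches; the 30-second timeout is an arbitrary assumption.

    import math
    import time

    def trigger_cancelled(first_avatar, second_avatar, started_at,
                          timeout_s=30.0, use_timeout=False):
        # Primary condition: the touching posture has been released.
        if math.dist(second_avatar.hand_position,
                     first_avatar.body_position) > TOUCH_RADIUS:
            return True
        # Optional variant: cancel automatically after a predetermined period.
        if use_timeout and time.monotonic() - started_at > timeout_s:
            return True
        return False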
If cancellation of the trigger event is not recognized (NO in STEP 405), the processing returns to STEP 402, and the processing from STEP 402 onward is executed again.

On the other hand, if cancellation of the trigger event is recognized (YES in STEP 405), the avatar coordinate determination unit 34 of the server 3 releases the fixing of the coordinates of the second avatar A2 and the second ghost G2 (FIG. 11/STEP 406).

If the occurrence of the trigger event is not recognized (NO in STEP 400), or after the fixing of the coordinates of the second avatar A2 and the second ghost G2 has been released, the avatar coordinate determination unit 34 of the server 3 determines the coordinates and postures of the first avatar A1 and the first ghost G1 in the virtual space VS based on the coordinates and posture of the first user U1 in the real space RS recognized by the user information recognition unit 32 (FIG. 11/STEP 407).

Next, the virtual space image determination unit 35 and the virtual space audio determination unit 36 of the server 3 determine the image and sound to be recognized by the first user U1 based on the coordinates and posture of the first avatar A1 in the virtual space VS (FIG. 11/STEP 408).

Next, the avatar coordinate determination unit 34 of the server 3 determines the coordinates and postures of the second avatar A2 and the second ghost G2 in the virtual space VS based on the coordinates and posture of the second user U2 in the real space RS recognized by the user information recognition unit 32 (FIG. 11/STEP 409).

Next, the virtual space image determination unit 35 and the virtual space audio determination unit 36 of the server 3 determine the image and sound to be recognized by the second user U2 based on the coordinates and posture of the second avatar A2 in the virtual space VS (FIG. 11/STEP 410).

Next, the HMD 4 worn by each user U displays the determined image on the monitor 41 and generates the determined sound from the speaker 42 (FIG. 11/STEP 411).

Next, the server 3 determines whether it has recognized a signal instructing the end of processing (FIG. 11/STEP 412).

If the signal instructing the end of processing is not recognized (NO in STEP 412), the processing returns to STEP 400, and the processing from STEP 400 onward is executed again.

On the other hand, if the signal instructing the end of processing is recognized (YES in STEP 412), the VR system S ends the current processing. The overall flow is sketched below.
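Putting STEPs 400 to 412 together, the server-side flow can be sketched as a single loop. The component and method names follow the earlier sketches; the presentation methods (present, present_first, present_second) are assumptions standing in for STEPs 404, 408, 410, and 411.

    def run(server, first, second):
        while not server.end_signal_received():                              # STEP 412
            if server.trigger.trigger_occurred(first.avatar, second.avatar): # STEP 400
                server.coords.lock()                                         # STEP 401
                started = time.monotonic()
                while not trigger_cancelled(first.avatar, second.avatar,
                                            started):                        # STEP 405
                    server.coords.update(first.avatar, first.user)           # STEP 402
                    refs = render_references(first.avatar, second.avatar)    # STEP 403
                    server.hmds.present(refs)                                # STEP 404
                server.coords.unlock()                                       # STEP 406
            # Unsynchronized path: each user sees their own avatar's view.
            server.coords.update(first.avatar, first.user)                   # STEP 407
            server.hmds.present_first(first)                                 # STEP 408
            server.coords.update(second.avatar, second.user)                 # STEP 409
            server.hmds.present_second(second)                               # STEP 410/411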
As described above, in the VR system S, when the occurrence of the trigger event is recognized, the virtual space image determination unit 35 and the virtual space audio determination unit 36 make the image and sound of the virtual space VS recognized by the second user U2 those that would be obtained if the coordinates of the second avatar A2 had moved to coordinates adjacent to the coordinates of the first avatar A1.

That is, when the occurrence of the trigger event is recognized, the reference coordinates for determining the image and sound of the virtual space VS to be recognized by the second user U2 are set to coordinates adjacent to those of the first avatar A1 corresponding to the first user U1, regardless of the position of the second avatar A2 corresponding to the second user U2.

Thus, when the occurrence of the trigger event is recognized, the second user U2, together with the first user U1, recognizes the image and sound of the virtual space VS determined based on coordinates adjacent to those of the first avatar A1 corresponding to the first user U1.

Therefore, according to the VR system S, after the occurrence of the trigger event is recognized, the second user U2 recognizes the image and sound of the virtual space VS determined based on coordinates adjacent to those of the first avatar A1 corresponding to the first user U1, so the first user and the second user, who until then had been experiencing different environments in the virtual space VS, can experience substantially the same environment.
In the present embodiment, the case in which the second user U2 synchronizes his or her own environment with that of the first user U1 has been described. However, the virtual space experience system of the present invention is not limited to such a configuration; it suffices that, when the occurrence of a trigger event is recognized, the environment of one of the first user and the second user is synchronized with the environment of the other.

Therefore, for example, the present embodiment may be configured so that, when the first avatar A1 touches the second ghost G2 corresponding to the second user U2, the image and sound recognized by the first user U1 are determined with reference to coordinates that coincide with or are adjacent to the coordinates of the second avatar A2.

Also, for example, in the present embodiment the second user U2 synchronizes his or her own environment with that of the first user U1, who is another user; however, the system may instead be configured so that another user's environment is synchronized with one's own.

Specifically, for example, the present embodiment may be configured so that, when the first avatar A1 touches the second ghost G2 corresponding to the second user U2, the image and sound recognized by the second user U2 are determined with reference to coordinates that coincide with or are adjacent to the coordinates of the first avatar A1. Conversely, it may be configured so that, when the second avatar A2 touches the first ghost G1 corresponding to the first user U1, the image and sound recognized by the first user U1 are determined with reference to coordinates that coincide with or are adjacent to the coordinates of the second avatar A2.
When a configuration that synchronizes one's own environment with another user's environment and a configuration that synchronizes another user's environment with one's own are implemented at the same time, the action serving as the trigger event may simply be made different according to the processing to be performed.

Specifically, for example, the system may be configured so that, when one's own avatar touches another user's avatar with its right hand, processing is executed to synchronize one's own environment with the other user's environment, and when one's own avatar touches another user's avatar with its left hand, processing is executed to synchronize the other user's environment with one's own.

Likewise, when a configuration that synchronizes environments by moving coordinates, as in the first embodiment, and a configuration that synchronizes environments by changing the reference coordinates, as in the present embodiment, are implemented at the same time, the action serving as the trigger event may be made different according to the processing to be performed.

Specifically, for example, the system may be configured so that, when one's own avatar touches the hand of another user's avatar, the processing that moves coordinates to synchronize the environments is performed, and when one's own avatar touches the shoulder of another user's avatar, the processing that changes the reference coordinates to synchronize the environments is performed. A sketch combining both dispatch rules is given below.
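Both pairs of variants could coexist by keying the trigger event on which hand touches which body part, for example as in the following sketch. The enum values and this particular encoding are hypothetical; the specification does not prescribe any such table.

    from enum import Enum, auto

    class SyncDirection(Enum):
        PULL = auto()  # synchronize my environment to the other user's
        PUSH = auto()  # synchronize the other user's environment to mine

    class SyncMethod(Enum):
        MOVE_COORDS = auto()       # first embodiment: move the avatar itself
        CHANGE_REFERENCE = auto()  # this embodiment: move only the viewpoint

    # (touching hand, touched body part) -> behaviour of the trigger event
    TRIGGER_TABLE = {
        ("right", "hand"):     (SyncDirection.PULL, SyncMethod.MOVE_COORDS),
        ("left",  "hand"):     (SyncDirection.PUSH, SyncMethod.MOVE_COORDS),
        ("right", "shoulder"): (SyncDirection.PULL, SyncMethod.CHANGE_REFERENCE),
        ("left",  "shoulder"): (SyncDirection.PUSH, SyncMethod.CHANGE_REFERENCE),
    }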
[Other embodiments]

Although the illustrated embodiments have been described above, the present invention is not limited to these forms.
For example, in the above embodiments, as shown in FIG. 4 and elsewhere, the VR system S, which is a virtual space experience system, is configured so that the first user U1 and the second user U2, who are present together in one room in the real space RS, recognize that they exist together in one virtual space VS corresponding to that room, via the first avatar A1 corresponding to the first user U1 and the second avatar A2 corresponding to the second user U2.

However, the virtual space experience system of the present invention is not limited to such a configuration; it suffices that avatars corresponding to a plurality of users can exist in a virtual space at the same time.

Therefore, for example, the first user and the second user may be present in different areas of the real space (for example, in their respective rooms).
In the above embodiments, the processing for synchronizing environments (the processing described with reference to the flowchart of FIG. 8 or FIG. 11) is executed in a situation where the processing for expanding the virtual space VS so that it becomes larger than the corresponding real space RS (the processing described with reference to the flowchart of FIG. 5) has been executed.

This is because, when users are made to recognize a virtual space expanded beyond the real space in this way, each user has more opportunities to experience a different environment (and, consequently, more opportunities to want to share an environment).

However, the virtual space experience system of the present invention is not limited to a configuration in which the environment synchronization processing is executed only when such virtual space expansion processing has been executed; it suffices that each user can recognize that users other than himself or herself are also experiencing some virtual space. Therefore, for example, the avatars corresponding to the respective users need not exist in the same virtual space.
Specifically, a plurality of first avatars corresponding to a plurality of first users who are shoppers may move freely in a store existing in a virtual space, while a second avatar corresponding to a second user who is a store clerk is stationed at a concierge counter existing in a virtual space. In this case, the virtual space in which the concierge counter exists may be the virtual space in which the store exists, or a virtual space separate from it.

With this configuration, when one avatar itself moves to coordinates that coincide with or are adjacent to the coordinates of the other avatar, as in the first embodiment, the second avatar moves, upon recognition of the occurrence of the first trigger event, from the concierge counter to the coordinates of the first avatar corresponding to the first user who generated that first trigger event among the plurality of first users.

Alternatively, with this configuration, when only the viewpoint of one avatar moves to coordinates that coincide with or are adjacent to the coordinates of the other avatar, as in the second embodiment, the image displayed to the second user corresponding to the second avatar changes, upon recognition of the occurrence of the first trigger event, from the image of the concierge counter to an image similar to the one displayed to the first user who generated that first trigger event among the plurality of first users.

In these configurations, the first trigger event corresponds to, for example, an action in which a first avatar presses a call button, which is an object generated in the virtual space, or an action in which the second avatar presses a movement button. A sketch of the call-button case follows.
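For the concierge example, the call-button trigger under the first embodiment's behaviour (the clerk's second avatar itself moves) might be handled as in the following sketch. The CallButton fields and the adjacent_coords helper are hypothetical, introduced only for illustration.

    def adjacent_coords(coords, offset=(0.8, 0.0, 0.0)):
        # A coordinate adjacent to `coords`; the offset is an arbitrary choice.
        return tuple(c + o for c, o in zip(coords, offset))

    def on_call_button_pressed(button, clerk_avatar, shopper_avatars):
        # Identify the shopper avatar that pressed the call button (the first
        # trigger event), then move the clerk's avatar from the concierge
        # counter to a coordinate adjacent to that shopper's avatar.
        caller = next(a for a in shopper_avatars if a.id == button.pressed_by)
        clerk_avatar.coords = adjacent_coords(caller.coords)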
In the above embodiments, the action in which the second avatar A2 corresponding to the second user U2 touches the first ghost G1, which corresponds to the first user U1 and is located at coordinates deviated from their correspondence with the coordinates of the first user U1, is adopted as the first trigger event. Likewise, the action in which the first avatar A1 corresponding to the first user U1 touches the second ghost G2, which corresponds to the second user U2 and is located at coordinates deviated from their correspondence with the coordinates of the second user U2, can also be adopted as a first trigger event.

This is so that, through this action, one user can intuitively understand that a first trigger event for sharing an environment with the other user will occur, and, in turn, so that generating the first trigger event does not impair the sense of immersion in the virtual space of the user trying to generate it.

However, the first trigger event in the present invention is not limited to such a configuration and may be set as appropriate by a system designer or the like.

Therefore, for example, the first trigger event may be generated by touching an object generated in the virtual space (for example, a movement switch or a call switch). Alternatively, the first trigger event may be generated when an avatar performs a specific action (for example, a beckoning action).

In the above embodiments, the trigger event is the action in which the avatar corresponding to one user touches the ghost corresponding to the other user. However, the first trigger event in the present invention is not limited to such a configuration; it may be any predetermined action that the avatar corresponding to one user performs on the ghost corresponding to the other user.

Here, the "predetermined action" may be any action that the avatar performs with the ghost as its reference. Examples include, besides an action in which the avatar contacts the ghost as in the present embodiment, an action in which the avatar moves within a predetermined range around the ghost, and an action in which the avatar selects the ghost as the target when operating an object existing in the virtual space (for example, photographing the ghost with a camera-type object). A sketch covering these variants is given below.
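The three illustrative readings of the "predetermined action" can be captured by a simple predicate, for example as follows. The radii and the held_object.target attribute are assumptions; actual thresholds would be chosen by the system designer.

    import math

    def touches(avatar, ghost, radius=0.3):
        # The avatar contacts the ghost.
        return math.dist(avatar.coords, ghost.coords) <= radius

    def within_range(avatar, ghost, radius=2.0):
        # The avatar moves within a predetermined range around the ghost.
        return math.dist(avatar.coords, ghost.coords) <= radius

    def targeted_by_object(held_object, ghost):
        # The avatar selects the ghost via an object, e.g. photographing it
        # with a camera-type object.
        return (held_object is not None
                and getattr(held_object, "target", None) is ghost)

    def is_predetermined_action(avatar, ghost, held_object=None):
        return (touches(avatar, ghost)
                or within_range(avatar, ghost)
                or targeted_by_object(held_object, ghost))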
In the present embodiment, the ghost generated during the processing for expanding the virtual space is adopted as the key for generating the trigger event. However, the ghost in the present invention is not limited to such a configuration; it may be any avatar that corresponds to a user and does not serve as the reference for the image and sound recognized by that user.

Therefore, for example, regardless of whether processing such as expanding the virtual space is executed, an avatar that corresponds to a user and serves as the reference for the image and sound recognized by that user may be generated from the beginning together with an avatar independent of it, and that independent avatar (generated, for example, at coordinates different from those of the reference avatar) may be adopted as the ghost.

Specifically, for example, in a virtual space in which an avatar corresponding to a dance instructor, an avatar corresponding to a student, and a miniature-like avatar that corresponds to the student and is placed near the hands of the instructor's avatar are generated, and the instructor uses that miniature-like avatar to check the student's condition, the miniature-like avatar may be adopted as the ghost.

With such a configuration, the instructor can observe the student's condition from his or her own objective viewpoint and, as necessary, from the student's subjective viewpoint.
1: sign; 2: camera; 3: server; 4: HMD; 31: display image generation unit; 31a: virtual space generation unit; 31b: avatar generation unit; 31c: moving body generation unit; 32: user information recognition unit; 32a: user posture recognition unit; 32b: user coordinate recognition unit; 33: trigger event recognition unit; 34: avatar coordinate determination unit; 35: virtual space image determination unit; 36: virtual space audio determination unit; 41: monitor (virtual space image display); 42: speaker (virtual space audio generator); A: avatar; A1: first avatar; A2: second avatar; F1: first floor; F2: second floor; G1: first ghost; G1a: first information board; G2: second ghost; G2a: second information board; RS: real space; RS1: whiteboard; S: VR system; U: user; U1: first user; U2: second user; VS: virtual space; VS1: elevator (moving body); VS2: switch; VS3: rainbow.

Claims (5)

  1.  A virtual space experience system comprising:
     a virtual space generation unit that generates a virtual space corresponding to a real space in which a first user and a second user exist;
     an avatar generation unit that generates, in the virtual space, a first avatar corresponding to the first user and a second avatar corresponding to the second user;
     a user coordinate recognition unit that recognizes coordinates of the first user and coordinates of the second user in the real space;
     an avatar coordinate determination unit that determines coordinates of the first avatar in the virtual space based on the coordinates of the first user, and determines coordinates of the second avatar in the virtual space based on the coordinates of the second user;
     a virtual space image determination unit that determines an image of the virtual space to be recognized by the first user and the second user based on the coordinates of the first avatar and the coordinates of the second avatar;
     a trigger event recognition unit that recognizes occurrence of a first trigger event; and
     a virtual space image display that causes the first user and the second user to recognize the image of the virtual space,
     wherein, when the occurrence of the first trigger event is recognized, the avatar coordinate determination unit moves the coordinates of one of the first avatar and the second avatar to coordinates that coincide with, or are adjacent to, the coordinates of the other of the first avatar and the second avatar.
  2.  A virtual space experience system comprising:
     a virtual space generation unit that generates a virtual space corresponding to a real space in which a first user and a second user exist;
     an avatar generation unit that generates, in the virtual space, a first avatar corresponding to the first user and a second avatar corresponding to the second user;
     a user coordinate recognition unit that recognizes coordinates of the first user and coordinates of the second user in the real space;
     an avatar coordinate determination unit that determines coordinates of the first avatar in the virtual space based on the coordinates of the first user, and determines coordinates of the second avatar in the virtual space based on the coordinates of the second user;
     a virtual space image determination unit that determines an image of the virtual space to be recognized by the first user and the second user based on the coordinates of the first avatar and the coordinates of the second avatar;
     a trigger event recognition unit that recognizes occurrence of a first trigger event; and
     a virtual space image display that causes the first user and the second user to recognize the image of the virtual space,
     wherein, when the occurrence of the first trigger event is recognized, the virtual space image determination unit makes the image of the virtual space to be recognized by one of the first user and the second user the image that would be obtained if the coordinates of one of the first avatar and the second avatar had moved to coordinates that coincide with, or are adjacent to, the coordinates of the other of the first avatar and the second avatar.
  3.  The virtual space experience system according to claim 1 or 2, wherein:
     the avatar generation unit generates, in the virtual space, a ghost, which is an avatar that corresponds to one of the first user and the second user and is independent of the first avatar and the second avatar;
     the avatar coordinate determination unit determines coordinates of the ghost based on coordinates obtained by applying a predetermined deviation to the coordinates of the first avatar or the second avatar corresponding to the one of the first user and the second user;
     the virtual space image determination unit includes, in the image of the virtual space to be recognized by the other of the first user and the second user, an image of the ghost to which information indicating correspondence to the one of the first user and the second user has been added; and
     the first trigger event is a predetermined action that the first avatar or the second avatar corresponding to the other of the first user and the second user performs on the ghost.
  4.  The virtual space experience system according to claim 3, wherein:
     the trigger event recognition unit recognizes occurrence of a second trigger event;
     the avatar generation unit generates, in the virtual space, a first ghost, which is the ghost corresponding to the first user, when the occurrence of the second trigger event is recognized;
     after the occurrence of the second trigger event is recognized, the avatar coordinate determination unit determines the coordinates of the first avatar based on coordinates obtained by applying the predetermined deviation to the coordinates of the first user, and determines coordinates of the first ghost in the virtual space based on the coordinates of the first user;
     after the occurrence of the second trigger event is recognized, the virtual space image determination unit includes, in the image of the virtual space to be recognized by the second user, an image of the first ghost to which information indicating correspondence to the first user has been added; and
     the first trigger event is a predetermined action that the second avatar performs on the first ghost.
  5.  The virtual space experience system according to claim 3 or 4, wherein:
     the trigger event recognition unit recognizes occurrence of a second trigger event;
     the avatar generation unit generates, in the virtual space, a second ghost, which is the ghost corresponding to the second user, when the occurrence of the second trigger event is recognized;
     after the occurrence of the second trigger event is recognized, the avatar coordinate determination unit determines the coordinates of the first avatar based on coordinates obtained by applying the predetermined deviation to the coordinates of the first user, and determines coordinates of the second ghost in the virtual space based on coordinates obtained by applying the predetermined deviation to the coordinates of the second user;
     after the occurrence of the second trigger event is recognized, the virtual space image determination unit includes, in the image of the virtual space to be recognized by the first user, an image of the second ghost to which information indicating correspondence to the second user has been added; and
     the first trigger event is a predetermined action that the first avatar performs on the second ghost.
PCT/JP2021/020889 2021-06-01 2021-06-01 Virtual space experience system WO2022254585A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2021/020889 WO2022254585A1 (en) 2021-06-01 2021-06-01 Virtual space experience system
JP2021575439A JP7055527B1 (en) 2021-06-01 2021-06-01 Virtual space experience system


Publications (1)

Publication Number Publication Date
WO2022254585A1 (en)

Family

ID=81289286

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/020889 WO2022254585A1 (en) 2021-06-01 2021-06-01 Virtual space experience system

Country Status (2)

Country Link
JP (1) JP7055527B1 (en)
WO (1) WO2022254585A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019192178A (en) * 2018-04-27 2019-10-31 株式会社コロプラ Program, information processing device, and method
WO2021095175A1 (en) * 2019-11-13 2021-05-20 株式会社Abal Virtual space experience system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Second Life Walking Handbook", vol. 51, 31 August 2007, SOTEC CO., LTD., JP, ISBN: 978-4-88166-592-3, article LYNE, JINN: "Passage; Handbook_How to walk in Second Life", pages: 44 - 45,51, XP009541803 *

Also Published As

Publication number Publication date
JP7055527B1 (en) 2022-04-18
JPWO2022254585A1 (en) 2022-12-08

Similar Documents

Publication Publication Date Title
US11669152B2 (en) Massive simultaneous remote digital presence world
JP2010253277A (en) Method and system for controlling movements of objects in video game
JP2010257461A (en) Method and system for creating shared game space for networked game
CN111831104B (en) Head-mounted display system, related method and related computer readable recording medium
JP6936465B1 (en) Virtual space experience system
JP2018013901A (en) Virtual space simulation system
WO2022254585A1 (en) Virtual space experience system
JP6538012B2 (en) Virtual Space Experience System
WO2021240601A1 (en) Virtual space body sensation system
JP7138392B1 (en) Virtual space sensory system
JP6933850B1 (en) Virtual space experience system
JP6672508B2 (en) Virtual space experience system
JPWO2022254585A5 (en)

Legal Events

ENP   Entry into the national phase (ref document number: 2021575439; country of ref document: JP; kind code of ref document: A)
121   EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 21944089; country of ref document: EP; kind code of ref document: A1)
NENP  Non-entry into the national phase (ref country code: DE)
122   EP: PCT application non-entry in European phase (ref document number: 21944089; country of ref document: EP; kind code of ref document: A1)