WO2021240601A1 - Virtual space body sensation system - Google Patents

Virtual space body sensation system

Info

Publication number
WO2021240601A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
state
virtual space
avatar
item
Prior art date
Application number
PCT/JP2020/020547
Other languages
French (fr)
Japanese (ja)
Inventor
良哉 尾小山
Original Assignee
株式会社Abal
Priority date
Filing date
Publication date
Application filed by 株式会社Abal filed Critical 株式会社Abal
Priority to JP2021520432A priority Critical patent/JPWO2021240601A1/ja
Priority to PCT/JP2020/020547 priority patent/WO2021240601A1/en
Publication of WO2021240601A1 publication Critical patent/WO2021240601A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Definitions

  • The present invention relates to a virtual space experience system that allows a user to recognize that he or she exists in a virtual space.
  • Conventionally, there is a virtual space experience system in which a virtual space is generated by a server or the like, and an image of the virtual space and an image of an avatar corresponding to the user in that virtual space are presented to the user via a head-mounted display (hereinafter sometimes referred to as "HMD"), so that the user recognizes himself or herself as existing in the virtual space and experiences virtual reality.
  • In this kind of virtual space experience system, a motion capture device or the like recognizes the user's state in the real space (for example, body movements, coordinate movements, and posture changes), and the state of the avatar is changed according to the recognized state (see, for example, Patent Document 1).
  • In the virtual space experience system of Patent Document 1, the state of a game-pad-shaped object serving as the operation medium in the virtual space is changed according to the state of the user or of a controller in the real space, and in response to that change, the character to be operated in the virtual space is made to perform a predetermined action.
  • the present invention has been made in view of the above points, and an object of the present invention is to provide a virtual space experience system capable of manipulating objects in a virtual space without reducing the immersive feeling.
  • The virtual space experience system of the present invention comprises: a virtual space generation unit that generates a virtual space corresponding to the real space in which the user exists and including an avatar corresponding to the user and a predetermined object;
  • a user state recognition unit that recognizes the state of the user;
  • an avatar control unit that controls the state of the avatar according to the recognized state of the user;
  • an object control unit that controls the state of the predetermined object;
  • an image determination unit that determines, based on the states of the avatar and the predetermined object, an image of the virtual space to be recognized by the user; and
  • an image display that causes the user to recognize the determined image of the virtual space.
  • The predetermined object includes a first object, and a second object that is located at coordinates different from those of the first object in the virtual space, that has a shape corresponding to at least a part of the first object, and that can be operated by the user through the movement of the avatar.
  • The object control unit is characterized in that, when the second object is operated by the user, it changes the state of the first object according to the change in the state of the second object caused by the operation.
  • the "state” refers to something that can change according to the intention or operation of the user. It refers to the operating state of the body or mechanism, the position of coordinates, posture, direction, shape, color, size, etc. Therefore, the “change” of the state refers to the start, progress and stop of the operation, the movement of coordinates, the posture, the direction, the shape, the color, the change in size, and the like.
  • the "corresponding shape” includes not only the same shape, but also a similar shape (that is, a shape having the same shape but a different size), a deformed shape, the same shape as a partially cut out shape, and the like. include.
  • In this way, in the virtual space experience system of the present invention, when the second object is operated by the user through the movement of the avatar, the state of the first object is changed according to the change in the state of the second object caused by that operation. That is, the first object is the operation target, and the second object is its operation medium.
  • Because the second object has a shape corresponding to at least a part of the first object, the user can perform the same operations on the second object, the operation medium, as would be performed on the first object, the actual operation target. This suppresses a reduction in the user's sense of immersion in the virtual space.
  • Preferably, when the second object is operated by the user, the object control unit changes the other states of the first object according to the change in the state of the second object caused by the operation, while keeping at least one of the coordinates, posture, and direction of the first object fixed.
  • With at least one of the coordinates, posture, and direction of the first object fixed and its other states changed in correspondence with the second object, the user can more easily observe the first object while freely moving the second object. As a result, the user can easily grasp changes in the state of the first object.
  • Preferably, the virtual space experience system of the present invention includes an item state recognition unit that recognizes the state of an item existing in the real space, and the virtual space generation unit generates the second object in correspondence with the recognized item.
  • If no such item exists, the second object can be generated freely in correspondence with the first object to be operated; on the other hand, no tactile sensation can be given to the user, which may reduce the user's sense of immersion in the virtual space.
  • By generating the second object, the operation medium (that is, the object touched by the user U), in correspondence with an item existing in the real space, a tactile sensation can be given to the user, which enhances the sense of immersion.
  • Preferably, the second object has the same shape as or a shape similar to the first object and is smaller than the first object.
  • With this configuration, the user can easily make the avatar act on the second object even when the first object is large.
  • FIG. 2 is a block diagram showing the configuration of the processing units of the VR system of FIG. 1, and FIG. 3 is a schematic diagram of the virtual space generated in the VR system of FIG. 1.
  • Hereinafter, the VR system S, which is a virtual space experience system according to the embodiment, will be described with reference to the drawings.
  • The VR system S is a system that allows the user to experience virtual reality (so-called VR) by making the user recognize that he or she exists in the virtual space.
  • As shown in FIG. 1, the VR system S includes a plurality of markers 1 attached to the user U and to the item I existing in the real space RS, a camera 2 that photographs the user U and the item I (strictly speaking, the markers 1 attached to them), a server 3 that determines the image and sound of the virtual space VS (see FIG. 3 and the like) to be recognized by the user U, and a head-mounted display (hereinafter "HMD 4") that causes the user U to recognize the determined image and sound.
  • the camera 2, the server 3, and the HMD 4 can wirelessly transmit and receive information to and from each other.
  • However, any of them may instead be configured to exchange information with each other by wire.
  • Of the plurality of markers 1, those attached to the user U are attached to the head, both hands, and both feet of the user U via the HMD 4, the gloves, and the shoes worn by the user U.
  • Of the plurality of markers 1, those attached to the item I are attached to positions that become feature points in the images of the item I captured by the camera 2.
  • In the present embodiment, since a plurality of rectangular-parallelepiped building blocks placed on a plate are adopted as the item I, the markers are attached to their edges, near their corners, and so on.
  • The markers 1 are used, as described later, to recognize the posture, coordinates, and direction of the user U or the item I in the real space RS. The mounting positions of the markers 1 may therefore be changed as appropriate according to the other devices constituting the VR system S.
  • The camera 2 is installed so that it can photograph, from multiple directions, the range in the real space RS within which the user U and the item I can move (that is, the range in which posture changes, coordinate movements, direction changes, and the like can be performed).
  • The server 3 recognizes the markers 1 in the images captured by the camera 2 and, based on the positions of the recognized markers 1 in the real space RS, recognizes the posture, coordinates, and direction of the user U or the item I. The server 3 then determines the image and sound to be recognized by the user U based on that posture, those coordinates, and that direction.
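The publication does not spell out how the server 3 turns marker positions into a posture, coordinates, and direction. As a purely illustrative sketch under that caveat, the snippet below derives the coordinates (as the centroid of the markers) and the direction (as a yaw angle from two named markers) of a tracked body from already-triangulated 3D marker positions; the marker names and the two-marker heuristic are assumptions, not details of the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float        # coordinates: centroid of the markers in the real space RS
    y: float
    z: float
    yaw: float      # facing direction about the vertical axis, in radians

def estimate_pose(markers: dict[str, tuple[float, float, float]],
                  front: str, back: str) -> Pose:
    """Estimate coordinates and direction from triangulated marker positions.

    `markers` maps a marker name to its (x, y, z) position; `front` and `back`
    name two markers whose horizontal offset defines the facing direction.
    """
    n = len(markers)
    cx = sum(p[0] for p in markers.values()) / n
    cy = sum(p[1] for p in markers.values()) / n
    cz = sum(p[2] for p in markers.values()) / n
    fx, fy, _ = markers[front]
    bx, by, _ = markers[back]
    yaw = math.atan2(fy - by, fx - bx)
    return Pose(cx, cy, cz, yaw)

# Example: four markers near the corners of the item I (building blocks on a plate).
item_markers = {
    "corner_front_left": (0.9, 1.1, 0.8), "corner_front_right": (1.1, 1.1, 0.8),
    "corner_back_left": (0.9, 0.9, 0.8), "corner_back_right": (1.1, 0.9, 0.8),
}
print(estimate_pose(item_markers, front="corner_front_left", back="corner_back_left"))
```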
  • The HMD 4 is worn on the head of the user U. As shown in FIG. 2, the HMD 4 has a monitor 40 (image display) for causing the user U to recognize the image of the virtual space VS determined by the server 3, and a speaker 41 (sound generator) for causing the user U to recognize the sound of the virtual space VS determined by the server 3.
  • When the user U is made to experience virtual reality using this VR system S, the user U is made to recognize only the image and sound of the virtual space VS, and to recognize that he or she exists in the virtual space as an avatar. That is, the VR system S is configured as a so-called immersive system.
  • The VR system S includes a so-called motion capture device, composed of the markers 1, the camera 2, and the server 3, as the system for recognizing the states of the user U and the item I in the real space RS.
  • the "state” refers to something that can change according to the intention or operation of the user U.
  • it refers to the operating state of the body or mechanism, the position of coordinates, the posture, the direction, the shape, the color, the size, and the like. Therefore, the "change” of the state refers to the start, progress and stop of the operation, the movement of coordinates, the posture, the direction, the shape, the color, the change in size, and the like.
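This notion of a "state" made up of posture, coordinates, and direction recurs throughout the description (for the user U, the item I, the avatar 5, and the objects 6 and 7). A minimal way to represent it, assuming three-dimensional coordinates and a single yaw angle for the direction, might look like the following sketch; the field names and types are illustrative only and are not taken from the publication.

```python
from dataclasses import dataclass, field

@dataclass
class State:
    coordinates: tuple[float, float, float] = (0.0, 0.0, 0.0)  # position in the space
    posture: dict[str, float] = field(default_factory=dict)    # e.g. per-part or per-joint angles
    direction: float = 0.0                                      # yaw angle in radians

    def changed_from(self, other: "State", eps: float = 1e-6) -> bool:
        """Return True if any component of this state differs from `other`."""
        return (self.coordinates != other.coordinates
                or self.posture != other.posture
                or abs(self.direction - other.direction) > eps)

# Example: a pure coordinate movement counts as a change of state.
print(State(coordinates=(1.0, 0.0, 0.0)).changed_from(State()))
```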
  • However, the virtual space experience system of the present invention is not limited to the configuration described above for recognizing these states.
  • For example, when a motion capture device is used, a device having a different number of markers and cameras from the above configuration (for example, one of each) may be used.
  • A device other than a motion capture device may also be used to recognize the user's state in the real space.
  • Specifically, for example, a sensor such as a GPS sensor may be mounted on the HMD, and the user's state may be recognized based on the output of that sensor. Such a sensor may also be used in combination with a motion capture device as described above.
  • The server 3 is composed of one or more electronic circuit units including a CPU, RAM, ROM, an interface circuit, and the like. As shown in FIG. 2, the server 3 includes, as functions realized by its hardware configuration or by programs, a virtual space generation unit 30, a user state recognition unit 31, an avatar control unit 32, an item state recognition unit 33, an object control unit 34, and an output information determination unit 35.
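Since FIG. 2 itself is not reproduced here, the mapping below summarizes, in legend form, the functional units of the server 3 and the sub-units that the following paragraphs describe. Only the unit names and reference numerals come from the publication; the identifier style is an editorial choice.

```python
# Legend-style summary of the functional units of the server 3 (FIG. 2).
SERVER_3_UNITS = {
    "virtual_space_generation_unit (30)": [],
    "user_state_recognition_unit (31)": [
        "user_posture_recognition_unit (31a)",
        "user_coordinate_recognition_unit (31b)",
        "user_direction_recognition_unit (31c)",
    ],
    "avatar_control_unit (32)": [],
    "item_state_recognition_unit (33)": [
        "item_posture_recognition_unit (33a)",
        "item_coordinate_recognition_unit (33b)",
        "item_direction_recognition_unit (33c)",
    ],
    "object_control_unit (34)": [
        "first_object_control_unit (34a)",
        "second_object_control_unit (34b)",
    ],
    "output_information_determination_unit (35)": [
        "image_determination_unit (35a)",
        "sound_determination_unit (35b)",
    ],
}

for unit, sub_units in SERVER_3_UNITS.items():
    print(unit, "->", sub_units or "(no sub-units)")
```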
  • As shown in FIG. 3, the virtual space generation unit 30 generates an image serving as the background of the virtual space VS corresponding to the real space RS in which the user U exists, as well as images of the avatar 5 and of the predetermined objects existing in the virtual space VS. The virtual space generation unit 30 also generates sounds related to those images.
  • the movement range of the avatar 5 in the virtual space VS is the range corresponding to the room of the real space RS in which the user U exists. Therefore, the size of the image of the virtual space VS generated by the virtual space generation unit 30 (that is, the size of the virtual space VS) corresponds to the room of the real space RS.
  • In the present embodiment, the avatar 5 generated by the virtual space generation unit 30 is an anthropomorphized animal, and it moves in response to the movements of a person.
  • When a plurality of users U are present, the virtual space generation unit 30 generates a plurality of avatars, one corresponding to each user U.
  • The predetermined objects generated by the virtual space generation unit 30 include a first object 6 and a second object 7 located at coordinates different from those of the first object 6 in the virtual space VS. Specifically, the first object 6 is generated at a position far enough from the avatar 5 that the user U can recognize its whole image through the avatar 5, while the second object 7 is generated within the reach of the avatar 5.
  • The first object 6 is composed of a first building 61a, a second building 61b, and a third building 61c erected on a base 60.
  • In the following, these buildings are collectively referred to as the "buildings 61".
  • The second object 7 is composed of a first model 71a, a second model 71b, and a third model 71c placed on a plate 70 corresponding to the base 60.
  • In the following, these models are collectively referred to as the "models 71".
  • The second object 7 is the first object 6 reduced to such a size that it can be operated with the hands of the avatar 5. That is, the shape of the second object 7 is the same as that of the first object 6, but the second object 7 is smaller than the first object 6.
  • The user state recognition unit 31 recognizes the state of the user U based on image data of the user U, including the markers 1, captured by the camera 2.
  • The user state recognition unit 31 has a user posture recognition unit 31a, a user coordinate recognition unit 31b, and a user direction recognition unit 31c.
  • The user posture recognition unit 31a, the user coordinate recognition unit 31b, and the user direction recognition unit 31c extract the markers 1 attached to the user U from the image data of the user U and, based on the extraction result, recognize the posture, coordinates, and direction of the user U.
  • the avatar control unit 32 controls the state of the avatar 5 (specifically, the change in posture, coordinates, and direction) according to the change in the state of the user U recognized by the user state recognition unit 31.
  • The item state recognition unit 33 recognizes the state of the item I based on image data of the item I, including the markers 1, captured by the camera 2.
  • The item state recognition unit 33 has an item posture recognition unit 33a, an item coordinate recognition unit 33b, and an item direction recognition unit 33c.
  • The item posture recognition unit 33a, the item coordinate recognition unit 33b, and the item direction recognition unit 33c extract the markers 1 attached to the item I from the image data of the item I and, based on the extraction result, recognize the posture, coordinates, and direction of the item I.
  • The object control unit 34 controls the states of the first object 6 and the second object 7 according to the movements of the avatar 5 recognized via the avatar control unit 32 and the state of the item I recognized by the item state recognition unit 33.
  • the object control unit 34 has a first object control unit 34a and a second object control unit 34b.
  • the second object control unit 34b controls the state of the second object 7 according to the operation of the avatar 5 or the change of the state of the item I.
  • the first object control unit 34a controls the state of the first object 6 according to the change of the state of the second object 7.
  • As will be described later, the posture, coordinates, and direction of the second object 7 correspond to the posture, coordinates, and direction of the item I. Further, the posture (the orientation of each building 61) and coordinates of the first object 6 correspond to the posture (the orientation of each model 71) and coordinates of the second object 7. On the other hand, the direction of the first object 6 (the phase of the base 60 about the yaw axis) is fixed and always constant regardless of the direction of the second object 7 (the phase of the plate 70 about the yaw axis) (see FIGS. 5, 7, and 8).
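A compact numerical sketch of this correspondence is given below: the second object simply mirrors the item, while the first object takes over the per-model layout (scaled up) but keeps the yaw of its base fixed. The scale factor, the two-dimensional simplification, and the data layout are assumptions made for illustration, not values from the publication.

```python
import math

SCALE = 20.0     # assumed: the first object 6 is 20 times larger than the second object 7
BASE_YAW = 0.0   # the direction (yaw) of the base 60 is fixed and never follows the plate 70

def update_second_object(item_state):
    """The second object 7 (plate 70 and models 71) simply mirrors the real item I."""
    return dict(item_state)

def update_first_object(second_object):
    """The buildings 61 follow the models 71, but the base 60 keeps its own yaw.

    Model positions are expressed relative to the plate 70, so turning the whole
    plate changes neither the base 60 nor the layout of the buildings on it.
    """
    buildings = {
        name: (tuple(SCALE * c for c in local_pos), local_yaw)
        for name, (local_pos, local_yaw) in second_object["models"].items()
    }
    return {"base_yaw": BASE_YAW, "buildings": buildings}

# Example: the item (and hence the plate) has been turned by 90 degrees, but the
# resulting first object keeps base_yaw == 0; only the per-model layout carries over.
item_state = {"plate_yaw": math.pi / 2,
              "models": {"model_71a": ((0.10, 0.05), 0.0),
                         "model_71b": ((-0.10, 0.05), math.pi)}}
print(update_first_object(update_second_object(item_state)))
```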
  • the output information determination unit 35 determines the information regarding the virtual space VS to be recognized by the user U via the HMD4.
  • the output information determination unit 35 has an image determination unit 35a and an audio determination unit 35b.
  • The image determination unit 35a determines, based on the states of the avatar 5, the first object 6, and the second object 7, the image of the virtual space VS that the user U corresponding to the avatar 5 is made to recognize via the monitor 40 of the HMD 4.
  • The sound determination unit 35b determines, based on the states of the avatar 5, the first object 6, and the second object 7, the sound, related to that image of the virtual space VS, that the user U corresponding to the avatar 5 is made to recognize via the speaker 41 of the HMD 4.
  • each processing unit constituting the virtual space experience system of the present invention is not limited to the above configuration.
  • a part of the processing unit provided in the server 3 in the present embodiment may be provided in the HMD 4.
  • a plurality of servers may be used, or the servers may be omitted and the CPUs mounted on the HMD may be linked to each other.
  • A speaker other than the one mounted on the HMD may be provided. Further, in addition to devices that act on sight and hearing, devices that act on smell or touch, for example by producing odors, wind, and the like according to the virtual space, may be included.
  • the virtual space generation unit 30 of the server 3 generates the virtual space VS, the avatar 5 to exist in the virtual space VS, and various objects (FIG. 4 / STEP101).
  • the virtual space generation unit 30 generates an image as a background of the virtual space VS. Further, the virtual space generation unit 30 generates an image of the avatar 5 existing in the virtual space VS based on the image of the user U taken by the camera 2. Further, the virtual space generation unit 30 generates images of the first object 6 and the second object 7 based on the image of the item I taken by the camera 2.
  • the avatar control unit 32 of the server 3 determines the state of the avatar 5 in the virtual space VS based on the state of the user U in the real space RS (FIG. 4 / STEP102).
  • the user state recognition unit 31 recognizes the state of the user U in the real space RS based on the state of the user U taken by the camera 2.
  • the avatar control unit 32 determines the state of the avatar 5 based on the recognized state of the user U.
  • the object control unit 34 of the server 3 determines the states of the first object 6 and the second object 7 in the virtual space VS based on the state of the item I in the real space RS (FIG. 4 / STEP103).
  • the item state recognition unit 33 recognizes the state of the item I in the real space RS based on the state of the item I taken by the camera 2. After that, the first object control unit 34a of the object control unit 34 determines the state of the first object 6 based on the recognized state of the item I. Further, the second object control unit 34b of the object control unit 34 determines the state of the second object 7 based on the recognized state of the item I.
  • Next, the image determination unit 35a and the sound determination unit 35b of the output information determination unit 35 of the server 3 determine the image and sound to be recognized by the user U, based on the states of the avatar 5, the first object 6, and the second object 7 in the virtual space VS (FIG. 4 / STEP104).
  • Next, the HMD 4 worn by the user U displays the determined image on the monitor 40 and produces the determined sound from the speaker 41 (FIG. 4 / STEP105), and the current processing ends.
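Written out as a single sequence, the start-up flow of FIG. 4 might look like the sketch below. The stand-in functions merely return placeholder values so that the five steps can be run end to end; their names and return shapes are invented for illustration, and only the ordering of STEP101 to STEP105 follows the description.

```python
# Invented stand-ins for the recognition and control processing of the units 30 to 35.
def generate_virtual_space():            return {"background": "<room-sized VS>"}
def recognize_user_state(frame):         return {"coords": (0.0, 0.0, 0.0), "yaw": 0.0}
def recognize_item_state(frame):         return {"plate_yaw": 0.0, "models": {}}
def control_avatar(user_state):          return dict(user_state)
def control_objects(item_state):         return ({"base_yaw": 0.0}, dict(item_state))
def determine_output(space, avatar, first_obj, second_obj):
    return ("<image of the VS>", "<sound of the VS>")

def start_up(capture_frame, show_on_monitor_40, play_on_speaker_41):
    """Sketch of FIG. 4: STEP101 (generation) through STEP105 (presentation on the HMD 4)."""
    space = generate_virtual_space()                                       # STEP101
    frame = capture_frame()                                                # images from the camera 2
    avatar = control_avatar(recognize_user_state(frame))                   # STEP102
    first_obj, second_obj = control_objects(recognize_item_state(frame))   # STEP103
    image, sound = determine_output(space, avatar, first_obj, second_obj)  # STEP104
    show_on_monitor_40(image)                                              # STEP105
    play_on_speaker_41(sound)

start_up(capture_frame=lambda: None, show_on_monitor_40=print, play_on_speaker_41=print)
```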
  • Through the above processing, the user U is put into a state of recognizing the virtual space VS in which the first object 6 to be operated exists at a position where its whole image can be seen, and the second object 7 serving as the operation medium exists within the reach of the avatar 5.
  • The first object 6 to be operated in the present embodiment is composed of the base 60 and the plurality of buildings 61 erected on the base 60. The second object 7 is composed of the plate 70, which has the same shape as or a shape similar to the base 60, and the models 71 of the plurality of buildings, which have the same shapes as or shapes similar to the buildings 61 and are placed on the plate 70.
  • Here, an "operation" on the second object 7 refers to rotating the entire second object 7 about the yaw axis by turning the plate 70, and to changing the layout on the plate 70 by changing the coordinates and postures (orientations with respect to the plate 70) of the models 71.
  • the user state recognition unit 31 determines whether or not the state of the user U has changed (FIG. 6 / STEP201).
  • the avatar control unit 32 changes the state of the avatar 5 based on the change in the state of the user U (FIG. 6 / STEP202).
  • Next, the object control unit 34 determines whether or not the change in the state of the avatar 5 (that is, its movement) executed by the avatar control unit 32 is a movement that operates the second object 7, the operation medium (FIG. 6 / STEP203).
  • Specifically, the object control unit 34 determines whether the movement of the avatar 5 is a movement that operates the second object 7 by determining whether the changes in the posture, coordinates, and direction of the avatar 5 with respect to the second object 7 in the virtual space VS correspond to predetermined changes in posture, coordinates, and direction.
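The publication leaves the "predetermined" changes unspecified, so the sketch below reduces the STEP203 check to two assumed conditions: the avatar's hand is within reach of the second object 7, and the hand actually moved by more than a small threshold during the control cycle. The threshold values and the hand-centric simplification are illustrative assumptions.

```python
import math

REACH_RADIUS = 0.6     # assumed reach of the avatar's hand to the second object 7, in metres
MIN_HAND_MOVE = 0.02   # assumed minimum hand displacement per cycle to count as an operation

def is_operation_on_second_object(hand_pos, prev_hand_pos, second_object_pos):
    """STEP203 sketch: does the avatar's movement operate the second object 7?"""
    within_reach = math.dist(hand_pos, second_object_pos) <= REACH_RADIUS
    moved_enough = math.dist(hand_pos, prev_hand_pos) >= MIN_HAND_MOVE
    return within_reach and moved_enough

# Example: the hand is about 0.3 m from the plate 70 and is moving, so the
# motion is treated as an operation on the second object.
print(is_operation_on_second_object(hand_pos=(1.00, 1.20, 0.90),
                                    prev_hand_pos=(1.00, 1.15, 0.90),
                                    second_object_pos=(1.20, 1.00, 0.80)))
```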
  • In that case, the second object control unit 34b of the object control unit 34 changes the posture, coordinates, and direction of the second object 7 based on the operation by the avatar 5 (FIG. 6 / STEP204).
  • the first object control unit 34a of the object control unit 34 changes the posture and coordinates of the first object 6 based on the change of the posture and coordinates of the second object 7 (FIG. 6 / STEP205).
  • Suppose, for example, that from the state before the second object 7 is operated by the avatar 5, the positions of the first model 71a and the second model 71b included in the second object 7 are swapped and their orientations are adjusted, as shown in FIG. 7. That is, suppose that the coordinates and postures (their positions and orientations with respect to the plate 70) of the first model 71a and the second model 71b are changed.
  • In that case, the coordinates and orientations of the corresponding first building 61a and second building 61b of the first object 6 change from the state shown in FIG. 5 to the state shown in FIG. 7.
  • Also suppose that, from the state before the second object 7, the operation medium, is operated by the avatar 5, the direction of the plate 70 included in the second object 7 (its phase about the yaw axis) is changed so as to change the direction of the entire second object 7, and the coordinates of the third model 71c are then changed, as shown in FIG. 8.
  • In that case, regardless of the change in the direction of the plate 70, the direction (phase about the yaw axis) of the base 60 corresponding to the plate 70 does not change from the state shown above, and only the coordinates of the third building 61c corresponding to the third model 71c (its position relative to the base 60 and to the first building 61a and the second building 61b) change. That is, the direction of the first object 6 with respect to the user U does not change.
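This behaviour can be reproduced numerically. In the sketch below, model positions recognized in the room frame are first expressed relative to the plate 70 (undoing the plate's yaw), and only that plate-relative layout is transferred, scaled up, onto the base 60, whose own yaw never changes. The scale factor and the coordinates are invented for illustration.

```python
import math

SCALE = 20.0   # assumed enlargement from the second object 7 to the first object 6

def to_plate_frame(abs_pos, plate_center, plate_yaw):
    """Express an absolute model position relative to the plate 70, with its yaw removed."""
    dx, dy = abs_pos[0] - plate_center[0], abs_pos[1] - plate_center[1]
    c, s = math.cos(-plate_yaw), math.sin(-plate_yaw)
    return (c * dx - s * dy, s * dx + c * dy)

def building_position(model_abs_pos, plate_center, plate_yaw, base_center=(0.0, 0.0)):
    """Position of the corresponding building 61 on the base 60 (base yaw fixed at zero)."""
    lx, ly = to_plate_frame(model_abs_pos, plate_center, plate_yaw)
    return (base_center[0] + SCALE * lx, base_center[1] + SCALE * ly)

plate_center = (1.0, 1.0)
before = building_position((1.1, 1.0), plate_center, plate_yaw=0.0)
# The plate is turned by 90 degrees and the model is carried around with it, so its
# absolute position changes while its plate-relative position stays the same.
after_turn = building_position((1.0, 1.1), plate_center, plate_yaw=math.pi / 2)
print(before, after_turn)   # identical up to floating-point error: no building has moved
```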
  • Next, the image determination unit 35a and the sound determination unit 35b of the output information determination unit 35 of the server 3 determine the image and sound to be recognized by the user U, based on the states of the avatar 5, the first object 6, and the second object 7 in the virtual space VS (FIG. 6 / STEP206).
  • Next, the HMD 4 worn by the user U displays the determined image on the monitor 40 and produces the determined sound from the speaker 41 (FIG. 6 / STEP207), and the current processing ends.
  • the VR system S repeatedly executes the above processing in a predetermined control cycle until the end instruction by the user U is recognized.
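Taken together, the repeated cycle of FIG. 6 might be sketched as the loop below. The helper callables are stand-ins for the processing that the units 31 to 35 would perform, and the end condition and cycle time are assumptions; only the ordering of STEP201 to STEP207 follows the description.

```python
import time

CONTROL_CYCLE_S = 0.02   # assumed control cycle of roughly 50 Hz

def run_operation_cycle(recognize_user, user_changed, update_avatar, is_operation,
                        update_second, update_first, determine_output, present,
                        end_requested):
    """Rough sketch of FIG. 6 (STEP201 to STEP207), repeated until the user ends it."""
    while not end_requested():
        user_state = recognize_user()
        if user_changed(user_state):                    # STEP201
            avatar_state = update_avatar(user_state)    # STEP202
            if is_operation(avatar_state):              # STEP203
                second = update_second(avatar_state)    # STEP204
                update_first(second)                    # STEP205
            image, sound = determine_output()           # STEP206
            present(image, sound)                       # STEP207
        time.sleep(CONTROL_CYCLE_S)

# Minimal demo wiring with throw-away lambdas, just to show the call shape.
cycles = iter(range(3))
run_operation_cycle(
    recognize_user=lambda: {"moved": True},
    user_changed=lambda s: s["moved"],
    update_avatar=lambda s: {"hand": (0.0, 0.0, 0.0)},
    is_operation=lambda a: True,
    update_second=lambda a: {"plate_yaw": 0.0},
    update_first=lambda s: None,
    determine_output=lambda: ("<image>", "<sound>"),
    present=lambda img, snd: print(img, snd),
    end_requested=lambda: next(cycles, None) is None,
)
```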
  • As described above, the VR system S of the present embodiment is configured so that the state of the first object 6 is changed according to the change in the state of the second object 7 caused by the operation. That is, the first object 6 is the operation target, and the second object 7 is its operation medium.
  • Here, the second object 7 has a shape corresponding to the first object 6. Therefore, the user U can perform the same operations on the second object 7, the operation medium, as on the first object 6, the actual operation target.
  • If the direction of the first object 6 and the direction of the second object 7 were completely synchronized, the direction of the first object 6 would switch frequently, and it could conversely become difficult for the user U to grasp changes in the state of the first object 6 as a whole.
  • Therefore, the posture and coordinates of the first object 6 are changed in correspondence with the second object 7 while its direction is kept fixed. This makes it easier for the user U to observe the state of the first object 6 while freely moving the second object 7. As a result, the user U can easily grasp changes in the state of the first object 6.
  • When the first object 6 is large and the second object 7 is of a similar size, it may be difficult for the user U to make the avatar 5 corresponding to the user U act on the second object 7.
  • Therefore, the second object 7 has the same shape as or a shape similar to the first object 6 and is made smaller than the first object 6 so that it can be operated with the hands of the avatar 5. As a result, the user U can easily make the avatar 5 act on the second object 7.
  • In the above embodiment, the shape of the second object 7, the operation medium, is the same as that of the first object 6, the operation target, but the second object 7 is smaller than the first object 6.
  • However, the second object of the present invention is not limited to such a configuration; it suffices that the second object is located at coordinates different from those of the first object in the virtual space, has a shape corresponding to at least a part of the first object, and can be operated by the user through the movement of the avatar.
  • Here, the "corresponding shape" includes not only the same shape but also a similar shape (that is, the same shape at a different size), a deformed shape, the same shape as a partially cut-out portion, and the like.
  • For example, the first object and the second object may have the same size as well as the same shape.
  • The second object may also be generated by imitating only a part of the first object.
  • For example, the first object may be an entire car, and the second object may be only the part of the car that is being designed (for example, the internal structure under the bonnet), as sketched below.
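As a purely illustrative sketch of that idea, a second object could be built by cutting one named part out of a hierarchical first object and scaling it down to a size the avatar can handle. The part hierarchy, names, sizes, and scale below are invented for illustration.

```python
# A toy hierarchical "first object": a car described as named parts with sizes in metres.
CAR = {
    "body":   {"size": (4.5, 1.8, 1.4)},
    "bonnet": {"size": (1.2, 1.6, 0.3),
               "children": {"engine":  {"size": (0.8, 0.7, 0.6)},
                            "battery": {"size": (0.3, 0.2, 0.2)}}},
}

def make_second_object(first_object, part_name, target_width=0.4):
    """Cut out one part of the first object and scale it so the avatar can handle it."""
    part = first_object[part_name]
    scale = target_width / part["size"][0]

    def scaled(node):
        out = {"size": tuple(scale * s for s in node["size"])}
        if "children" in node:
            out["children"] = {name: scaled(child) for name, child in node["children"].items()}
        return out

    return {part_name: scaled(part)}

# A second object for working on the bonnet only, roughly 0.4 m wide.
print(make_second_object(CAR, "bonnet"))
```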
  • In the above embodiment, one second object 7 serving as the operation medium is generated for one first object 6 serving as the operation target.
  • However, the objects of the present invention are not limited to such a configuration.
  • For example, a plurality of second objects serving as operation media may be generated for one first object to be operated.
  • Conversely, only one second object may be generated for a plurality of first objects.
  • the posture and coordinates of the first object 6 which is the operation target are changed according to the change of the posture and coordinates of the second object 7 which is the operation medium.
  • the direction of the first object 6 is fixed regardless of the change in the direction of the second object 7.
  • However, the change in the state of the first object in the present invention is not limited to such a configuration, and may be any change that corresponds to the change in the state of the second object.
  • the "state” in the present invention refers to a state that can change according to the intention or operation of the user U.
  • For example, it refers to the operating state of a body or mechanism, the position of coordinates, posture, direction, shape, color, size, and the like. Accordingly, a "change" of the state refers to the start, progress, and stop of a movement, the movement of coordinates, and changes in posture, direction, shape, color, size, and the like.
  • the direction of the first object 6 may also be changed according to the change in the state of the second object 7.
  • the shape, color, size, etc. of the first object may be changed according to the state of the second object.
  • the changes do not necessarily have to be in perfect agreement.
  • For example, the state of the first object may be changed with a delay relative to the change in the state of the second object.
  • Further, only some of the plurality of states may be changed in correspondence with each other. It may also be possible to switch, by an operation performed by the user through the movement of the avatar or the like, between making only some of the states correspond and making all of them correspond.
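One possible, purely illustrative way to realize such a delayed and partial correspondence is to queue the second object's states and apply only a chosen subset of their components to the first object a few control cycles later, as sketched below; the delay length and the followed components are assumptions.

```python
from collections import deque

class LaggedPartialFollower:
    """Apply selected state components of the second object to the first, after a delay."""

    def __init__(self, delay_cycles=5, followed_keys=("coordinates", "posture")):
        self.delay = delay_cycles
        self.followed = set(followed_keys)   # e.g. "direction" is deliberately left out
        self.queue = deque()
        self.first_state = {}

    def push_second_object_state(self, second_state):
        self.queue.append(dict(second_state))
        if len(self.queue) > self.delay:
            delayed = self.queue.popleft()
            # Only the followed components are mirrored onto the first object.
            self.first_state.update({k: v for k, v in delayed.items() if k in self.followed})
        return self.first_state

follower = LaggedPartialFollower(delay_cycles=2)
for cycle in range(4):
    second = {"coordinates": (cycle, 0.0), "posture": {"model_71a": 10 * cycle}, "direction": cycle}
    print(cycle, follower.push_second_object_state(second))
```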
  • In the above embodiment, the virtual space generation unit 30 generates the second object 7 in correspondence with the recognized item I, and the second object control unit 34b controls the state of the second object 7 according to the state of the item I recognized by the item state recognition unit 33.
  • However, the virtual space experience system of the present invention is not limited to such a configuration, and an item corresponding to the second object need not exist.
  • In that case, the second object, the operation medium, can be generated freely in correspondence with the first object, the operation target.
  • However, the first object and the second object can then no longer be controlled based on the state of an item as in the above embodiment, so it suffices to control them only according to the movements of the avatar (and, by extension, the changes in the state of the user on which the control of the avatar is based).
  • 41 ... speaker (sound generator), 60 ... base, 61 ... building, 61a ... first building, 61b ... second building, 61c ... third building, 70 ... plate, 71 ... model, 71a ... first model, 71b ... second model, 71c ... third model, I ... item, U ... user, RS ... real space, S ... VR system (virtual space experience system), VS ... virtual space.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Provided is a virtual space body sensation system with which it is possible to operate an object in a virtual space without reducing the feeling of immersion. A VR system S comprises: an avatar control unit 32 that controls the state of an avatar in the virtual space in accordance with the state of a user; and an object control unit 34 that controls a first object and a second object in the virtual space. The second object is positioned at coordinates different from those of the first object in the virtual space, has a shape corresponding to the first object, and can be operated by the user via the actions of the avatar. The object control unit 34 changes the state of the first object in accordance with changes in the state of the second object caused by an operation by the user.

Description

Virtual space experience system
The present invention relates to a virtual space experience system that allows a user to recognize that he or she exists in a virtual space.
Conventionally, there is a virtual space experience system in which a virtual space is generated by a server or the like, and an image of the virtual space and an image of an avatar corresponding to the user in that virtual space are presented to the user via a head-mounted display (hereinafter sometimes referred to as "HMD"), so that the user recognizes himself or herself as existing in the virtual space and experiences virtual reality.
In this kind of virtual space experience system, a motion capture device or the like recognizes the user's state in the real space (for example, body movements, coordinate movements, and posture changes), and the state of the avatar is changed according to the recognized state (see, for example, Patent Document 1).
In the virtual space experience system of Patent Document 1, the state of a game-pad-shaped object serving as the operation medium in the virtual space is changed according to the state of the user or of a controller in the real space, and in response to that change, the character to be operated in the virtual space is made to perform a predetermined action.
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2019-033881
However, since the system described in Patent Document 1 adopts a game-pad-shaped object as the operation medium, the actions of the operation target (the character) operated through that object are simplified and restricted according to the shape and functions of the game pad.
As a result, the user is given the impression that intuitive operation of the object to be operated is hindered, which may reduce the user's sense of immersion in the virtual space.
The present invention has been made in view of the above points, and an object of the present invention is to provide a virtual space experience system capable of operating objects in a virtual space without reducing the sense of immersion.
The virtual space experience system of the present invention comprises:
a virtual space generation unit that generates a virtual space corresponding to the real space in which the user exists and including an avatar corresponding to the user and a predetermined object;
a user state recognition unit that recognizes the state of the user;
an avatar control unit that controls the state of the avatar according to the recognized state of the user;
an object control unit that controls the state of the predetermined object;
an image determination unit that determines, based on the states of the avatar and the predetermined object, an image of the virtual space to be recognized by the user; and
an image display that causes the user to recognize the determined image of the virtual space.
The predetermined object includes a first object, and a second object that is located at coordinates different from those of the first object in the virtual space, that has a shape corresponding to at least a part of the first object, and that can be operated by the user through the movement of the avatar.
The object control unit is characterized in that, when the second object is operated by the user, it changes the state of the first object according to the change in the state of the second object caused by the operation.
Here, the "state" refers to something that can change according to the intention or operation of the user, such as the operating state of a body or mechanism, the position of coordinates, posture, direction, shape, color, and size. Accordingly, a "change" of the state refers to the start, progress, and stop of a movement, the movement of coordinates, and changes in posture, direction, shape, color, size, and the like.
Further, the "corresponding shape" here includes not only the same shape but also a similar shape (that is, the same shape at a different size), a deformed shape, the same shape as a partially cut-out portion, and the like.
As described above, in the virtual space experience system of the present invention, when the second object is operated by the user through the movement of the avatar, the state of the first object is changed according to the change in the state of the second object caused by that operation. That is, the first object is the operation target, and the second object is its operation medium.
Here, the second object has a shape corresponding to at least a part of the first object. Therefore, the user can perform the same operations on the second object, the operation medium, as would be performed on the first object, the actual operation target.
This gives the user the impression of directly operating the first object, and in turn suppresses giving the user the impression that intuitive operation of the first object to be operated is hindered. As a result, a reduction in the user's sense of immersion in the virtual space can be suppressed.
Further, in the virtual space experience system of the present invention, it is preferable that, when the second object is operated by the user, the object control unit changes the other states of the first object according to the change in the state of the second object caused by the operation, while keeping at least one of the coordinates, posture, and direction of the first object fixed.
If the state of the first object, the operation target, and the state of the second object, the operation medium, were completely synchronized, it could conversely become difficult for the user to grasp changes in the state of the first object.
Therefore, by fixing at least one of the coordinates, posture, and direction of the first object while changing its other states in correspondence with the second object, the user can more easily observe the first object while freely moving the second object. As a result, the user can easily grasp changes in the state of the first object.
Further, it is preferable that the virtual space experience system of the present invention includes an item state recognition unit that recognizes the state of an item existing in the real space, and that the virtual space generation unit generates the second object in correspondence with the recognized item.
If there is no item in the real space corresponding to the second object serving as the operation medium (for example, a model shaped like the first object), the second object can be generated freely in correspondence with the first object to be operated. On the other hand, no tactile sensation can be given to the user, which may reduce the user's sense of immersion in the virtual space.
Therefore, by generating the second object, the operation medium (that is, the object touched by the user U), in correspondence with an item existing in the real space, a tactile sensation can be given to the user, and the user's sense of immersion in the virtual space can be enhanced.
Further, in the virtual space experience system of the present invention, it is preferable that the second object has the same shape as or a shape similar to the first object and is smaller than the first object.
When the first object is large, a second object of the same size could make it difficult for the user to make the avatar corresponding to the user act on the second object. With this configuration, the user can easily make the avatar act on the second object.
FIG. 1 is a schematic diagram showing the schematic configuration of the VR system according to the embodiment.
FIG. 2 is a block diagram showing the configuration of the processing units of the VR system of FIG. 1.
FIG. 3 is a schematic diagram of the virtual space generated in the VR system of FIG. 1.
FIG. 4 is a flowchart showing the processing executed when use of the VR system of FIG. 1 is started.
FIG. 5 is a schematic diagram of the virtual space recognized by the user in the VR system of FIG. 1, showing the state before the second object is operated.
FIG. 6 is a flowchart showing the processing executed by the VR system of FIG. 1 when an object is operated.
FIG. 7 is a schematic diagram of the virtual space recognized by the user in the VR system of FIG. 1, showing the state after an operation changing the coordinates of the second object.
FIG. 8 is a schematic diagram of the virtual space recognized by the user in the VR system of FIG. 1, showing the state after an operation changing the direction of the second object.
Hereinafter, the VR system S, which is a virtual space experience system according to the embodiment, will be described with reference to the drawings. The VR system S is a system that allows the user to experience virtual reality (so-called VR) by making the user recognize that he or she exists in the virtual space.
[Schematic configuration of the system]
First, the schematic configuration of the VR system S will be described with reference to FIGS. 1 and 2.
As shown in FIG. 1, the VR system S includes a plurality of markers 1 attached to the user U and to the item I existing in the real space RS, a camera 2 that photographs the user U and the item I (strictly speaking, the markers 1 attached to them), a server 3 that determines the image and sound of the virtual space VS (see FIG. 3 and the like) to be recognized by the user U, and a head-mounted display (hereinafter "HMD 4") that causes the user U to recognize the determined image and sound.
In the VR system S, the camera 2, the server 3, and the HMD 4 can wirelessly exchange information with each other. However, any of them may instead be configured to exchange information with each other by wire.
Of the plurality of markers 1, those attached to the user U are attached to the head, both hands, and both feet of the user U via the HMD 4, the gloves, and the shoes worn by the user U.
Of the plurality of markers 1, those attached to the item I are attached to positions that become feature points in the images of the item I captured by the camera 2. In the present embodiment, since a plurality of rectangular-parallelepiped building blocks placed on a plate are adopted as the item I, the markers are attached to their edges, near their corners, and so on.
The markers 1 are used, as described later, to recognize the posture, coordinates, and direction of the user U or the item I in the real space RS. The mounting positions of the markers 1 may therefore be changed as appropriate according to the other devices constituting the VR system S.
The camera 2 is installed so that it can photograph, from multiple directions, the range in the real space RS within which the user U and the item I can move (that is, the range in which posture changes, coordinate movements, direction changes, and the like can be performed).
The server 3 recognizes the markers 1 in the images captured by the camera 2 and, based on the positions of the recognized markers 1 in the real space RS, recognizes the posture, coordinates, and direction of the user U or the item I. The server 3 then determines the image and sound to be recognized by the user U based on that posture, those coordinates, and that direction.
The HMD 4 is worn on the head of the user U. As shown in FIG. 2, the HMD 4 has a monitor 40 (image display) for causing the user U to recognize the image of the virtual space VS determined by the server 3, and a speaker 41 (sound generator) for causing the user U to recognize the sound of the virtual space VS determined by the server 3.
When the user U is made to experience virtual reality using this VR system S, the user U is made to recognize only the image and sound of the virtual space VS, and to recognize that he or she exists in the virtual space as an avatar. That is, the VR system S is configured as a so-called immersive system.
The VR system S includes a so-called motion capture device, composed of the markers 1, the camera 2, and the server 3, as the system for recognizing the states of the user U and the item I in the real space RS.
Here, the "state" refers to something that can change according to the intention or operation of the user U, for example the operating state of a body or mechanism, the position of coordinates, posture, direction, shape, color, and size. Accordingly, a "change" of the state refers to the start, progress, and stop of a movement, the movement of coordinates, and changes in posture, direction, shape, color, size, and the like.
However, the virtual space experience system of the present invention is not limited to such a configuration. For example, when a motion capture device is used, a device having a different number of markers and cameras from the above configuration (for example, one of each) may be used.
A device other than a motion capture device may also be used to recognize the user's state in the real space. Specifically, for example, a sensor such as a GPS sensor may be mounted on the HMD, and the user's state may be recognized based on the output of that sensor. Such a sensor may also be used in combination with a motion capture device as described above.
[Configuration of the processing units]
Next, the configuration of the server 3 will be described in detail with reference to FIGS. 2 and 3.
The server 3 is composed of one or more electronic circuit units including a CPU, RAM, ROM, an interface circuit, and the like. As shown in FIG. 2, the server 3 includes, as functions realized by its hardware configuration or by programs, a virtual space generation unit 30, a user state recognition unit 31, an avatar control unit 32, an item state recognition unit 33, an object control unit 34, and an output information determination unit 35.
As shown in FIG. 3, the virtual space generation unit 30 generates an image serving as the background of the virtual space VS corresponding to the real space RS in which the user U exists, as well as images of the avatar 5 and of the predetermined objects existing in the virtual space VS. The virtual space generation unit 30 also generates sounds related to those images.
In the present embodiment, the movement range of the avatar 5 in the virtual space VS is the range corresponding to the room of the real space RS in which the user U exists. Therefore, the size of the image of the virtual space VS generated by the virtual space generation unit 30 (that is, the extent of the virtual space VS) corresponds to that room of the real space RS.
In the present embodiment, the avatar 5 generated by the virtual space generation unit 30 is an anthropomorphized animal, and it moves in response to the movements of a person. When a plurality of users U are present, the virtual space generation unit 30 generates a plurality of avatars, one corresponding to each user U.
The predetermined objects generated by the virtual space generation unit 30 include a first object 6 and a second object 7 located at coordinates different from those of the first object 6 in the virtual space VS. Specifically, the first object 6 is generated at a position far enough from the avatar 5 that the user U can recognize its whole image through the avatar 5, while the second object 7 is generated within the reach of the avatar 5.
In the present embodiment, the first object 6 is composed of a first building 61a, a second building 61b, and a third building 61c erected on a base 60. In the following, these buildings are collectively referred to as the "buildings 61".
The second object 7 is composed of a first model 71a, a second model 71b, and a third model 71c placed on a plate 70 corresponding to the base 60. In the following, these models are collectively referred to as the "models 71".
The second object 7 is the first object 6 reduced to such a size that it can be operated with the hands of the avatar 5. That is, the shape of the second object 7 is the same as that of the first object 6, but the second object 7 is smaller than the first object 6.
As shown in FIG. 2, the user state recognition unit 31 recognizes the state of the user U based on image data of the user U, including the markers 1, captured by the camera 2. The user state recognition unit 31 has a user posture recognition unit 31a, a user coordinate recognition unit 31b, and a user direction recognition unit 31c.
The user posture recognition unit 31a, the user coordinate recognition unit 31b, and the user direction recognition unit 31c extract the markers 1 attached to the user U from the image data of the user U and, based on the extraction result, recognize the posture, coordinates, and direction of the user U.
The avatar control unit 32 controls the state of the avatar 5 (specifically, changes in its posture, coordinates, and direction) according to changes in the state of the user U recognized by the user state recognition unit 31.
The item state recognition unit 33 recognizes the state of the item I based on image data of the item I, including the markers 1, captured by the camera 2. The item state recognition unit 33 has an item posture recognition unit 33a, an item coordinate recognition unit 33b, and an item direction recognition unit 33c.
The item posture recognition unit 33a, the item coordinate recognition unit 33b, and the item direction recognition unit 33c extract the markers 1 attached to the item I from the image data of the item I and, based on the extraction result, recognize the posture, coordinates, and direction of the item I.
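As a minimal sketch of how the posture, coordinate, and direction recognition units might work once the markers 1 have been extracted, the following assumes a simple two-marker (front/back) layout and a 2-D floor-plane simplification; these assumptions, and all of the names, are introduced here for illustration, since the publication only states that recognition is based on the extracted markers.

```python
import math
from typing import Dict, Tuple

Point = Tuple[float, float]   # 2-D simplification: positions on the floor plane

def recognize_state(markers: Dict[str, Point]) -> dict:
    """Derive coordinates, direction, and a crude posture from extracted marker positions,
    for either the user U or the item I."""
    xs = [p[0] for p in markers.values()]
    ys = [p[1] for p in markers.values()]
    coords = (sum(xs) / len(xs), sum(ys) / len(ys))          # centroid of all markers

    fx, fy = markers["front"]
    bx, by = markers["back"]
    direction = math.degrees(math.atan2(fy - by, fx - bx))   # facing direction about the yaw axis

    # "Posture" here is just each marker's offset from the centroid; a real system
    # would fit a skeleton or rigid-body model to the markers instead.
    posture = {name: (p[0] - coords[0], p[1] - coords[1]) for name, p in markers.items()}
    return {"coords": coords, "direction": direction, "posture": posture}

state = recognize_state({"front": (1.2, 0.5), "back": (1.0, 0.1), "left_hand": (1.4, 0.4)})
```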
The object control unit 34 controls the states of the first object 6 and the second object 7 according to the motion of the avatar 5 executed by the avatar control unit 32 and the state of the item I recognized by the item state recognition unit 33. The object control unit 34 has a first object control unit 34a and a second object control unit 34b.
The second object control unit 34b controls the state of the second object 7 according to the motion of the avatar 5 or a change in the state of the item I. The first object control unit 34a controls the state of the first object 6 according to the change in the state of the second object 7.
As will be described later, the posture, coordinates, and direction of the second object 7 correspond to the posture, coordinates, and direction of the item I. Further, the posture (the orientation of each building 61) and coordinates of the first object 6 correspond to the posture (the orientation of each model 71) and coordinates of the second object 7. On the other hand, the direction of the first object 6 (the phase of the base 60 about the yaw axis) is fixed and always constant regardless of the direction of the second object 7 (the phase of the plate 70 about the yaw axis) (see FIGS. 5, 7, and 8).
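This correspondence can be pictured as a pair of mapping functions: the second object simply mirrors the item I, while the first object takes over the models' coordinates and postures (scaled back up) but keeps its own yaw-axis direction fixed. The dictionary layout, the scale value, and the assumption that model coordinates are already expressed relative to the plate 70 are illustrative only.

```python
def second_object_from_item(item_state: dict) -> dict:
    """Second object 7: mirrors the posture, coordinates, and direction of item I."""
    return dict(item_state)

def first_object_from_second(second_state: dict, scale: float = 80.0,
                             base_yaw_fixed: float = 0.0) -> dict:
    """First object 6: each building follows its model's coordinates and posture
    relative to the plate 70, scaled up to the base 60; the base's yaw stays fixed."""
    buildings = {}
    for name, model in second_state["models"].items():
        buildings[name.replace("model", "building")] = {
            "coords": tuple(c * scale for c in model["coords"]),   # position relative to the base 60
            "yaw": model["yaw"],                                    # orientation relative to the base 60
        }
    # The plate's own rotation (second_state["plate_yaw"]) is deliberately ignored,
    # so turning the plate 70 never turns the base 60: the first object's direction stays constant.
    return {"base_yaw": base_yaw_fixed, "buildings": buildings}

second = second_object_from_item({"plate_yaw": 30.0,
                                  "models": {"model_71a": {"coords": (0.05, 0.03), "yaw": 90.0},
                                             "model_71b": {"coords": (-0.04, 0.06), "yaw": 0.0}}})
first = first_object_from_second(second)
```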
The output information determination unit 35 determines the information about the virtual space VS to be recognized by the user U via the HMD 4. The output information determination unit 35 has an image determination unit 35a and a sound determination unit 35b.
The image determination unit 35a determines, based on the states of the avatar 5, the first object 6, and the second object 7, the image of the virtual space VS to be recognized, via the monitor 40 of the HMD 4, by the user U corresponding to the avatar 5.
The sound determination unit 35b determines, based on the states of the avatar 5, the first object 6, and the second object 7, the sound related to that image of the virtual space VS to be recognized, via the speaker 41 of the HMD 4, by the user U corresponding to the avatar 5.
Note that the processing units constituting the virtual space experience system of the present invention are not limited to the configuration described above.
For example, some of the processing units provided in the server 3 in the present embodiment may instead be provided in the HMD 4. A plurality of servers may also be used, or the server may be omitted and the CPUs mounted in the HMDs may cooperate to perform the processing.
Further, a speaker other than the speaker mounted in the HMD may be provided. In addition to devices that affect sight and hearing, devices that affect smell and touch, such as those producing odors or wind in accordance with the virtual space, may also be included.
[Processes to be executed]
Next, with reference to FIGS. 2 to 8, the processing executed by the VR system S when the user U is made to experience virtual reality using the VR system S will be described. This VR system S is intended for verifying, in the virtual space VS, the arrangement of the individual buildings that make up a group of buildings.
[Processing when use is started]
First, with reference to FIGS. 2 to 5, the processing executed by each processing unit of the VR system S when the user U starts using the VR system S will be described.
In this processing, first, the virtual space generation unit 30 of the server 3 generates the virtual space VS, together with the avatar 5 and the various objects to exist in the virtual space VS (FIG. 4 / STEP 101).
Specifically, as shown in FIG. 3, the virtual space generation unit 30 generates the image serving as the background of the virtual space VS. The virtual space generation unit 30 also generates the image of the avatar 5 to exist in the virtual space VS based on the image of the user U captured by the camera 2, and generates the images of the first object 6 and the second object 7 based on the image of the item I captured by the camera 2.
Next, the avatar control unit 32 of the server 3 determines the state of the avatar 5 in the virtual space VS based on the state of the user U in the real space RS (FIG. 4 / STEP 102).
Specifically, the user state recognition unit 31 first recognizes the state of the user U in the real space RS based on the images of the user U captured by the camera 2. The avatar control unit 32 then determines the state of the avatar 5 based on the recognized state of the user U.
Next, the object control unit 34 of the server 3 determines the states of the first object 6 and the second object 7 in the virtual space VS based on the state of the item I in the real space RS (FIG. 4 / STEP 103).
Specifically, the item state recognition unit 33 first recognizes the state of the item I in the real space RS based on the images of the item I captured by the camera 2. The first object control unit 34a of the object control unit 34 then determines the state of the first object 6 based on the recognized state of the item I, and the second object control unit 34b of the object control unit 34 determines the state of the second object 7 based on the recognized state of the item I.
Next, the image determination unit 35a and the sound determination unit 35b of the output information determination unit 35 of the server 3 determine the image and sound to be recognized by the user U based on the states of the avatar 5, the first object 6, and the second object 7 in the virtual space VS (FIG. 4 / STEP 104).
Next, the HMD 4 worn by the user U displays the determined image on its monitor 40 and produces the determined sound from its speaker 41 (FIG. 4 / STEP 105), and the current round of processing ends.
Through these processes, as shown in FIG. 5, the user U is made to feel as if the user U actually existed in a virtual space VS in which the first object 6 to be operated is present at a position where its overall appearance is visible and the second object 7 serving as the operation medium is present within reach of the avatar 5.
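Condensed into code, the start-up flow of FIG. 4 might look like the sketch below. The stub functions stand in for the processing units of the server 3 and are assumptions; STEP 101 (generation of the background, avatar, and object images) is reduced here to building plain dictionaries.

```python
def recognize_user(frame):
    return {"coords": frame["user_pos"], "yaw": frame["user_yaw"], "posture": {}}

def recognize_item(frame):
    return {"plate_yaw": frame["item_yaw"],
            "models": {m: {"coords": c, "yaw": 0.0} for m, c in frame["models"].items()}}

def start_session(frame, scale=80.0):
    """FIG. 4, STEP 101 to 105: set the initial states and decide the first output."""
    user_state = recognize_user(frame)               # recognition feeding STEP 102
    item_state = recognize_item(frame)               # recognition feeding STEP 103
    avatar = dict(user_state)                        # STEP 102: avatar 5 mirrors the user U
    second = dict(item_state)                        # STEP 103: second object 7 mirrors item I
    first = {"base_yaw": 0.0,                        # STEP 103: first object 6, direction fixed
             "buildings": {m: {"coords": tuple(c * scale for c in v["coords"]), "yaw": v["yaw"]}
                           for m, v in second["models"].items()}}
    output = ("render", avatar, first, second)       # STEP 104: image/sound decision (placeholder)
    return output                                    # STEP 105: handed to the monitor 40 and speaker 41

start_session({"user_pos": (0.0, 0.0), "user_yaw": 0.0, "item_yaw": 10.0,
               "models": {"model_71a": (0.05, 0.03), "model_71b": (-0.04, 0.06)}})
```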
[Processing when an object is operated]
Next, with reference to FIG. 2 and FIGS. 5 to 8, the processing executed by each processing unit of the VR system S when the user U, after starting to use the VR system S, operates the second object 7 serving as the operation medium via the avatar 5 will be described.
Here, the first object 6 to be operated in the present embodiment is composed of the base 60 and the plurality of buildings 61 erected on that base 60. The second object 7 is composed of the plate 70, which has the same shape as or a shape similar to the base 60, and the plurality of building models 71 placed on that plate 70, each having the same shape as or a shape similar to the corresponding building 61.
An "operation" on the second object 7 here refers to rotating the entire second object 7 about the yaw axis by turning the plate 70, and to changing the layout on the plate 70 by changing the coordinates and postures (orientations relative to the plate 70) of the models 71.
In this processing, the user state recognition unit 31 first determines whether or not the state of the user U has changed (FIG. 6 / STEP 201).
If the state of the user U has not changed (NO in STEP 201), the processing returns to STEP 201, and the determination of STEP 201 is executed again in a predetermined control cycle.
On the other hand, if the state of the user U has changed (YES in STEP 201), the avatar control unit 32 changes the state of the avatar 5 based on the change in the state of the user U (FIG. 6 / STEP 202).
Next, the object control unit 34 determines whether or not the change in the state of the avatar 5 (that is, its motion) executed by the avatar control unit 32 is a motion that operates the second object 7 serving as the operation medium (FIG. 6 / STEP 203).
Specifically, the object control unit 34 determines whether or not the motion of the avatar 5 is a motion that operates the second object 7 serving as the operation medium, based on whether or not the change in the posture, coordinates, and direction of the avatar 5 relative to the second object 7 in the virtual space VS corresponds to a predetermined change in posture, coordinates, and direction.
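The check in STEP 203 can be pictured as a simple geometric test on the avatar's state relative to the second object. The grab distance, the hand-position criterion, and the 2-D treatment are assumptions added for illustration; the publication only says that the decision is based on predetermined changes in posture, coordinates, and direction.

```python
import math

GRAB_DISTANCE = 0.15   # assumed threshold, in metres

def is_operating_second_object(avatar_hand_pos, avatar_hand_closed, second_obj_pos):
    """Rough stand-in for STEP 203: the avatar 5 is treated as operating the
    second object 7 when its closed hand is within reach of the object."""
    dx = avatar_hand_pos[0] - second_obj_pos[0]
    dy = avatar_hand_pos[1] - second_obj_pos[1]
    return avatar_hand_closed and math.hypot(dx, dy) <= GRAB_DISTANCE

print(is_operating_second_object((0.10, 0.05), True, (0.12, 0.02)))   # True: within grab range
print(is_operating_second_object((0.50, 0.40), True, (0.12, 0.02)))   # False: too far away
```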
If the motion of the avatar 5 is a motion that operates the second object 7 (YES in STEP 203), the second object control unit 34b of the object control unit 34 changes the posture, coordinates, and direction of the second object 7 based on the operation by the avatar 5 (FIG. 6 / STEP 204).
Next, the first object control unit 34a of the object control unit 34 changes the posture and coordinates of the first object 6 based on the changes in the posture and coordinates of the second object 7 (FIG. 6 / STEP 205).
Specifically, regarding the processing of STEPs 203 to 205, suppose, for example, that from the state before the second object 7 serving as the operation medium is operated by the avatar 5, shown in FIG. 5, the position of the first model 71a and the position of the second model 71b included in the second object 7 are exchanged and their orientations are adjusted, as shown in FIG. 7. That is, suppose that the coordinates and postures of the first model 71a and the second model 71b (their positions and orientations relative to the plate 70) are changed.
In this case, in the first object 6 to be operated as well, the coordinates and postures of the corresponding first building 61a and second building 61b (their positions and orientations relative to the base 60) change from the state shown in FIG. 5 to the state shown in FIG. 7 in accordance with the changes in the coordinates and postures of the first model 71a and the second model 71b of the second object 7.
Suppose, however, that from the state before the second object 7 serving as the operation medium is operated by the avatar 5, shown in FIG. 5, the direction of the plate 70 included in the second object 7 (its phase about the yaw axis) is changed so as to change the direction of the entire second object 7, and the coordinates of the third model 71c are then changed, as shown in FIG. 8.
In this case, in the first object 6 to be operated, the direction of the base 60 corresponding to the plate 70 (its phase about the yaw axis) does not change from the state shown in FIG. 5 regardless of the change in the direction of the plate 70 of the second object 7; only the coordinates of the third building 61c corresponding to the third model 71c (its position relative to the base 60 or to the first building 61a and the second building 61b) change. That is, the direction of the first object 6 with respect to the user U does not change.
Next, the image determination unit 35a and the sound determination unit 35b of the output information determination unit 35 of the server 3 determine the image and sound to be recognized by the user U based on the states of the avatar 5, the first object 6, and the second object 7 in the virtual space VS (FIG. 6 / STEP 206).
Likewise, if the motion of the avatar 5 is not a motion that operates the second object 7 (NO in STEP 203), the image determination unit 35a and the sound determination unit 35b of the output information determination unit 35 of the server 3 determine the image and sound to be recognized by the user U based on the states of the avatar 5, the first object 6, and the second object 7 in the virtual space VS (FIG. 6 / STEP 206).
Next, the HMD 4 worn by the user U displays the determined image on its monitor 40 and produces the determined sound from its speaker 41 (FIG. 6 / STEP 207), and the current round of processing ends.
The VR system S repeatedly executes the above processing in a predetermined control cycle until an end instruction from the user U is recognized.
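Tying STEPs 201 to 207 together, one control cycle could be organized as in the sketch below. Everything other than the control flow taken from FIG. 6 (the state dictionaries, the helper names, the way an operation is encoded, and the fixed base yaw) is an assumption for illustration.

```python
def control_cycle(prev_user, new_user, avatar, second, first, scale=80.0):
    """One pass of FIG. 6, STEP 201 to 206; STEP 207 would hand the result to the HMD 4."""
    if new_user == prev_user:                               # STEP 201: has the user's state changed?
        return avatar, second, first, ("render", avatar, first, second)

    avatar = dict(new_user)                                 # STEP 202: update the avatar 5

    if avatar.get("grabbing") and "moved_model" in avatar:  # STEP 203: operation on the second object 7?
        name, new_coords, new_yaw = avatar["moved_model"]
        second["models"][name] = {"coords": new_coords, "yaw": new_yaw}             # STEP 204
        first["buildings"][name] = {"coords": tuple(c * scale for c in new_coords),
                                    "yaw": new_yaw}                                  # STEP 205 (base yaw untouched)

    return avatar, second, first, ("render", avatar, first, second)                  # STEP 206

second = {"plate_yaw": 0.0, "models": {"model_71a": {"coords": (0.05, 0.03), "yaw": 0.0}}}
first = {"base_yaw": 0.0, "buildings": {"model_71a": {"coords": (4.0, 2.4), "yaw": 0.0}}}
avatar = {"coords": (0.0, 0.0)}
user_now = {"coords": (0.1, 0.0), "grabbing": True,
            "moved_model": ("model_71a", (-0.02, 0.04), 90.0)}
avatar, second, first, output = control_cycle({"coords": (0.0, 0.0)}, user_now, avatar, second, first)
```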
As described above, the VR system S is configured so that, when the second object 7 is operated by the user U via the motion of the avatar 5, the state of the first object 6 changes in accordance with the change in the state of the second object 7 caused by that operation. That is, the first object 6 is the operation target and the second object 7 is its operation medium.
Here, the second object 7 has a shape corresponding to the first object 6. The user U can therefore perform, on the second object 7 serving as the operation medium, the same kinds of operations as would be performed on the first object 6, which is the actual operation target.
This gives the user U the impression of directly operating the first object 6, and thus prevents the user U from being given the impression that intuitive operation of the first object 6 to be operated is hindered. As a result, a reduction in the user U's sense of immersion in the virtual space VS can be suppressed.
Incidentally, if the state of the first object 6 to be operated and the state of the second object 7 serving as the operation medium were completely synchronized, it might conversely become difficult for the user U to grasp changes in the state of the first object 6.
Specifically, in the present embodiment, if the direction of the first object 6 were completely synchronized with the direction of the second object 7, the direction of the first object 6 would switch frequently, which could conversely make it difficult for the user U to grasp changes in the state of the first object 6 as a whole.
Therefore, the VR system S is configured so that, of the states of the first object 6, the direction is kept fixed while the posture and coordinates are changed in correspondence with the second object 7. This makes it easier for the user U to observe the first object 6 while freely moving the second object 7, and in turn allows the user U to easily grasp changes in the state of the first object 6.
Further, when the first object 6 is large, as in the VR system S, and the second object 7 is of the same size as the first object, it can be difficult for the user U to make the avatar 5 corresponding to the user U act on the second object 7.
Therefore, in the VR system S, the second object 7 has the same shape as or a shape similar to the first object 6 and is made smaller than the first object 6, to the extent that it can be operated with the hands of the avatar 5. This allows the user U to easily make the avatar 5 act on the second object 7.
[Other embodiments]
Although the illustrated embodiment has been described above, the present invention is not limited to such an embodiment.
For example, in the above embodiment, the shape of the second object 7 serving as the operation medium is the same as that of the first object 6 to be operated, while the second object 7 is smaller than the first object 6.
However, the second object of the present invention is not limited to such a configuration; it suffices that the second object is located at coordinates different from those of the first object in the virtual space, has a shape corresponding to at least a part of the first object, and can be operated by the user via the motion of the avatar.
Here, a "corresponding shape" includes, besides an identical shape, a similar shape (that is, the same shape at a different size), a deformed (stylized) shape, a shape identical to a cut-out portion, and the like.
Therefore, for example, the first object and the second object may be identical not only in shape but also in size. Alternatively, the second object may be generated by imitating only a part of the first object. Specifically, for example, the first object may be an entire car, and the second object may be only the part of the car being designed (for example, the internal structure of the bonnet).
Further, in the above embodiment, one second object 7 serving as the operation medium is generated for the one first object 6 to be operated. However, the objects of the present invention are not limited to such a configuration. For example, a plurality of second objects serving as operation media may be generated for a single first object to be operated. Conversely, only one second object may be generated for a plurality of first objects.
Further, in the above embodiment, the posture and coordinates of the first object 6 to be operated are changed according to changes in the posture and coordinates of the second object 7 serving as the operation medium, while the direction of the first object 6 is fixed regardless of changes in the direction of the second object 7. However, the change in the state of the first object in the present invention is not limited to such a configuration; it suffices that it corresponds to the change in the state of the second object.
Here, as described above, a "state" in the present invention refers to something that can change according to the intention or operation of the user U, for example, the operating state of a body or mechanism, the position of coordinates, a posture, a direction, a shape, a color, a size, and the like. Accordingly, a "change" in state refers to the start, progress, and stop of a motion, a movement of coordinates, a change in posture, direction, shape, color, or size, and the like.
Therefore, for example, in the above embodiment, the direction of the first object 6 may also be changed according to the change in the state of the second object 7. Further, for example, the shape, color, size, and the like of the first object may also be changed according to the state of the second object.
In doing so, the changes do not necessarily have to match completely. For example, the state of the first object may be changed with a delay relative to the change in the state of the second object. Further, for example, as in the above embodiment, only some of the plurality of states may be changed in correspondence. It may also be made possible to switch, by a user operation such as a motion of the avatar, between changing only some of the states in correspondence and changing all of them in correspondence.
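One way to picture this kind of delayed or partial correspondence is a small filter placed between the two objects' states: only selected components follow the second object, and they do so gradually. The smoothing factor, the selectable key set, and the state layout below are assumptions for illustration.

```python
def follow_partially(first_state, second_state, keys=("coords", "posture"), alpha=0.2):
    """Move the first object's state toward the second object's, but only for the
    selected keys and only by a fraction alpha per control cycle."""
    updated = dict(first_state)
    for key in keys:                                   # e.g. "direction" can simply be left out
        old, new = first_state[key], second_state[key]
        if isinstance(old, tuple):                     # interpolate coordinate-like values
            updated[key] = tuple(o + alpha * (n - o) for o, n in zip(old, new))
        else:                                          # interpolate scalar values (angles, sizes, ...)
            updated[key] = old + alpha * (new - old)
    return updated

first = {"coords": (0.0, 0.0), "posture": 0.0, "direction": 0.0}
second = {"coords": (10.0, 5.0), "posture": 45.0, "direction": 90.0}
first = follow_partially(first, second)   # coords and posture creep toward the target; direction stays put
```

Switching between partial and full correspondence, as mentioned above, would then amount to switching the keys argument between a subset and the full set of state components.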
Further, in the above embodiment, the virtual space generation unit 30 generates the second object 7 in correspondence with the recognized item I, and the second object control unit 34b controls the state of the second object 7 according to the state of the item I recognized by the item state recognition unit 33.
This is done to give the user U a tactile sensation, and thereby heighten the user U's sense of immersion in the virtual space VS, by generating the second object 7, which is the operation medium (that is, the object the user U touches), in correspondence with the item I existing in the real space.
However, the virtual space experience system of the present invention is not limited to such a configuration, and an item corresponding to the second object need not exist. With such a configuration, the second object serving as the operation medium can be generated freely in correspondence with the first object to be operated.
Further, with such a configuration, the first object and the second object cannot be controlled based on the state of an item as in the above embodiment; they may therefore be controlled solely according to the motion of the avatar (and thus the change in the state of the user on which the control of the avatar is based).
1 ... marker, 2 ... camera, 3 ... server, 4 ... HMD, 5 ... avatar, 6 ... first object, 7 ... second object, 30 ... virtual space generation unit, 31 ... user state recognition unit, 31a ... user posture recognition unit, 31b ... user coordinate recognition unit, 31c ... user direction recognition unit, 32 ... avatar control unit, 33 ... item state recognition unit, 33a ... item posture recognition unit, 33b ... item coordinate recognition unit, 33c ... item direction recognition unit, 34 ... object control unit, 34a ... first object control unit, 34b ... second object control unit, 35 ... output information determination unit, 35a ... image determination unit, 35b ... sound determination unit, 40 ... monitor (image display), 41 ... speaker (sound generator), 60 ... base, 61 ... building, 61a ... first building, 61b ... second building, 61c ... third building, 70 ... plate, 71 ... model, 71a ... first model, 71b ... second model, 71c ... third model, I ... item, U ... user, RS ... real space, S ... VR system (virtual space experience system), VS ... virtual space.

Claims (4)

  1.  A virtual space experience system comprising:
     a virtual space generation unit that generates a virtual space which corresponds to a real space in which a user exists and which includes an avatar corresponding to the user and a predetermined object;
     a user state recognition unit that recognizes a state of the user;
     an avatar control unit that controls a state of the avatar according to the recognized state of the user;
     an object control unit that controls a state of the predetermined object;
     an image determination unit that determines an image of the virtual space to be recognized by the user, based on the states of the avatar and the predetermined object; and
     an image display that causes the user to recognize the determined image of the virtual space, wherein
     the predetermined object includes a first object and a second object which is located at coordinates different from those of the first object in the virtual space, which has a shape corresponding to at least a part of the first object, and which is operable by the user via a motion of the avatar, and
     the object control unit, when the second object is operated by the user, changes the state of the first object according to a change in the state of the second object caused by the operation.
  2.  The virtual space experience system according to claim 1, wherein
     when the second object is operated by the user, the object control unit changes the other states of the first object according to the change in the state of the second object caused by the operation, while keeping at least one of the coordinates, the posture, and the direction among the states of the first object fixed.
  3.  The virtual space experience system according to claim 1 or 2, further comprising
     an item state recognition unit that recognizes a state of an item existing in the real space, wherein
     the virtual space generation unit generates the second object in correspondence with the recognized item.
  4.  The virtual space experience system according to any one of claims 1 to 3, wherein
     the second object has the same shape as or a shape similar to the first object and is smaller than the first object.
PCT/JP2020/020547 2020-05-25 2020-05-25 Virtual space body sensation system WO2021240601A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021520432A JPWO2021240601A1 (en) 2020-05-25 2020-05-25
PCT/JP2020/020547 WO2021240601A1 (en) 2020-05-25 2020-05-25 Virtual space body sensation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/020547 WO2021240601A1 (en) 2020-05-25 2020-05-25 Virtual space body sensation system

Publications (1)

Publication Number Publication Date
WO2021240601A1 true WO2021240601A1 (en) 2021-12-02

Family

ID=78723224

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/020547 WO2021240601A1 (en) 2020-05-25 2020-05-25 Virtual space body sensation system

Country Status (2)

Country Link
JP (1) JPWO2021240601A1 (en)
WO (1) WO2021240601A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024095871A1 (en) * 2022-11-02 2024-05-10 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ Method, server, and imaging device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001249747A (en) * 2000-03-03 2001-09-14 Nec Corp Information display device and information display method and recording medium with information display program recorded thereon
JP2017199237A (en) * 2016-04-28 2017-11-02 株式会社カプコン Virtual space display system, game system, virtual space display program and game program
JP2018049629A (en) * 2017-10-10 2018-03-29 株式会社コロプラ Method and device for supporting input in virtual space and program for causing computer to execute the method

Also Published As

Publication number Publication date
JPWO2021240601A1 (en) 2021-12-02

Similar Documents

Publication Publication Date Title
JP6982215B2 (en) Rendering virtual hand poses based on detected manual input
EP3425481B1 (en) Control device
CN102356373B (en) Virtual object manipulation
TWI412392B (en) Interactive entertainment system and method of operation thereof
JP5639646B2 (en) Real-time retargeting of skeleton data to game avatars
CN102129293B (en) Tracking groups of users in motion capture system
JP2022549853A (en) Individual visibility in shared space
US20160225188A1 (en) Virtual-reality presentation volume within which human participants freely move while experiencing a virtual environment
US20170039986A1 (en) Mixed Reality Social Interactions
JP2010257461A (en) Method and system for creating shared game space for networked game
JP2010253277A (en) Method and system for controlling movements of objects in video game
US20080225041A1 (en) Method and System for Vision-Based Interaction in a Virtual Environment
JP5116679B2 (en) Intensive computer image and sound processing and input device for interfacing with computer programs
US11334165B1 (en) Augmented reality glasses images in midair having a feel when touched
WO2008065458A2 (en) System and method for moving real objects through operations performed in a virtual environment
WO2019087564A1 (en) Information processing device, information processing method, and program
CN110140100B (en) Three-dimensional augmented reality object user interface functionality
WO2021240601A1 (en) Virtual space body sensation system
KR102057658B1 (en) Apparatus for providing virtual reality-based game interface and method using the same
WO2021261595A1 (en) Vr training system for aircraft, vr training method for aircraft, and vr training program for aircraft
JP6933849B1 (en) Experience-based interface system and motion experience system
CN110363841B (en) Hand motion tracking method in virtual driving environment
JP6341096B2 (en) Haptic sensation presentation device, information terminal, haptic presentation method, and computer-readable recording medium
JP7104539B2 (en) Simulation system and program
JP6933850B1 (en) Virtual space experience system

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021520432

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20938191

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20938191

Country of ref document: EP

Kind code of ref document: A1