WO2021240601A1 - Virtual space experience system

Virtual space experience system

Info

Publication number
WO2021240601A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
state
virtual space
avatar
item
Prior art date
Application number
PCT/JP2020/020547
Other languages
English (en)
Japanese (ja)
Inventor
良哉 尾小山
Original Assignee
株式会社Abal
Priority date
Filing date
Publication date
Application filed by 株式会社Abal
Priority to PCT/JP2020/020547 (WO2021240601A1)
Priority to JP2021520432A (JPWO2021240601A1)
Publication of WO2021240601A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Definitions

  • the present invention relates to a virtual space experience system that allows a user to recognize that he or she exists in a virtual space.
  • in this type of system, a virtual space is generated by a server or the like, and an image of the virtual space and of the avatar corresponding to the user in the virtual space is presented to the user via a head-mounted display (hereinafter sometimes referred to as "HMD").
  • in some such systems, a motion capture device or the like recognizes the user's state in the real space (for example, body movement, movement of coordinates, change of posture), and the state of the avatar is changed according to the recognized state (see, for example, Patent Document 1).
  • in the system of Patent Document 1, the state of a game-pad-shaped object serving as the operation medium in the virtual space is changed according to the state of the user or of a controller in the real space, and, according to that change, the character to be operated in the virtual space is made to perform a predetermined action.
  • the present invention has been made in view of the above points, and an object of the present invention is to provide a virtual space experience system capable of manipulating objects in a virtual space without reducing the immersive feeling.
  • to achieve this object, the virtual space experience system of the present invention comprises: a virtual space generation unit that generates a virtual space which corresponds to the real space in which the user exists and which contains an avatar corresponding to the user and a predetermined object;
  • a user state recognition unit that recognizes the state of the user;
  • an avatar control unit that controls the state of the avatar according to the recognized state of the user;
  • an object control unit that controls the state of the predetermined object;
  • an image determination unit that determines, based on the states of the avatar and the predetermined object, the image of the virtual space to be recognized by the user; and
  • an image display that causes the user to recognize the determined image of the virtual space.
  • the predetermined object includes a first object and a second object which is located at coordinates different from those of the first object in the virtual space, which has a shape corresponding to at least a part of the first object, and which can be operated by the user through the motion of the avatar.
  • the object control unit is characterized in that, when the second object is operated by the user, it changes the state of the first object according to the change in the state of the second object caused by the operation.
  • the "state” refers to something that can change according to the intention or operation of the user. It refers to the operating state of the body or mechanism, the position of coordinates, posture, direction, shape, color, size, etc. Therefore, the “change” of the state refers to the start, progress and stop of the operation, the movement of coordinates, the posture, the direction, the shape, the color, the change in size, and the like.
  • the "corresponding shape” includes not only the same shape, but also a similar shape (that is, a shape having the same shape but a different size), a deformed shape, the same shape as a partially cut out shape, and the like. include.
  • in this virtual space experience system, when the user operates the second object through the motion of the avatar, the state of the first object is changed according to the change in the state of the second object caused by the operation. That is, the first object is the operation target, and the second object is the operation medium.
  • the second object has a shape corresponding to at least a part of the first object. Therefore, the user can perform the same operation on the second object, which is the operation medium, as on the first object, which is the actual operation target.
  • in the virtual space experience system of the present invention, it is preferable that the object control unit keeps at least one of the coordinates, posture, and direction of the first object fixed, and changes the other states of the first object according to the change in the state of the second object caused by the operation.
  • with this configuration, the other states change in correspondence with the second object while part of the state of the first object remains fixed, so it becomes easier for the user to observe the state of the first object while freely moving the second object. As a result, the user can easily grasp the change in the state of the first object.
  • it is also preferable that the virtual space experience system of the present invention includes an item state recognition unit that recognizes the state of an item existing in the real space, and that the virtual space generation unit generates the second object in correspondence with the recognized item.
  • if the second object were generated without being based on an item, it could be generated freely in correspondence with the first object to be operated; on the other hand, the immersive feeling of the user in the virtual space may be reduced.
  • with the above configuration, the second object, which is the operation medium (that is, the object touched by the user U through the avatar), is generated in correspondence with an item existing in the real space, so such a reduction of the immersive feeling is suppressed.
  • in the virtual space experience system of the present invention, it is preferable that the second object has the same shape as, or a shape similar to, the first object and is smaller than the first object.
  • with this configuration, the user can easily cause the avatar to operate the second object.
  • [Brief description of the drawings] A block diagram showing the configuration of the processing unit of the VR system; a schematic diagram of the virtual space generated in the VR system.
  • hereinafter, the VR system S, which is a virtual space experience system according to an embodiment, will be described with reference to the drawings.
  • the VR system S is a system that allows the user to experience virtual reality (so-called VR) by making the user recognize that he or she exists in the virtual space.
  • the VR system S includes: a plurality of markers 1 attached to the user U and to the item I existing in the real space RS; a camera 2 that photographs the user U and the item I (strictly speaking, the markers 1 attached to them); a server 3 that determines the image and sound of the virtual space VS (see FIG. 3 and the like) to be recognized by the user U; and a head-mounted display (hereinafter referred to as the "HMD 4") that causes the user U to recognize the determined image and sound.
  • the camera 2, the server 3, and the HMD 4 can wirelessly transmit and receive information to and from each other.
  • however, any of them may instead be configured to transmit and receive information to and from the others by wire.
  • of the markers 1, those attached to the user U are attached to the head, both hands, and both feet of the user U via the HMD 4, gloves, and shoes worn by the user U.
  • of the markers 1, those attached to the item I are attached at positions that serve as feature points in the image of the item I taken by the camera 2.
  • in the present embodiment, in which a plurality of rectangular-parallelepiped building blocks placed on a plate are adopted as the item I, the markers are attached to the edges, near the corners, and the like.
  • the markers 1 are used to recognize the posture, coordinates, and direction of the user U or the item I in the real space RS, as described later. Therefore, the mounting positions of the markers 1 may be changed as appropriate according to the other devices constituting the VR system S.
  • the camera 2 is installed so as to be able to photograph, from multiple directions, the operable range of the user U and the item I in the real space RS in which they exist (that is, the range in which changes of posture, movements of coordinates, changes of direction, and the like can be executed).
  • the server 3 recognizes the markers 1 from the images taken by the camera 2, and recognizes the posture, coordinates, and direction of the user U or the item I based on the positions of the recognized markers 1 in the real space RS. Further, the server 3 determines the image and the sound to be recognized by the user U based on that posture, those coordinates, and that direction.
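  • a minimal sketch of this kind of marker-based recognition follows; the two-marker ("front"/"back") convention and the function name are assumptions, since the publication only states that posture, coordinates, and direction are derived from the recognized marker positions.

```python
import math

def estimate_pose(marker_positions: dict) -> tuple:
    """Sketch: derive coordinates and yaw direction of a tracked body (user U or
    item I) from labelled marker positions in the real space RS. The labels
    'front' and 'back' are illustrative assumptions."""
    points = list(marker_positions.values())
    n = len(points)
    # Coordinates: centroid of all markers attached to the body.
    coords = tuple(sum(p[axis] for p in points) / n for axis in range(3))
    # Direction: yaw angle of the vector from the 'back' marker to the 'front' marker.
    fx, fy, _ = marker_positions["front"]
    bx, by, _ = marker_positions["back"]
    yaw = math.atan2(fy - by, fx - bx)
    return coords, yaw

# Example: markers at known positions give the body's coordinates and heading.
pose = estimate_pose({"front": (1.0, 0.0, 1.6), "back": (0.0, 0.0, 1.6)})
```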
  • the HMD 4 is attached to the head of the user U. As shown in FIG. 2, the HMD 4 has a monitor 40 (image display) for causing the user U to recognize the image of the virtual space VS determined by the server 3, and a speaker 41 (voice generator) for causing the user U to recognize the voice of the virtual space VS determined by the server 3.
  • when the user U experiences virtual reality using this VR system S, the user U is made to recognize only the image and the sound of the virtual space VS, and is made to recognize that the user U himself or herself exists in the virtual space as an avatar. That is, the VR system S is configured as a so-called immersive system.
  • the VR system S includes a so-called motion capture device, composed of the markers 1, the camera 2, and the server 3, as a system for recognizing the states of the user U and the item I in the real space RS.
  • the "state” refers to something that can change according to the intention or operation of the user U.
  • it refers to the operating state of the body or mechanism, the position of coordinates, the posture, the direction, the shape, the color, the size, and the like. Therefore, the "change” of the state refers to the start, progress and stop of the operation, the movement of coordinates, the posture, the direction, the shape, the color, the change in size, and the like.
  • the virtual space experience system of the present invention is not limited to such a configuration.
  • for example, as the motion capture device, one having a different number of markers and cameras from the above configuration (for example, one of each) may be used.
  • further, a device other than a motion capture device may be used to recognize the state of the user in the real space.
  • specifically, a sensor such as a GPS sensor may be mounted on the HMD, and the state of the user may be recognized based on the output of that sensor. Further, such a sensor may be used in combination with a motion capture device as described above.
  • the server 3 is composed of one or more electronic circuit units including a CPU, RAM, ROM, interface circuits, and the like. As shown in FIG. 2, the server 3 includes, as functions realized by the implemented hardware configuration or programs, a virtual space generation unit 30, a user state recognition unit 31, an avatar control unit 32, an item state recognition unit 33, an object control unit 34, and an output information determination unit 35.
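  • a structural sketch of how these units could be composed is shown below; the class and attribute names are assumptions that simply mirror the reference numerals 30 to 35, not an API taken from the publication.

```python
class VirtualSpaceGenerator: ...        # 30: background, avatar 5, objects 6 and 7
class UserStateRecognizer: ...          # 31: 31a posture, 31b coordinates, 31c direction
class AvatarController: ...             # 32: mirrors the user's state onto the avatar 5
class ItemStateRecognizer: ...          # 33: 33a posture, 33b coordinates, 33c direction
class ObjectController: ...             # 34: 34a first object, 34b second object
class OutputInformationDeterminer: ...  # 35: 35a image, 35b voice

class Server:
    """Sketch of the server 3 as a composition of its functional units."""
    def __init__(self) -> None:
        self.virtual_space_generator = VirtualSpaceGenerator()
        self.user_state_recognizer = UserStateRecognizer()
        self.avatar_controller = AvatarController()
        self.item_state_recognizer = ItemStateRecognizer()
        self.object_controller = ObjectController()
        self.output_determiner = OutputInformationDeterminer()
```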
  • the virtual space generation unit 30 generates an image serving as the background of the virtual space VS corresponding to the real space RS in which the user U exists, an image of the avatar 5 existing in the virtual space VS, and images of predetermined objects. The virtual space generation unit 30 also generates sounds related to those images.
  • the movement range of the avatar 5 in the virtual space VS is the range corresponding to the room of the real space RS in which the user U exists. Therefore, the size of the image of the virtual space VS generated by the virtual space generation unit 30 (that is, the size of the virtual space VS) corresponds to the room of the real space RS.
  • the avatar 5 generated by the virtual space generation unit 30 has the form of an anthropomorphized animal, and moves in correspondence with the motion of the user U.
  • the virtual space generation unit 30 generates a plurality of avatars so that each corresponds to one of the users U.
  • the predetermined objects generated by the virtual space generation unit 30 include a first object 6 and a second object 7 located at coordinates different from those of the first object 6 in the virtual space VS. Specifically, the first object 6 is generated at a position far enough from the avatar 5 that the user U can recognize its whole image through the avatar 5. The second object 7 is generated within reach of the avatar 5.
  • the first object 6 is composed of a first building 61a, a second building 61b, and a third building 61c erected on a base 60.
  • these buildings are collectively referred to as "building 61".
  • the second object 7 is composed of a first model 71a, a second model 71b, and a third model 71c mounted on a plate 70 corresponding to the base 60.
  • these models are collectively referred to as "model 71".
  • the second object 7 is a reduced copy of the first object 6, scaled down to the extent that it can be operated by the hand of the avatar 5. That is, the shape of the second object 7 is the same as that of the first object 6, but its size is smaller than that of the first object 6.
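  • a minimal sketch of generating such a reduced copy follows; the data layout and the 1/20 scale factor are illustrative assumptions, since the publication does not specify a numerical reduction ratio.

```python
def generate_second_object(first_object_parts: dict, scale: float = 0.05) -> dict:
    """Sketch: build the second object 7 as a scaled-down copy of the first
    object 6, small enough to be handled by the avatar's hand. The mapping of
    part names to positions and the 1/20 scale are assumptions."""
    return {name: tuple(coord * scale for coord in position)
            for name, position in first_object_parts.items()}

# Usage: the plate 70 and models 71 mirror the base 60 and buildings 61 at reduced size.
first_object = {"base_60": (0.0, 0.0, 0.0), "building_61a": (10.0, 0.0, 20.0)}
second_object = generate_second_object(first_object)
```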
  • the user state recognition unit 31 recognizes the state of the user U based on image data of the user U, including the markers 1, taken by the camera 2.
  • the user state recognition unit 31 has a user posture recognition unit 31a, a user coordinate recognition unit 31b, and a user direction recognition unit 31c.
  • the user posture recognition unit 31a, the user coordinate recognition unit 31b, and the user direction recognition unit 31c extract the markers 1 attached to the user U from the image data of the user U and, based on the extraction result, recognize the posture, coordinates, and direction of the user U.
  • the avatar control unit 32 controls the state of the avatar 5 (specifically, the change in posture, coordinates, and direction) according to the change in the state of the user U recognized by the user state recognition unit 31.
  • the item state recognition unit 33 recognizes the state of the item I based on image data of the item I, including the markers 1, taken by the camera 2.
  • the item state recognition unit 33 has an item posture recognition unit 33a, an item coordinate recognition unit 33b, and an item direction recognition unit 33c.
  • the item posture recognition unit 33a, the item coordinate recognition unit 33b, and the item direction recognition unit 33c extract the markers 1 attached to the item I from the image data of the item I and, based on the extraction result, recognize the posture, coordinates, and direction of the item I.
  • the object control unit 34 controls changes in the states of the first object 6 and the second object 7 according to the motion of the avatar 5 controlled by the avatar control unit 32 and the state of the item I recognized by the item state recognition unit 33.
  • the object control unit 34 has a first object control unit 34a and a second object control unit 34b.
  • the second object control unit 34b controls the state of the second object 7 according to the operation of the avatar 5 or the change of the state of the item I.
  • the first object control unit 34a controls the state of the first object 6 according to the change of the state of the second object 7.
  • the posture, coordinates, and direction of the second object 7 correspond to the posture, coordinates, and direction of the item I. Further, the posture (the direction of each building 61) and the coordinates of the first object 6 correspond to the posture (the direction of each model 71) and the coordinates of the second object 7. On the other hand, the direction of the first object 6 (the phase around the yaw axis of the base 60) is fixed and always constant regardless of the direction of the second object 7 (the phase around the yaw axis of the plate 70) (see FIGS. 5, 7, and 8).
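  • the correspondence just described can be sketched as follows; the dictionary-based data model, the key names, and the scale factor are assumptions, but the logic reflects the rule stated above: the models' layout on the plate 70 is mirrored onto the base 60 at full size, while the yaw of the base 60 itself is never updated.

```python
import math

def mirror_layout_onto_first_object(item_pose: dict, first_obj: dict, scale: float = 20.0) -> None:
    """Sketch of the object control described above (all key names and 'scale'
    are illustrative assumptions, not the publication's data model).

    item_pose = {"plate_center": (px, py), "plate_yaw": radians,
                 "models": {name: (x, y, heading), ...}}  # world coordinates
    first_obj = {"base_yaw": fixed_value, "buildings": {}}
    """
    px, py = item_pose["plate_center"]
    plate_yaw = item_pose["plate_yaw"]
    cos_y, sin_y = math.cos(-plate_yaw), math.sin(-plate_yaw)

    for name, (x, y, heading) in item_pose["models"].items():
        # Express each model's pose in plate-local coordinates, so that merely
        # turning the whole plate 70 changes nothing here...
        lx = cos_y * (x - px) - sin_y * (y - py)
        ly = sin_y * (x - px) + cos_y * (y - py)
        local_heading = heading - plate_yaw
        # ...and mirror that layout onto the base 60 at full scale.
        first_obj["buildings"][name] = (lx * scale, ly * scale, local_heading)
    # The base's own yaw is deliberately never touched: first_obj["base_yaw"] stays constant.
```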
  • the output information determination unit 35 determines the information regarding the virtual space VS to be recognized by the user U via the HMD4.
  • the output information determination unit 35 has an image determination unit 35a and an audio determination unit 35b.
  • the image determination unit 35a determines, based on the states of the avatar 5, the first object 6, and the second object 7, the image of the virtual space VS to be recognized, via the monitor 40 of the HMD 4, by the user U corresponding to the avatar 5.
  • the voice determination unit 35b determines, based on the states of the avatar 5, the first object 6, and the second object 7, the voice related to that image of the virtual space VS, to be recognized, via the speaker 41 of the HMD 4, by the user U corresponding to the avatar 5.
  • each processing unit constituting the virtual space experience system of the present invention is not limited to the above configuration.
  • a part of the processing unit provided in the server 3 in the present embodiment may be provided in the HMD 4.
  • a plurality of servers may be used, or the server may be omitted and CPUs mounted on the HMDs may cooperate with each other.
  • a speaker other than the speaker mounted on the HMD may be provided. Further, in addition to devices that affect the senses of sight and hearing, a device that affects the sense of smell or touch, for example by generating odor, wind, or the like according to the virtual space, may be included.
  • the virtual space generation unit 30 of the server 3 generates the virtual space VS, the avatar 5 to exist in the virtual space VS, and various objects (FIG. 4 / STEP101).
  • the virtual space generation unit 30 generates an image as a background of the virtual space VS. Further, the virtual space generation unit 30 generates an image of the avatar 5 existing in the virtual space VS based on the image of the user U taken by the camera 2. Further, the virtual space generation unit 30 generates images of the first object 6 and the second object 7 based on the image of the item I taken by the camera 2.
  • the avatar control unit 32 of the server 3 determines the state of the avatar 5 in the virtual space VS based on the state of the user U in the real space RS (FIG. 4 / STEP102).
  • the user state recognition unit 31 recognizes the state of the user U in the real space RS based on the state of the user U taken by the camera 2.
  • the avatar control unit 32 determines the state of the avatar 5 based on the recognized state of the user U.
  • the object control unit 34 of the server 3 determines the states of the first object 6 and the second object 7 in the virtual space VS based on the state of the item I in the real space RS (FIG. 4 / STEP103).
  • the item state recognition unit 33 recognizes the state of the item I in the real space RS based on the state of the item I taken by the camera 2. After that, the first object control unit 34a of the object control unit 34 determines the state of the first object 6 based on the recognized state of the item I. Further, the second object control unit 34b of the object control unit 34 determines the state of the second object 7 based on the recognized state of the item I.
  • next, the image determination unit 35a and the voice determination unit 35b of the output information determination unit 35 of the server 3 determine the image and the voice to be recognized by the user U, based on the states of the avatar 5, the first object 6, and the second object 7 in the virtual space VS (FIG. 4 / STEP104).
  • next, the HMD 4 worn by the user U displays the determined image on the monitor 40 and generates the determined voice from the speaker 41 (FIG. 4 / STEP105), and the current processing ends.
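  • the flow of STEP101 to STEP105 can be sketched as follows; the method names on the units and on the hmd object are assumptions layered on top of the description above.

```python
def run_initial_processing(server, camera_images, hmd):
    """Sketch of FIG. 4 (STEP101-105); all method names are illustrative assumptions."""
    # STEP101: generate the virtual space VS, the avatar 5, and the objects 6 and 7.
    space = server.virtual_space_generator.generate(camera_images)
    # STEP102: recognize the user's state and determine the avatar's state from it.
    user_state = server.user_state_recognizer.recognize(camera_images)
    server.avatar_controller.apply(space.avatar, user_state)
    # STEP103: recognize the item's state and determine the states of both objects.
    item_state = server.item_state_recognizer.recognize(camera_images)
    server.object_controller.apply(space.first_object, space.second_object, item_state)
    # STEP104: determine the image and voice from the avatar and object states.
    image, voice = server.output_determiner.determine(space)
    # STEP105: present them on the HMD 4's monitor 40 and speaker 41.
    hmd.monitor.show(image)
    hmd.speaker.play(voice)
```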
  • through the above processing, the user U is brought into a state of recognizing, as if he or she actually existed in it, the virtual space VS in which the first object 6 to be operated exists at a position where its whole image can be seen and the second object 7, which is the operation medium, exists within reach of the avatar 5.
  • the first object 6 to be operated in the present embodiment is composed of the base 60 and the plurality of buildings 61 erected on the base 60. The second object 7 is composed of the plate 70, which has the same shape as, or a shape similar to, the base 60, and the models 71 of the buildings, each having the same shape as, or a shape similar to, the corresponding building 61, mounted on the plate 70.
  • the "operation" for the second object 7 means the rotation of the entire second object 7 around the yaw axis by rotating the plate 70, and the plate by changing the coordinates and posture (direction with respect to the plate 70) of the model 71. 70 Refers to the layout change on.
  • the user state recognition unit 31 determines whether or not the state of the user U has changed (FIG. 6 / STEP201).
  • the avatar control unit 32 changes the state of the avatar 5 based on the change in the state of the user U (FIG. 6 / STEP202).
  • next, the object control unit 34 determines whether or not the change in the state of the avatar 5 (that is, its motion) executed by the avatar control unit 32 is a motion that operates the second object 7, which is the operation medium (FIG. 6 / STEP203).
  • specifically, the object control unit 34 determines whether or not the change in the posture, coordinates, and direction of the avatar 5 with respect to the second object 7 in the virtual space VS corresponds to a predetermined change in posture, coordinates, and direction, and thereby determines whether or not the motion of the avatar 5 is a motion that operates the second object 7.
  • when the motion is determined to be such an operation, the second object control unit 34b of the object control unit 34 changes the posture, coordinates, and direction of the second object 7 based on the operation by the avatar 5 (FIG. 6 / STEP204).
  • the first object control unit 34a of the object control unit 34 changes the posture and coordinates of the first object 6 based on the change of the posture and coordinates of the second object 7 (FIG. 6 / STEP205).
  • for example, suppose that, through the operation of the second object 7 by the avatar 5, the positions of the first model 71a and the second model 71b included in the second object 7 are exchanged and their orientations are adjusted; that is, that the coordinates and postures (their positions and orientations with respect to the plate 70) of the first model 71a and the second model 71b are changed.
  • in that case, the coordinates and orientations of the first building 61a and the second building 61b change from the state shown in FIG. 5 to the state shown in FIG. 7.
  • further, suppose that, as shown in FIG. 8, the direction (the phase around the yaw axis) of the plate 70 included in the second object 7 is changed from the state before the second object 7, which is the operation medium, was operated by the avatar 5, so that the direction of the entire second object 7 changes, and that the coordinates of the third model 71c are then changed.
  • in that case, regardless of the change in the direction of the plate 70, the direction (the phase around the yaw axis) of the base 60 corresponding to the plate 70 does not change from its previous state, and only the coordinates of the third building 61c corresponding to the third model 71c (its position relative to the base 60, the first building 61a, and the second building 61b) change. That is, the direction of the first object 6 with respect to the user U does not change.
  • next, the image determination unit 35a and the voice determination unit 35b of the output information determination unit 35 of the server 3 determine the image and the voice to be recognized by the user U, based on the states of the avatar 5, the first object 6, and the second object 7 in the virtual space VS (FIG. 6 / STEP206).
  • next, the HMD 4 worn by the user U displays the determined image on the monitor 40 and generates the determined voice from the speaker 41 (FIG. 6 / STEP207), and the current processing ends.
  • thereafter, the VR system S repeatedly executes the above processing in a predetermined control cycle until an end instruction from the user U is recognized.
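  • one cycle of the flow of FIG. 6 (STEP201 to STEP207) can be sketched as follows; the proximity test used for STEP203 and all method names are assumptions, since the publication only says that a predetermined change in the avatar's posture, coordinates, and direction relative to the second object 7 is checked.

```python
def run_control_cycle(server, camera_images, hmd, space, grab_range: float = 0.3):
    """Sketch of one control cycle of FIG. 6 (STEP201-207); names are illustrative assumptions."""
    # STEP201: has the state of the user U changed?
    user_state = server.user_state_recognizer.recognize(camera_images)
    if not user_state.changed:
        return
    # STEP202: reflect the change onto the avatar 5.
    server.avatar_controller.apply(space.avatar, user_state)
    # STEP203: is the avatar's motion an operation of the second object 7?
    if space.avatar.hand_distance_to(space.second_object) <= grab_range:
        # STEP204: change the second object 7 according to the operation.
        server.object_controller.update_second(space.second_object, space.avatar)
        # STEP205: mirror the change onto the first object 6 (its direction stays fixed).
        server.object_controller.update_first(space.first_object, space.second_object)
    # STEP206: determine the image and voice to be recognized by the user U.
    image, voice = server.output_determiner.determine(space)
    # STEP207: output them through the monitor 40 and speaker 41 of the HMD 4.
    hmd.monitor.show(image)
    hmd.speaker.play(voice)
```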
  • as described above, the VR system S is configured to change the state of the first object 6 according to the change in the state of the second object 7 caused by the operation. That is, the first object 6 is the operation target, and the second object 7 is the operation medium.
  • the second object 7 has a shape corresponding to the first object 6. Therefore, the user U can perform the same operation on the second object 7, which is the operation medium, as on the first object 6, which is the actual operation target.
  • however, if the direction of the first object 6 and the direction of the second object 7 were completely synchronized, the direction of the first object 6 would switch frequently, and it could, on the contrary, become difficult for the user U to grasp the change in the state of the first object 6 as a whole.
  • therefore, in the VR system S, the posture and coordinates of the first object 6 are changed in correspondence with the second object 7 while the direction of the first object 6 is kept fixed. This makes it easier for the user U to observe the state of the first object 6 while freely moving the second object 7. As a result, the user U can easily grasp the change in the state of the first object 6.
  • in addition, depending on the size of the second object 7, it may be difficult for the user U to cause the avatar 5 corresponding to the user U himself or herself to operate the second object 7.
  • therefore, in the VR system S, the second object 7 has the same shape as, or a shape similar to, the first object 6, and is made smaller than the first object 6 so that it can be operated by the hand of the avatar 5. As a result, the user U can easily cause the avatar 5 to operate the second object 7.
  • in the above embodiment, the shape of the second object 7, which is the operation medium, is the same as that of the first object 6, which is the operation target, but the size of the second object 7 is smaller than that of the first object 6.
  • however, the second object of the present invention is not limited to such a configuration; it suffices that the second object is located at coordinates different from those of the first object in the virtual space, has a shape corresponding to at least a part of the first object, and can be operated by the user through the motion of the avatar.
  • the "corresponding shape” includes not only the same shape, but also a similar shape (that is, a shape having the same shape but a different size), a deformed shape, the same shape as a partially cut out shape, and the like.
  • the first object and the second object may have the same size as well as the shape.
  • the second object may be generated by imitating only a part of the first object.
  • the first object may be the entire car, and the second object may be only the part for designing the car (for example, the internal structure of the bonnet).
  • one second object 7 which is an operation medium is generated for one first object 6 which is an operation target.
  • the object of the present invention is not limited to such a configuration.
  • for example, a plurality of second objects serving as operation media may be generated for one first object to be operated.
  • only one second object may be generated for a plurality of first objects.
  • the posture and coordinates of the first object 6 which is the operation target are changed according to the change of the posture and coordinates of the second object 7 which is the operation medium.
  • the direction of the first object 6 is fixed regardless of the change in the direction of the second object 7.
  • the change in the state of the first object in the present invention is not limited to such a configuration, and may be any one corresponding to the change in the state of the second object.
  • the "state” in the present invention refers to a state that can change according to the intention or operation of the user U.
  • it refers to the operating state of the body or mechanism, the position of coordinates, the posture, the direction, the shape, the color, the size, and the like. Therefore, the "change” of the state refers to the start, progress and stop of the operation, the movement of coordinates, the posture, the direction, the shape, the color, the change in size, and the like.
  • the direction of the first object 6 may also be changed according to the change in the state of the second object 7.
  • the shape, color, size, etc. of the first object may be changed according to the state of the second object.
  • the changes do not necessarily have to be in perfect agreement.
  • the state of the first object may be changed later than the change of the state of the second object.
  • further, only some of the plurality of states may be changed in correspondence with each other. It may also be possible to switch, by an operation performed by the user through the motion of the avatar or the like, between a case in which only some of the states are made to correspond and a case in which all of them are made to correspond.
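  • the delayed and partial correspondence mentioned above could be sketched as follows; the choice of tracked components, the dictionary-based state, and the fixed delay are assumptions introduced for illustration.

```python
import collections

class LaggedPartialMirror:
    """Sketch: the first object follows only a chosen subset of the second
    object's state components, several control cycles late. The defaults
    below are illustrative assumptions."""
    def __init__(self, tracked=("coordinates", "posture"), delay: int = 5):
        self.tracked = set(tracked)          # which state components are made to correspond
        self.delay = delay                   # number of control cycles of lag
        self.history = collections.deque()   # past states of the second object

    def update(self, second_state: dict, first_state: dict) -> dict:
        self.history.append(dict(second_state))
        if len(self.history) > self.delay:
            delayed = self.history.popleft()
            for key in self.tracked:         # untracked components (e.g. direction) stay as they are
                first_state[key] = delayed[key]
        return first_state
```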
  • further, in the above embodiment, the virtual space generation unit 30 generates the second object 7 in correspondence with the recognized item I, and the second object control unit 34b controls the state of the second object 7 according to the state of the item I recognized by the item state recognition unit 33.
  • the virtual space experience system of the present invention is not limited to such a configuration, and the item corresponding to the second object may not exist.
  • in such a configuration, the second object, which is the operation medium, can be freely generated in correspondence with the first object, which is the operation target.
  • in that case, however, the first object and the second object cannot be controlled based on the state of an item as in the above embodiment, so it suffices to control them only according to the motion of the avatar (and, by extension, the change in the state of the user on which the control of the avatar is based).
  • 41 ... speaker (voice generator), 60 ... base, 61 ... building, 61a ... first building, 61b ... second building, 61c ... third building, 70 ... plate, 71 ... model, 71a ... first model, 71b ... second model, 71c ... third model, I ... item, U ... user, RS ... real space, S ... VR system (virtual space experience system), VS ... virtual space.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a virtual space experience system with which an object in a virtual space can be operated without reducing the sense of immersion. A virtual reality (VR) system S comprises: an avatar control unit 32 that controls the state of an avatar in the virtual space according to the state of a user; and an object control unit 34 that controls a first object and a second object in the virtual space. The second object is positioned at coordinates different from those of the first object in the virtual space, has a shape corresponding to the first object, and can be operated by the user through the actions of the avatar. The object control unit 34 changes the state of the first object according to changes in the state of the second object caused by an operation performed by the user.
PCT/JP2020/020547 2020-05-25 2020-05-25 Virtual space experience system WO2021240601A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2020/020547 WO2021240601A1 (fr) 2020-05-25 2020-05-25 Virtual space experience system
JP2021520432A JPWO2021240601A1 (fr) 2020-05-25 2020-05-25

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/020547 WO2021240601A1 (fr) 2020-05-25 2020-05-25 Virtual space experience system

Publications (1)

Publication Number Publication Date
WO2021240601A1 true WO2021240601A1 (fr) 2021-12-02

Family

ID=78723224

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/020547 WO2021240601A1 (fr) 2020-05-25 2020-05-25 Virtual space experience system

Country Status (2)

Country Link
JP (1) JPWO2021240601A1 (fr)
WO (1) WO2021240601A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001249747A * 2000-03-03 2001-09-14 Nec Corp Information display device, information display method, and recording medium recording an information display program
JP2017199237A * 2016-04-28 2017-11-02 株式会社カプコン Virtual space display system, game system, virtual space display program, and game program
JP2018049629A * 2017-10-10 2018-03-29 株式会社コロプラ Method and device for supporting input in a virtual space, and program for causing a computer to execute the method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024095871A1 * 2022-11-02 2024-05-10 Panasonic Intellectual Property Corporation of America Method, server, and imaging device

Also Published As

Publication number Publication date
JPWO2021240601A1 (fr) 2021-12-02

Similar Documents

Publication Publication Date Title
JP6912661B2 Rendering of virtual hand poses based on detected hand input
EP3425481B1 Control device
CN102356373B Virtual object manipulation
TWI412392B Interactive entertainment system and operating method thereof
JP5639646B2 Real-time retargeting of skeleton data to a game avatar
CN102129293B Tracking groups of users in a motion capture system
JP2022549853A Individual viewing in a shared space
US20160225188A1 Virtual-reality presentation volume within which human participants freely move while experiencing a virtual environment
US20170039986A1 Mixed Reality Social Interactions
WO2018118266A1 Telepresence of multiple users in an interactive virtual space
JP2010257461A Method and system for creating a shared game space for a networked game
US20080225041A1 Method and System for Vision-Based Interaction in a Virtual Environment
JP5116679B2 Intensive computer image and audio processing, and input device for interfacing with a computer program
WO2008065458A2 System and method for moving real objects through operations performed in a virtual environment
WO2019087564A1 Information processing device, information processing method, and program
US11334165B1 Augmented reality glasses images in midair having a feel when touched
CN110140100B Three-dimensional augmented reality object user interface functions
WO2020201998A1 Transitioning between an augmented reality scene and a virtual reality representation
WO2021240601A1 Virtual space experience system
WO2021261595A1 VR training system for aircraft, VR training method for aircraft, and VR training program for aircraft
JP6933849B1 Experience-based interface system and motion experience system
CN110363841B Hand motion tracking method in a virtual driving environment
JP6341096B2 Haptic force presentation device, information terminal, haptic force presentation method, and computer-readable recording medium
JP7104539B2 Simulation system and program
JP6933850B1 Virtual space experience system

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2021520432

Country of ref document: JP

Kind code of ref document: A

121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 20938191

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 20938191

Country of ref document: EP

Kind code of ref document: A1