WO2023032264A1 - Information processing device, information processing method, and program - Google Patents
Information processing device, information processing method, and program
- Publication number
- WO2023032264A1 (PCT/JP2022/006581)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- action
- avatar
- virtual object
- information processing
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/216—Input arrangements for video game devices characterised by their sensors, purposes or types using geographical information, e.g. location of the game device or player using GPS
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/35—Details of game servers
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/65—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/70—Game security or game management aspects
- A63F13/75—Enforcing rules, e.g. detecting foul play or generating lists of cheating players
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/70—Game security or game management aspects
- A63F13/79—Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/80—Special adaptations for executing a specific game genre or game mode
- A63F13/825—Fostering virtual characters
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/90—Constructional details or arrangements of video game devices not provided for in groups A63F13/20 or A63F13/25, e.g. housing, wiring, connections or cabinets
- A63F13/92—Video game devices specially adapted to be hand-held while playing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Definitions
- the present disclosure relates to an information processing device, an information processing method, and a program.
- Avatars are represented by, for example, two-dimensional or three-dimensional CG (Computer Graphics).
- Patent Document 1 discloses a technique for reflecting, in a participant's avatar in the virtual space, the actions performed and the objects held in the real world by that participant in a communication session.
- When the user stops operating, unnatural phenomena such as the avatar (virtual object) disappearing from the virtual space, or the avatar not moving at all in the virtual space, may be presented to other users.
- Therefore, the present disclosure proposes an information processing device, an information processing method, and a program capable of causing a virtual object associated with a user in a virtual space to behave more naturally even when the user is not operating.
- According to the present disclosure, an information processing apparatus is proposed that includes a control unit that controls the behavior of a virtual object associated with a user in a virtual space according to the user's operation, wherein, during a non-operation period of the user, the control unit generates behavior of the virtual object based on the user's sensing data in the real space and performs control to reflect the behavior on the virtual object.
- According to the present disclosure, an information processing method is proposed that includes a processor controlling the behavior of a virtual object associated with a user in a virtual space according to the user's operation, and further, during a period in which the user is not operating, generating behavior of the virtual object based on the user's sensing data in the real space and performing control to reflect the behavior on the virtual object.
- According to the present disclosure, a program is proposed that causes a computer to function as a control unit that controls the behavior of a virtual object associated with a user in a virtual space in accordance with the user's operation, wherein, during a period in which the user is not operating, the control unit generates behavior of the virtual object based on the user's sensing data in the real space and performs control to reflect the behavior on the virtual object.
- FIG. 1 is a diagram describing an overview of an information processing system according to an embodiment of the present disclosure.
- FIG. 2 is a diagram illustrating how the user's sensing data in the real space is reflected in the behavior of the user's avatar in the virtual space according to the present embodiment.
- FIG. 3 is a block diagram showing an example of the configuration of the user terminal according to the present embodiment.
- FIG. 4 is a block diagram showing an example of the configuration of the management server according to the present embodiment.
- A sequence diagram showing an example of the flow of operation processing according to the embodiment.
- FIG. 10 is a diagram illustrating an example of expression of an avatar's autonomous action restricted according to the privacy level according to the present embodiment.
- It is a block diagram explaining generation.
- An information processing system relates to control of a virtual object that is associated with a user in a virtual space and serves as an alter ego of the user.
- a virtual object that serves as an alter ego of a user is, for example, a humanoid or non-human character represented by two-dimensional or three-dimensional CG, and is also called an avatar.
- In recent years, communication in virtual space has become widespread, encompassing not only simple communication such as games and conversation but also various forms of communication such as live distribution by artists and trading of in-game content such as 3D models.
- FIG. 1 is a diagram explaining an overview of an information processing system according to an embodiment of the present disclosure.
- The information processing system according to the present embodiment includes one or more user terminals 10 (10A, 10B, 10C, ...) and a virtual space server 20.
- the user terminal 10 is an information processing terminal used by the user.
- The user terminal 10 transmits information on operations input by the user, as well as sensing data, to the virtual space server 20.
- The user terminal 10 also performs control for displaying the video of the user's viewpoint in the virtual space received from the virtual space server 20.
- The user's viewpoint may be the viewpoint of the user's avatar in the virtual space, or a viewpoint whose field of view includes the avatar's appearance.
- the user terminal 10 can be realized by a smartphone, a tablet terminal, a PC (personal computer), an HMD (Head Mounted Display) worn on the head, a projector, a television device, a game machine, or the like.
- the HMD may have a non-transmissive display that covers the entire field of view, or may have a transmissive display.
- Examples of HMDs having a transmissive display unit include glasses-type devices having a so-called AR (Augmented Reality) display function that superimposes and displays virtual objects in real space.
- the HMD may be a device capable of arbitrarily switching the display unit between a non-transmissive type and a transmissive type.
- the user can experience virtual space through VR (Virtual Reality).
- the display unit of the HMD includes a left-eye display and a right-eye display, allowing the user to stereoscopically view an image from the user's viewpoint in the virtual space, thereby providing a more realistic sense of immersion in the virtual space.
- the virtual space server 20 is an information processing device that generates and controls virtual space, and generates and distributes video from arbitrary viewpoints in virtual space.
- the virtual space server 20 may be realized by a single device or by a system composed of a plurality of servers.
- Various 2D or 3D virtual objects are placed in the virtual space.
- An example of a virtual object is each user's avatar.
- The virtual space server 20 can control each user's avatar in real time based on information received from each user terminal 10.
- Each user can view video from a user's point of view (for example, a user's avatar's point of view) in virtual space using the user terminal 10 and communicate with other users via the avatar.
- The virtual space server 20 can also control transmission of the voice received from a user terminal 10 (the user's uttered voice) to other user terminals 10 corresponding to other user avatars near that user's avatar. This enables voice conversations between avatars in the virtual space. Conversation between avatars is not limited to voice, and may also be conducted in text.
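The proximity-based voice routing described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the distance threshold and the data shapes (a dictionary of avatar positions) are assumptions.

```python
import math

def nearby_recipients(speaker_id, avatar_positions, radius=5.0):
    """Return the ids of users whose avatars are within `radius` of the
    speaker's avatar; only these users would receive the speaker's voice."""
    speaker_pos = avatar_positions[speaker_id]
    return sorted(uid for uid, pos in avatar_positions.items()
                  if uid != speaker_id and math.dist(speaker_pos, pos) <= radius)
```

The server would call this for each voice packet (or on avatar movement) and forward audio only to the returned user terminals.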
- The avatar (virtual object) placed in the virtual space is operated by the user in real time, so when the user stops operating it, the avatar may disappear from the virtual space or remain completely motionless. If such a phenomenon, which would be unnatural in the real space, occurs, there is a risk that other users will feel uncomfortable with the virtual space. In particular, in the case of the Metaverse, which is used as a second living space, it is not desirable for the avatar to suddenly disappear or become completely motionless.
- The present embodiment therefore causes the avatar, which is a virtual object associated with the user in the virtual space, to behave more naturally even when the user is not operating it.
- the virtual space server 20 avoids an unnatural state by causing the avatar to act autonomously while the user is not operating.
- However, uniform behavior control by a simple autopilot is not sufficient for a more natural expression of avatar behavior.
- In the present embodiment, therefore, behavior of the avatar associated with the user is generated based on the user's sensing data in the real space, and control is performed to reflect that behavior on the avatar.
- This realizes more natural autonomous behavior of the avatar, and because the user's behavior in the real space is reflected in the user's avatar, the gap between the user's own state in the virtual space and the state in the real space is reduced, making it possible to reduce the user's sense of incongruity with the avatar's autonomous action.
- FIG. 2 is a diagram explaining how the user's sensing data in the real space is reflected in the behavior of the user's avatar 4 in the virtual space according to this embodiment.
- various sensors sense the user's state while shopping in the real space.
- The virtual space server 20 generates “shopping behavior” from the sensing data and reflects it on the behavior of the avatar 4.
- For example, the virtual space server 20 controls the behavior of the avatar 4 to purchase a specified product at a specified store in the virtual space.
- the product to be purchased may be a product that the user has previously added to the planned purchase/favorite list, or may be appropriately determined based on the user's tastes and preferences, action history in the virtual space, tasks, and the like.
- a predetermined item may be purchased free of charge as a reward from the service side for the autonomous action.
- Shopping behaviors that do not involve actual purchases, such as avatars entering and exiting stores in virtual space and window shopping around a number of stores, may also be used.
- the avatar in the virtual space autonomously performs natural actions, which reduces the discomfort felt by other users.
- the user's behavior in the real space is reflected in the user's avatar, the user's sense of incongruity with respect to the autonomous behavior of the avatar is reduced.
- FIG. 3 is a block diagram showing an example of the configuration of the user terminal 10 according to this embodiment.
- The user terminal 10 has a communication section 110, a control section 120, an operation input section 130, a motion sensor 140, a positioning section 150, a display section 160, an audio output section 170, and a storage section 180.
- the user terminal 10 may be implemented by, for example, a wearable device such as a transparent or non-transparent HMD, smart phone, tablet terminal, smart watch, or smart band.
- the communication unit 110 communicates with the virtual space server 20 by wire or wirelessly to transmit and receive data.
- The communication unit 110 can perform communication using, for example, a wired/wireless LAN (Local Area Network), Wi-Fi (registered trademark), Bluetooth (registered trademark), infrared communication, or a mobile communication network (4G (fourth-generation mobile communication system), 5G (fifth-generation mobile communication system)), or the like.
- control unit 120 functions as an arithmetic processing device and a control device, and controls overall operations within the user terminal 10 according to various programs.
- the control unit 120 is realized by an electronic circuit such as a CPU (Central Processing Unit), a microprocessor, or the like.
- the control unit 120 may also include a ROM (Read Only Memory) that stores programs to be used, calculation parameters, and the like, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate.
- The control unit 120 performs control to display, on the display unit 160, the video of the user's viewpoint in the virtual space transmitted (for example, streamed) from the virtual space server 20.
- The control unit 120 also controls reproduction, from the audio output unit 170, of the audio signal transmitted from the virtual space server 20 together with the video of the user's viewpoint.
- The control unit 120 controls transmission of information acquired by the operation input unit 130, the motion sensor 140, and the positioning unit 150 from the communication unit 110 to the virtual space server 20. For example, various operation information input from the operation input unit 130 is transmitted to the virtual space server 20 as input information for user operations on the virtual space.
- Motion data acquired by the motion sensor 140 can be transmitted to the virtual space server 20 as information for controlling the position and posture (orientation of the face, etc.) of the avatar.
- In other words, the operation input unit 130 is a device used to operate the avatar, whereas the motion sensor 140 and the positioning unit 150 are devices that sense the user's real space and are used to generate the autonomous behavior of the avatar during the avatar's non-operation period.
- the control unit 120 also functions as a state recognition unit 121.
- the state recognition unit 121 recognizes the user's state based on the user's sensing data acquired by the motion sensor 140 .
- the state of the user is, for example, walking, running, standing, sitting, or sleeping.
- the state of the user recognized by the state recognition unit 121 is transmitted to the virtual space server 20 by the control unit 120, and used in the virtual space server 20 when generating the autonomous action of the avatar.
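As a minimal sketch of the kind of rule-based recognition the state recognition unit 121 might perform on motion data. The thresholds, the accelerometer-magnitude representation, and the function name are assumptions for illustration, not taken from the disclosure.

```python
from statistics import stdev

def recognize_state(accel_magnitudes: list[float]) -> str:
    """Classify a coarse user state from a window of accelerometer
    magnitudes (in g). Thresholds are illustrative placeholders."""
    if len(accel_magnitudes) < 2:
        return "unknown"
    activity = stdev(accel_magnitudes)  # variability tracks movement intensity
    if activity < 0.02:
        # Almost no movement: sitting (or standing/sleeping; biometric
        # data such as heart rate could disambiguate these states).
        return "sitting"
    elif activity < 0.3:
        return "walking"
    return "running"
```

In practice such recognition could also run on the server side, as the text notes later; the same windowed classification applies either way.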
- the location information acquired by the location positioning unit 150 is also used in the virtual space server 20 when generating autonomous actions of the avatar.
- the control unit 120 may transmit the location information to the virtual space server 20 together with the state of the user, or may transmit the location information when a change (movement) in the location is detected.
- The control unit 120 may transmit the user's state and location information to the virtual space server 20 during a non-operation period in which the user does not operate the avatar in the virtual space. Also, the control unit 120 may specify the name of the place where the user is by combining the positional information with map information, and transmit that name to the virtual space server 20.
- The place name may be a general name. For example, if it is specified that the user is in “XX park in XX city”, the virtual space server 20 may simply be notified of “park”. This can protect the user's privacy. Map information may be stored in the storage unit 180 in advance.
- the map information is not limited to outdoor map information, and includes indoor map information such as inside a school, inside a company, inside a department store, inside one's home, and the like.
- Control unit 120 can also identify which room the user is in, such as a bedroom or a living room, from the position information.
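The privacy-preserving generalization of place names described above can be sketched as a simple lookup; the mapping entries and function name are hypothetical illustrations, not taken from the disclosure.

```python
# Map a specific place name (from positioning plus map information) to a
# generic label before it is sent to the virtual space server.
GENERIC_PLACE = {
    "XX park in XX city": "park",
    "bedroom at home": "bedroom",
    "YY department store": "shop",
}

def generalize_place(place_name: str) -> str:
    # Fall back to a vague label rather than leaking the raw place name.
    return GENERIC_PLACE.get(place_name, "unspecified")
```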
- The control unit 120 may use the user's sensing data obtained via the communication unit 110 from an external sensor (for example, a camera installed around the user, or a motion sensor worn by the user separately from the user terminal 10) for recognizing the user's state and specifying the location, or may transmit such data to the virtual space server 20 as it is.
- control unit 120 may transmit the operation information received from the controller held by the user to the virtual space server 20 .
- The operation input unit 130 receives operation instructions from the user and outputs the operation content to the control unit 120.
- the operation input unit 130 may be, for example, a touch sensor, a pressure sensor, or a proximity sensor.
- the operation input unit 130 may be a physical configuration such as buttons, switches, and levers.
- the motion sensor 140 has a function of sensing the motion of the user. More specifically, motion sensor 140 may have an acceleration sensor, an angular velocity sensor, and a geomagnetic sensor. Furthermore, the motion sensor 140 may be a sensor capable of detecting a total of 9 axes, including a 3-axis gyro sensor, a 3-axis acceleration sensor, and a 3-axis geomagnetic sensor.
- the motion of the user includes motion of the user's body and motion of the head. More specifically, the motion sensor 140 senses the movement of the user terminal 10 worn by the user as the movement of the user. For example, when the user terminal 10 is configured by an HMD and worn on the head, the motion sensor 140 can sense the movement of the user's head.
- the motion sensor 140 can sense the movement of the user's body.
- the motion sensor 140 may be a wearable device configured separately from the user terminal 10 and worn by the user.
- the positioning unit 150 has a function of acquiring the current position of the user. In this embodiment, it is assumed that the user possesses the user terminal 10, and the position of the user terminal 10 is regarded as the current position of the user.
- The positioning unit 150 calculates the absolute or relative position of the user terminal 10.
- For example, the positioning unit 150 may measure the current position based on a signal acquired from the outside, such as a GNSS (Global Navigation Satellite System) signal.
- Alternatively, a method of detecting position by transmission and reception with Wi-Fi (registered trademark), Bluetooth (registered trademark), a mobile phone/PHS/smartphone, or the like, or by short-distance communication, may be used.
- the positioning unit 150 may estimate information indicating relative changes based on the detection results of an acceleration sensor, an angular velocity sensor, or the like.
- the positioning unit 150 can perform outdoor positioning and indoor positioning using the various methods described above.
- the position may include an altitude.
- Position positioning unit 150 may include an altimeter.
- the display unit 160 has a function of displaying a video (image) of the user's viewpoint in the virtual space.
- the display unit 160 may be a display panel such as a liquid crystal display (LCD) or an organic EL (Electro Luminescence) display.
- The audio output section 170 outputs an audio signal under the control of the control section 120.
- the audio output unit 170 may be configured as headphones, earphones, or bone conduction speakers, for example.
- the storage unit 180 is implemented by a ROM (Read Only Memory) that stores programs, calculation parameters, and the like used in the processing of the control unit 120, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate.
- the storage unit 180 according to this embodiment may store, for example, an algorithm for state recognition.
- Although the configuration of the user terminal 10 has been specifically described above, the configuration of the user terminal 10 according to the present disclosure is not limited to the example shown in FIG. 3.
- the user terminal 10 may be realized by multiple devices.
- the motion sensor 140 or the positioning unit 150 and the control unit 120 may be configured separately.
- the user terminal 10 may further have various sensors.
- For example, the user terminal 10 may have a camera, a microphone, a biosensor (a detection unit for pulse, heart rate, perspiration, blood pressure, body temperature, respiration, myoelectric value, electroencephalogram, etc.), a gaze detection sensor, a distance measurement sensor, and the like, and the obtained information may be transmitted to the virtual space server 20.
- the state recognition unit 121 may recognize the user's state (running, walking, sleeping, etc.) in consideration of not only motion data but also biometric data acquired by a biosensor, for example.
- the control unit 120 may analyze the captured image around the user acquired by the camera, and specify the user's position (the name of the place where the user is).
- FIG. 4 is a block diagram showing an example of the configuration of the virtual space server 20 according to this embodiment. As shown in FIG. 4, the virtual space server 20 has a communication section 210, a control section 220, and a storage section 230.
- the communication unit 210 transmits and receives data to and from an external device by wire or wirelessly.
- The communication unit 210 connects to and communicates with the user terminal 10 using, for example, a wired/wireless LAN (Local Area Network), Wi-Fi (registered trademark), Bluetooth (registered trademark), or a mobile communication network (LTE (Long Term Evolution), 4G (fourth-generation mobile communication system), 5G (fifth-generation mobile communication system)), or the like.
- control unit 220 functions as an arithmetic processing device and a control device, and controls overall operations within the virtual space server 20 according to various programs.
- the control unit 220 is implemented by an electronic circuit such as a CPU (Central Processing Unit), a microprocessor, or the like.
- the control unit 220 may also include a ROM (Read Only Memory) that stores programs to be used, calculation parameters, and the like, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate.
- the control unit 220 also functions as an avatar behavior generation unit 221 and an avatar control unit 222.
- the avatar behavior generation unit 221 has a function of generating the avatar's autonomous behavior based on the user's sensing data in the real space during the non-operation period.
- The avatar control unit 222 has a function of controlling the user's avatar according to the autonomous behavior generated by the avatar behavior generation unit 221.
- the user's sensing data is, for example, at least one of the user's state and location information. Sensing data may be transmitted from the user terminal 10 .
- the information detected by the motion sensor 140 or the positioning unit 150 may be transmitted directly from the user terminal 10, or the recognition result recognized based on the information may be transmitted.
- a user's state can be recognized from the motion data, as described above. Such recognition may be performed by the user terminal 10 or by the control unit 220 of the virtual space server 20 .
- the location information may be the name of the place.
- By reflecting the user's state and position information in the real space in the avatar's autonomous action, it is possible to reduce the user's sense of incongruity with respect to the avatar's autonomous action during the non-operation period.
- Such a sense of incongruity may occur in a familiar virtual space such as the Metaverse when, for example, the user's avatar behaves arbitrarily in a way that has nothing to do with the user's actual behavior.
- the avatar action generation unit 221 may refer to a database of avatar actions based on the user state and position information obtained by sensing the user, for example, to generate autonomous actions of the avatar.
- An example of an avatar behavior database is shown in FIG.
- a database is used in which avatar actions are associated in advance with user states and positions. It can be said that states and positions (locations) are factors that constitute avatar behavior.
- the action “eating” is composed of the state of "sitting” and the position (place) factors of "living room at home” and "restaurant”.
- The factors that make up each avatar action are desirably factors of the user action that matches that avatar action. For example, as shown in FIG. 5, the factors of the avatar behavior “sleeping” include “state: sleep” and “position: bedroom”, which constitute the matching user behavior “sleeping”.
- the avatar action defined here is an action that the avatar can perform in the virtual space.
- the correspondence relationship between the avatar behavior shown in FIG. 5 and each factor such as state and position is an example, and the present embodiment is not limited to this.
- The avatar action generation unit 221 matches the user state and position information acquired from the user terminal 10 against the avatar action database and calculates a matching rate. For example, if the information obtained from the user terminal 10 is “state: walking” and “position: shop”, then among the avatar actions listed in the database of FIG. 5, the matching rate for “shopping” is calculated as 100% and the matching rate for “defeating the enemy” as 50%, because only the state factor, of the state and position factors of “defeat the enemy”, is satisfied.
- In this case, the avatar action generation unit 221 may determine the completely corresponding avatar action (matching rate of 100%), here “shopping”, as the autonomous action of the avatar.
- alternatively, the avatar action generation unit 221 may stochastically select from among the avatar action candidates that include at least one of the matching state factor and position factor. In this case, by increasing the selection probability of candidates that include more matching factors, the avatar action generation unit 221 can determine a natural autonomous action that causes little incongruity with respect to the user's action in the real space. Note that even when there is a completely corresponding avatar action (matching rate of 100%), the avatar action generation unit 221 may still make the determination probabilistically.
- the avatar action generation unit 221 may also select probabilistically from among a plurality of avatar action candidates that all completely correspond (matching rate of 100%).
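As a rough sketch (not the patent's actual implementation), the matching-rate calculation and weighted probabilistic selection described above might look like the following; the database contents and all function names are illustrative assumptions modeled on FIG. 5:

```python
import random

# Hypothetical avatar-action database in the spirit of FIG. 5:
# each candidate lists the state/position factors that compose it.
ACTION_DB = {
    "shopping":        {"state": {"walking"}, "position": {"shop"}},
    "defeat an enemy": {"state": {"walking"}, "position": {"field"}},
    "eating":          {"state": {"sitting"}, "position": {"living room", "restaurant"}},
    "sleeping":        {"state": {"sleep"},   "position": {"bedroom"}},
}

def matching_rate(action_factors, state, position):
    """Fraction of the action's two factor categories (state, position)
    matched by the sensed user data."""
    hits = 0
    if state in action_factors["state"]:
        hits += 1
    if position in action_factors["position"]:
        hits += 1
    return hits / 2  # two factor categories: state and position

def choose_action(state, position, rng=random):
    """Probabilistically pick one candidate, weighting better matches higher."""
    rates = {a: matching_rate(f, state, position) for a, f in ACTION_DB.items()}
    candidates = {a: r for a, r in rates.items() if r > 0}
    if not candidates:
        return None
    actions = list(candidates)
    weights = [candidates[a] for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]
```

With the sensed data “state: walking” and “position: shop”, `matching_rate` gives “shopping” 100% and “defeat an enemy” 50%, and `choose_action` favors “shopping” without making the choice fully deterministic.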
- the categories of states and categories of positions shown in FIG. 5 are examples, and the present embodiment is not limited to these.
- the avatar behavior database according to this embodiment can be reused in multiple virtual spaces for different services. That is, a database such as that shown in FIG. 5 may be used to generate autonomous actions of an avatar during non-operation periods in another virtual space. Information in the database can be shared through cooperation between virtual spaces. In addition, if a virtual space has inappropriate or insufficient autonomous actions, the avatar actions defined for that virtual space can be corrected, changed, or added to as appropriate, creating an appropriate behavior database for each virtual space. For example, the avatar action “defeating an enemy” may be changed to “cultivating a field” without changing its constituent factors. The user's avatar can also move to another virtual space through cooperation between the virtual spaces.
- the user terminal 10 may transmit user sensing data to multiple virtual space servers 20. This makes it possible to refer to the database in each of the multiple virtual spaces and to reflect the user's behavior in the real space in the autonomous behavior of the user's avatar in each of them.
- the method is not particularly limited.
- the detected information may be applied to the avatar in the virtual space as it is.
- the "user's sensing data" to be reflected in the avatar is not limited to the state or position.
- the "state” is not limited to states recognized based on motion data. For example, the state may be recognized based on the user's uttered voice (conversation) picked up by the microphone of the user terminal 10 or biological information (heart rate, blood pressure, body temperature, etc.).
- the storage unit 230 is implemented by a ROM (Read Only Memory) that stores programs and calculation parameters used in the processing of the control unit 220, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate. According to this embodiment, the storage unit 230 stores information on the virtual space.
- although the configuration of the virtual space server 20 has been specifically described above, the configuration of the virtual space server 20 according to the present disclosure is not limited to the example shown in FIG. 4.
- the virtual space server 20 may be realized by multiple devices.
- FIG. 6 is a sequence diagram showing an example of the flow of operation processing according to this embodiment. Note that the processing shown in FIG. 6 is performed during a non-operation period in which the user does not operate the avatar (for example, when the user has logged out, has closed the screen displaying the video of the virtual space, or has not performed any operation for a certain period of time).
- the user terminal 10 first acquires the movement and position of the user from each sensor (step S103). Specifically, the motion sensor 140 acquires the motion of the user, and the position measurement unit 150 acquires position information.
- the state recognition unit 121 of the user terminal 10 recognizes the user's state based on the user's movement (motion data) (step S106).
- the user terminal 10 transmits the location information (which may be the general name of the location) and the state recognition result to the virtual space server 20 (step S109).
- the virtual space server 20 generates the user's avatar behavior based on the location information and the status recognition result received from the user terminal 10 (step S121).
- the avatar behavior generator 221 of the virtual space server 20 may generate avatar behavior based on at least one of the position information and the state recognition result.
- the avatar control unit 222 of the virtual space server 20 applies the avatar behavior generated (selected) by the avatar behavior generation unit 221 to the user's avatar and controls it (step S124).
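The flow of steps S103–S124 above can be sketched in a few lines; all function and parameter names here are hypothetical stand-ins for the user terminal 10 and virtual space server 20 components, not the actual implementation:

```python
def user_terminal_step(motion_sensor, position_unit, recognize_state):
    """S103/S106/S109: sense motion and position on the user terminal,
    recognize the user's state, and build the payload sent to the server."""
    motion = motion_sensor()          # S103: raw motion data
    position = position_unit()        # S103: e.g. a general place name like "shop"
    state = recognize_state(motion)   # S106: e.g. "walking"
    return {"state": state, "position": position}  # S109: transmitted payload

def server_step(payload, generate_action, apply_to_avatar):
    """S121/S124: generate an avatar action from the payload and apply it."""
    action = generate_action(payload["state"], payload["position"])  # S121
    return apply_to_avatar(action)                                   # S124
```

The point of the split is that only the coarse recognition result and place name cross the network, while the raw sensor data stays on the terminal.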
- This allows the user's avatar to act autonomously even during a period in which the user is not operating, thereby reducing unnaturalness.
- by reflecting the user's behavior in the real space in the autonomous behavior of the avatar, it is possible to reduce the user's sense of incongruity toward, and resistance to, his or her own avatar.
- the movement and position of the user are acquired by the user terminal 10 (or the wearable device worn by the user), thereby alleviating restrictions on the measurement range.
- the virtual space server 20 presets a privacy level for each action defined as an avatar autonomous action, for example, and shows (displays) the action only up to the permitted level according to the degree of familiarity between the user and each other user.
- Table 1 below is an example of the privacy level set for the autonomous action of the avatar.
- a higher privacy level is set for highly private actions such as “shopping”, which involves going out.
- the privacy level may be arbitrarily set by the user.
- the user determines to what level the expression of the avatar's behavior is permitted (whether it is shown) to other users in the virtual space. Such permission may be set individually for each other user, or may be set for each group by grouping other users in advance. For example, users with a close relationship may be permitted up to the highest privacy level (e.g., level 3), while other users without a close relationship may be permitted only up to the lowest privacy level (e.g., level 0).
- when the selected action is not permitted to be shown to a viewing user, the avatar action generation unit 221 can select a general-purpose action instead.
- a general-purpose action is, for example, an action that is randomly selected from autonomous action candidates defined in an avatar action database. Alternatively, the action may be randomly selected from a large number of autonomous action candidates prepared as general-purpose actions.
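A minimal sketch of this per-viewer gating, assuming illustrative privacy levels in the spirit of Table 1 and illustrative permission settings (none of these values or names come from the specification):

```python
import random

# Hypothetical privacy level per autonomous action, and the level each
# viewing user is permitted (e.g. close friend vs. stranger).
PRIVACY_LEVEL = {"shopping": 3, "working": 2, "eating": 1, "walking": 0}
PERMITTED = {"user_b": 3, "user_c": 0}

def action_shown_to(viewer, action, rng=random):
    """Show the real autonomous action only to viewers permitted its privacy
    level; otherwise substitute a randomly chosen general-purpose action
    whose level the viewer is allowed to see."""
    allowed = PERMITTED.get(viewer, 0)
    if allowed >= PRIVACY_LEVEL[action]:
        return action
    generic = [a for a, lvl in PRIVACY_LEVEL.items() if lvl <= allowed]
    return rng.choice(generic)
```

For the FIG. 7 scenario, `action_shown_to("user_b", "shopping")` returns the real action, while user C (permitted only level 0) sees a harmless substitute.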
- FIG. 7 is a diagram explaining an example of expression of autonomous actions of avatars that are restricted according to the privacy level according to this embodiment.
- user B avatar 4b
- user C avatar 4c
- the virtual space server 20 refers to the database shown in FIG. 5 based on the data sensed from the user in the real space (state: walking, place: shop), and determines “shopping” as the autonomous action of the user's avatar 4a.
- as shown in Table 1, since the privacy level of “shopping” is “level 3”, the avatar 4a is shown “shopping” to user B, who is permitted up to privacy level 3.
- the virtual space server 20 controls how the autonomous behavior of the avatar 4a is displayed, according to the permitted privacy level, when generating the video of each user's viewpoint to be transmitted to the user terminals of user B and user C.
- the general-purpose action described above may also be selected by a learning-based method using the action history of each avatar in the virtual space, in addition to the method of selecting at random.
- FIG. 8 is a configuration diagram illustrating generation of a general-purpose action according to a modified example of this embodiment.
- the avatar action history DB 182 shown in FIG. 8 is a database that accumulates the action history (including time axis information) of all avatars in the virtual space.
- the information accumulated in the avatar action history DB 182 may be, for example, avatar autonomous actions that reflect the user's actions in real space.
- when generating a general-purpose action, the avatar action generation unit 221 refers to the current time axis information and the avatar action history DB 182, and acquires information on the proportion of each autonomous action performed by avatars at the corresponding time. The avatar action generation unit 221 then determines an action with a higher proportion as the general-purpose action (selecting probabilistically based on the proportion information). As a result, the avatar can be made to behave in the same way as the majority of avatars, protecting the user's privacy in a more natural and unobtrusive manner.
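A sketch of this history-weighted selection, with the avatar action history DB 182 stood in by a simple list of (hour, action) records; the data and function name are illustrative assumptions:

```python
import random
from collections import Counter

def generic_action_from_history(history, hour, rng=random):
    """Pick a general-purpose action probabilistically, weighted by how often
    each autonomous action appears in the avatar action history at this hour.
    `history` is a hypothetical stand-in for the avatar action history DB 182:
    a list of (hour, action) records covering all avatars."""
    counts = Counter(action for h, action in history if h == hour)
    if not counts:
        return None
    actions = list(counts)
    weights = [counts[a] for a in actions]  # higher proportion -> higher probability
    return rng.choices(actions, weights=weights, k=1)[0]

# At 23:00 most avatars in this toy history are sleeping, so "sleeping"
# is chosen most of the time and the substitute action blends in.
history = [(23, "sleeping")] * 80 + [(23, "working")] * 20 + [(12, "eating")] * 50
```

Weighting by proportion rather than always taking the maximum keeps the crowd of substituted avatars from all doing the identical action at once.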
- in the embodiment described above, autonomous behavior is determined stochastically by calculating the matching rate between the user's sensing data (state, position) in the real space and each avatar behavior candidate in the database.
- by setting a threshold for the matching rate, or by adding noise when calculating the matching rate, it is possible to select an appropriate autonomous action while protecting privacy. The strength of the privacy protection can be tuned by adjusting the threshold value and the intensity of the noise.
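The threshold-plus-noise idea can be sketched as below; the parameter values are illustrative assumptions, not figures from the specification:

```python
import random

def noisy_match(rate, threshold=0.5, noise=0.2, rng=random):
    """Perturb a matching rate with uniform noise, then apply a threshold.
    A higher threshold or stronger noise means stronger privacy: the action
    chosen for the avatar is less tightly coupled to the user's real
    state and position, so an observer can infer less from it."""
    perturbed = rate + rng.uniform(-noise, noise)
    return perturbed >= threshold

# With noise=0.0 this reduces to a pure threshold test; increasing `noise`
# randomizes which candidates pass, trading fidelity for privacy.
```

This is the same trade-off knob as the privacy levels above, but continuous rather than discrete.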
- the virtual space server 20 may set a reward for the avatar's autonomous action. For example, shopping actions may yield items usable in the virtual space; working or defeating enemies may yield experience points, currency, or the like in the virtual space; and actions at home may yield rewards such as recovery of the stamina used in the virtual space.
- when the avatar moves in the virtual space through autonomous action, the movement information and video from the avatar's viewpoint may be recorded so that the user can review the video and other records when resuming operation; such review may itself serve as a reward.
- Such rewards can promote an increase in the number of users who use autonomous behavior control.
- the avatar action generation unit 221 of the virtual space server 20 can control the expression of the avatar's autonomous action according to the time zone of the viewing user, and reduce the unnaturalness caused by the different time zones.
- the virtual space server 20 prepares an avatar action history DB (including time axis information) for each time zone.
- a matching action (for example, “sleeping”) is extracted from the avatar action history DB.
- the virtual space server 20 may reflect the user's information in the real space on the appearance of the avatar.
- each candidate for the avatar action as described with reference to FIG. 5 may be associated with the appearance of the avatar.
- the avatar control unit 222 of the virtual space server 20 can appropriately change the appearance of the avatar when controlling the autonomous action of the avatar. Since each action corresponds to the appearance, the appearance can be changed in the same way when the action is generated in the case of privacy protection as described above (generation of general-purpose action).
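Since each action candidate can carry an associated appearance, the change amounts to a lookup when the action is applied. A minimal sketch, with a hypothetical action-to-outfit mapping (e.g., pajamas for “sleeping”) and an avatar modeled as a plain dict:

```python
# Hypothetical action-to-appearance mapping; names are illustrative
# assumptions, not defined by the specification.
APPEARANCE = {"sleeping": "pajamas", "working": "suit", "shopping": "casual wear"}

def apply_autonomous_action(avatar, action):
    """Apply an autonomous action and switch the avatar's outfit to the one
    associated with that action, keeping the current outfit as a fallback
    when the action has no associated appearance."""
    avatar["action"] = action
    avatar["outfit"] = APPEARANCE.get(action, avatar.get("outfit", "default"))
    return avatar
```

Because the lookup keys on the action finally applied, a substituted general-purpose action automatically brings its own appearance, so the privacy substitution stays consistent visually as well.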
- each user terminal 10 may generate a virtual space and generate and display an image of the user's viewpoint in the virtual space.
- Information for generating the virtual space is obtained in advance from the virtual space server 20 .
- each user terminal 10 transmits the information of the operation input by the user, sensing data, etc. to the virtual space server 20 in real time.
- the virtual space server 20 controls the transmission of the information regarding the movement of the user avatar received from the user terminal 10 to other user terminals 10 .
- the virtual space server 20 also transmits avatar autonomous control information as needed.
- (1) An information processing device comprising a control unit that controls, according to a user's operation, the behavior of a virtual object associated with the user in a virtual space, wherein, during a period in which the user performs no operation, the control unit generates a behavior of the virtual object based on sensing data of the user in the real space and performs control to reflect the behavior in the virtual object.
- (2) The information processing device according to (1), wherein the sensing data includes information on at least one of the user's state and position.
- (3) The information processing device according to (1) or (2), wherein the control unit generates the behavior of the virtual object by referring to a database in which behavior candidates of the virtual object are associated with at least one of one or more states and positions.
- (4) The information processing device according to (3), wherein, in the referencing, the control unit calculates a matching rate between each behavior candidate defined in the database and the sensing data, and generates the behavior of the virtual object by selecting one behavior from the behavior candidates based on the matching rate.
- (5) The information processing device according to (3) or (4), wherein the control unit generates the behavior of the virtual object according to a privacy level set for each behavior candidate.
- (6) The information processing device according to (5), wherein, when the privacy level of the one behavior selected from the behavior candidates is a level not permitted to another user viewing the avatar that is the virtual object, the control unit performs control to generate a general-purpose behavior as the behavior of the avatar.
- (7) The information processing device according to (6), wherein the control unit randomly selects the general-purpose behavior from the behavior candidates.
- (8) The information processing device according to (6), wherein the control unit generates the general-purpose behavior based on the behavior history of each avatar in the virtual space.
- (9) The information processing device according to any one of (1) to (8), wherein, when reflecting the generated behavior in the virtual object, the control unit performs control to change the appearance of the virtual object to an appearance associated with the generated behavior.
- (10) The information processing device according to any one of (1) to (9), wherein the control unit generates an image of the user's viewpoint in the virtual space and performs control to transmit the image to a user terminal.
- (11) The information processing device according to any one of (1) to (10), further comprising a communication unit, wherein the communication unit receives the sensing data from a user terminal.
- (12) An information processing method including: a processor controlling, according to a user's operation, the behavior of a virtual object associated with the user in a virtual space; and further, during a period in which the user performs no operation, generating a behavior of the virtual object based on sensing data of the user in the real space and performing control to reflect the behavior in the virtual object.
- (13) A program for causing a computer to function as a control unit that controls, according to a user's operation, the behavior of a virtual object associated with the user in a virtual space, wherein, during a period in which the user performs no operation, the control unit generates a behavior of the virtual object based on sensing data of the user in the real space and performs control to reflect the behavior in the virtual object.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- Computer Security & Cryptography (AREA)
- General Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- Radar, Positioning & Navigation (AREA)
- Environmental & Geological Engineering (AREA)
- Information Transfer Between Computers (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
1. Overview of an information processing system according to an embodiment of the present disclosure
2. Configuration examples
2-1. Configuration example of the user terminal 10
2-2. Configuration example of the virtual space server 20
3. Operation processing
4. Modifications
5. Supplement
An information processing system according to an embodiment of the present disclosure relates to control of a virtual object that is associated with a user in a virtual space and serves as the user's alter ego. The virtual object serving as the user's alter ego is, for example, a human or non-human character expressed by two-dimensional or three-dimensional CG, and is also referred to as an avatar. In recent years, communication in virtual spaces has become widespread, going beyond simple communication such as games and conversation to various forms of communication for business purposes, such as live streaming by artists and trading of in-game content such as 3D models. There is also a trend for various events such as exhibitions, which until now have been held in the real world, to be held in virtual spaces using avatars without visiting the site, and such spaces are attracting attention as a second living space after the real space. A virtual world on the Internet that virtualizes the real space in this way is commonly called the metaverse.
An avatar (virtual object) placed in the virtual space is operated by the user in real time, but when the avatar is no longer controlled, such as when the user logs out or stops operating it, the avatar may suddenly disappear from the virtual space or fall into a completely motionless state. When such a phenomenon occurs, which would be unnatural in the real space, it may give other users a sense of incongruity toward the virtual space. In particular, in the case of a metaverse used as a second living space, an unnatural state in which an avatar suddenly disappears or stops moving entirely is undesirable.
<2-1. Configuration Example of User Terminal 10>
FIG. 3 is a block diagram showing an example of the configuration of the user terminal 10 according to this embodiment. As shown in FIG. 3, the user terminal 10 includes a communication unit 110, a control unit 120, an operation input unit 130, a motion sensor 140, a position measurement unit 150, a display unit 160, an audio output unit 170, and a storage unit 180. The user terminal 10 may be realized by, for example, a transmissive or non-transmissive HMD, a smartphone, a tablet terminal, or a wearable device such as a smartwatch or smart band.
The communication unit 110 connects to the virtual space server 20 by wire or wirelessly to transmit and receive data. The communication unit 110 can communicate using, for example, a wired/wireless LAN (Local Area Network), Wi-Fi (registered trademark), Bluetooth (registered trademark), infrared communication, or a mobile communication network (4G (fourth-generation mobile communication system), 5G (fifth-generation mobile communication system)), or the like.
The control unit 120 functions as an arithmetic processing device and a control device, and controls overall operations within the user terminal 10 according to various programs. The control unit 120 is realized by an electronic circuit such as a CPU (Central Processing Unit) or a microprocessor. The control unit 120 may also include a ROM (Read Only Memory) that stores the programs, calculation parameters, and the like to be used, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate.
The operation input unit 130 receives operation instructions from the user and outputs the operation content to the control unit 120. The operation input unit 130 may be, for example, a touch sensor, a pressure sensor, or a proximity sensor. Alternatively, the operation input unit 130 may have a physical configuration such as buttons, switches, and levers.
The motion sensor 140 has a function of sensing the user's movement. More specifically, the motion sensor 140 may include an acceleration sensor, an angular velocity sensor, and a geomagnetic sensor. Furthermore, the motion sensor 140 may be a sensor capable of detecting a total of nine axes: a three-axis gyro sensor, a three-axis acceleration sensor, and a three-axis geomagnetic sensor. The user's movement includes the movement of the user's body and head. More specifically, the motion sensor 140 senses the movement of the user terminal 10 worn by the user as the user's movement. For example, when the user terminal 10 is configured as an HMD and worn on the head, the motion sensor 140 can sense the movement of the user's head. Also, for example, when the user terminal 10 is configured as a smartphone and the user walks around with it in a pocket or bag, the motion sensor 140 can sense the movement of the user's body. The motion sensor 140 may also be a wearable device configured separately from the user terminal 10 and worn by the user.
The position measurement unit 150 has a function of acquiring the user's current position. In this embodiment, it is assumed that the user carries the user terminal 10, and the position of the user terminal 10 is regarded as the user's current position.
The display unit 160 has a function of displaying video (images) from the user's viewpoint in the virtual space. For example, the display unit 160 may be a display panel such as a liquid crystal display (LCD) or an organic EL (Electro Luminescence) display.
The audio output unit 170 outputs audio signals under the control of the control unit 120. The audio output unit 170 may be configured as, for example, headphones, earphones, or a bone-conduction speaker.
The storage unit 180 is realized by a ROM (Read Only Memory) that stores programs, calculation parameters, and the like used in the processing of the control unit 120, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate. The storage unit 180 according to this embodiment may store, for example, an algorithm for state recognition.
FIG. 4 is a block diagram showing an example of the configuration of the virtual space server 20 according to this embodiment. As shown in FIG. 4, the virtual space server 20 includes a communication unit 210, a control unit 220, and a storage unit 230.
The communication unit 210 transmits and receives data to and from external devices by wire or wirelessly. The communication unit 210 connects to the user terminal 10 using, for example, a wired/wireless LAN (Local Area Network), Wi-Fi (registered trademark), Bluetooth (registered trademark), or a mobile communication network (LTE (Long Term Evolution), 4G (fourth-generation mobile communication system), 5G (fifth-generation mobile communication system)), or the like.
The control unit 220 functions as an arithmetic processing device and a control device, and controls overall operations within the virtual space server 20 according to various programs. The control unit 220 is realized by an electronic circuit such as a CPU (Central Processing Unit) or a microprocessor. The control unit 220 may also include a ROM (Read Only Memory) that stores the programs, calculation parameters, and the like to be used, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate.
The storage unit 230 is realized by a ROM (Read Only Memory) that stores programs, calculation parameters, and the like used in the processing of the control unit 220, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate. In this embodiment, the storage unit 230 stores information on the virtual space.
Next, the flow of virtual object processing according to this embodiment will be specifically described with reference to the drawings. FIG. 6 is a sequence diagram showing an example of the flow of operation processing according to this embodiment. The processing shown in FIG. 6 may be performed during a non-operation period in which the user does not operate the avatar (for example, when the user has logged out, has closed the screen displaying the video of the virtual space, or has not performed any operation for a certain period of time).
<4-1. Generation of Avatar Behavior in Consideration of Privacy>
In the embodiment described above, the user's state and position in the real space can be reflected in the avatar's behavior. However, since communication in the virtual space involves an unspecified large number of other users, consideration of privacy is also important. Even in the embodiment described above, the user's specific current position is not disclosed, but depending on the situation, it may be necessary to generate the avatar's autonomous behavior with stricter consideration of privacy.
In addition to random selection, the general-purpose action described above may be selected by a learning-based method using the action history of each avatar in the virtual space. FIG. 8 is a configuration diagram illustrating the generation of a general-purpose action according to a modification of this embodiment. The avatar action history DB 182 shown in FIG. 8 is a database that accumulates the action history (including time axis information) of all avatars in the virtual space. The information accumulated in the avatar action history DB 182 may be, for example, autonomous avatar actions that reflect users' actions in the real space. When generating a general-purpose action, the avatar action generation unit 221 refers to the current time axis information and the avatar action history DB 182, and acquires information on the proportion of each autonomous action performed by avatars at the corresponding time. The avatar action generation unit 221 then determines an action with a higher proportion as the general-purpose action (selecting probabilistically based on the proportion information). As a result, the avatar can be made to behave in the same way as the majority of avatars, protecting the user's privacy in a more natural and unobtrusive manner.
The virtual space server 20 may set a reward for the avatar's autonomous action. For example, shopping actions may yield items usable in the virtual space; working or defeating enemies may yield experience points, currency, or the like in the virtual space; and actions at home may yield rewards such as recovery of the stamina used in the virtual space. Alternatively, when the avatar moves in the virtual space through autonomous action, the movement information and video from the avatar's viewpoint may be recorded so that the user can review the video and other records when resuming operation, which may itself serve as a reward.
Unlike the real space, the virtual space is little affected by the distance between users, and users can easily communicate with users all over the world. However, when users' actions in the real space are reflected in the autonomous actions of their avatars, inconsistent avatar actions by users in different time zones may coexist. Therefore, the avatar action generation unit 221 of the virtual space server 20 can control the expression of the avatar's autonomous action according to the viewing user's time zone, reducing the unnaturalness caused by differing time zones.
The virtual space server 20 may reflect the user's information in the real space in the appearance of the avatar. For example, each avatar action candidate as described with reference to FIG. 5 may be associated with an avatar appearance, such as pajamas for "sleeping" and a suit for "working". The avatar control unit 222 of the virtual space server 20 can change the avatar's appearance as appropriate when controlling the avatar's autonomous action. Since each action corresponds to an appearance, the appearance can be changed in the same way when an action is generated for privacy protection as described above (generation of a general-purpose action).
Although the preferred embodiments of the present disclosure have been described above in detail with reference to the accompanying drawings, the present technology is not limited to these examples. It is clear that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various changes or modifications within the scope of the technical ideas described in the claims, and it is understood that these naturally belong to the technical scope of the present disclosure.
(1)
An information processing device comprising a control unit that controls, according to a user's operation, the behavior of a virtual object associated with the user in a virtual space,
wherein, during a period in which the user performs no operation, the control unit generates a behavior of the virtual object based on sensing data of the user in the real space and performs control to reflect the behavior in the virtual object.
(2)
The information processing device according to (1), wherein the sensing data includes information on at least one of the user's state and position.
(3)
The information processing device according to (1) or (2), wherein the control unit generates the behavior of the virtual object by referring to a database in which behavior candidates of the virtual object are associated with at least one of one or more states and positions.
(4)
The information processing device according to (3), wherein, in the referencing, the control unit calculates a matching rate between each behavior candidate defined in the database and the sensing data, and generates the behavior of the virtual object by selecting one behavior from the behavior candidates based on the matching rate.
(5)
The information processing device according to (3) or (4), wherein the control unit generates the behavior of the virtual object according to a privacy level set for each behavior candidate.
(6)
The information processing device according to (5), wherein, when the privacy level of the one behavior selected from the behavior candidates is a level not permitted to another user viewing the avatar that is the virtual object, the control unit performs control to generate a general-purpose behavior as the behavior of the avatar.
(7)
The information processing device according to (6), wherein the control unit randomly selects the general-purpose behavior from the behavior candidates.
(8)
The information processing device according to (6), wherein the control unit generates the general-purpose behavior based on the behavior history of each avatar in the virtual space.
(9)
The information processing device according to any one of (1) to (8), wherein, when reflecting the generated behavior in the virtual object, the control unit performs control to change the appearance of the virtual object to an appearance associated with the generated behavior.
(10)
The information processing device according to any one of (1) to (9), wherein the control unit generates an image of the user's viewpoint in the virtual space and performs control to transmit the image to a user terminal.
(11)
The information processing device according to any one of (1) to (10), further comprising a communication unit, wherein the communication unit receives the sensing data from a user terminal.
(12)
An information processing method including: a processor controlling, according to a user's operation, the behavior of a virtual object associated with the user in a virtual space; and further, during a period in which the user performs no operation, generating a behavior of the virtual object based on sensing data of the user in the real space and performing control to reflect the behavior in the virtual object.
(13)
A program for causing a computer to function as a control unit that controls, according to a user's operation, the behavior of a virtual object associated with the user in a virtual space, wherein, during a period in which the user performs no operation, the control unit generates a behavior of the virtual object based on sensing data of the user in the real space and performs control to reflect the behavior in the virtual object.
110 Communication unit
120 Control unit
121 State recognition unit
130 Operation input unit
140 Motion sensor
150 Position measurement unit
160 Display unit
170 Audio output unit
180 Storage unit
20 Virtual space server
210 Communication unit
220 Control unit
221 Avatar action generation unit
222 Avatar control unit
230 Storage unit
Claims (13)
- An information processing device comprising a control unit that controls, according to a user's operation, the behavior of a virtual object associated with the user in a virtual space, wherein, during a period in which the user performs no operation, the control unit generates a behavior of the virtual object based on sensing data of the user in the real space and performs control to reflect the behavior in the virtual object.
- The information processing device according to claim 1, wherein the sensing data includes information on at least one of the user's state and position.
- The information processing device according to claim 1, wherein the control unit generates the behavior of the virtual object by referring to a database in which behavior candidates of the virtual object are associated with at least one of one or more states and positions.
- The information processing device according to claim 3, wherein, in the referencing, the control unit calculates a matching rate between each behavior candidate defined in the database and the sensing data, and generates the behavior of the virtual object by selecting one behavior from the behavior candidates based on the matching rate.
- The information processing device according to claim 3, wherein the control unit generates the behavior of the virtual object according to a privacy level set for each behavior candidate.
- The information processing device according to claim 5, wherein, when the privacy level of the one behavior selected from the behavior candidates is a level not permitted to another user viewing the avatar that is the virtual object, the control unit performs control to generate a general-purpose behavior as the behavior of the avatar.
- The information processing device according to claim 6, wherein the control unit randomly selects the general-purpose behavior from the behavior candidates.
- The information processing device according to claim 6, wherein the control unit generates the general-purpose behavior based on the behavior history of each avatar in the virtual space.
- The information processing device according to claim 1, wherein, when reflecting the generated behavior in the virtual object, the control unit performs control to change the appearance of the virtual object to an appearance associated with the generated behavior.
- The information processing device according to claim 1, wherein the control unit generates an image of the user's viewpoint in the virtual space and performs control to transmit the image to a user terminal.
- The information processing device according to claim 1, further comprising a communication unit, wherein the communication unit receives the sensing data from a user terminal.
- An information processing method including: a processor controlling, according to a user's operation, the behavior of a virtual object associated with the user in a virtual space; and further, during a period in which the user performs no operation, generating a behavior of the virtual object based on sensing data of the user in the real space and performing control to reflect the behavior in the virtual object.
- A program for causing a computer to function as a control unit that controls, according to a user's operation, the behavior of a virtual object associated with the user in a virtual space, wherein, during a period in which the user performs no operation, the control unit generates a behavior of the virtual object based on sensing data of the user in the real space and performs control to reflect the behavior in the virtual object.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22863854.0A EP4386687A1 (en) | 2021-09-03 | 2022-02-18 | Information processing device, information processing method, and program |
JP2023545023A JPWO2023032264A1 (ja) | 2021-09-03 | 2022-02-18 | |
CN202280057465.3A CN117859154A (zh) | 2021-09-03 | 2022-02-18 | 信息处理装置、信息处理方法和程序 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-143758 | 2021-09-03 | ||
JP2021143758 | 2021-09-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023032264A1 true WO2023032264A1 (ja) | 2023-03-09 |
Family
ID=85411759
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/006581 WO2023032264A1 (ja) | 2021-09-03 | 2022-02-18 | 情報処理装置、情報処理方法、およびプログラム |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP4386687A1 (ja) |
JP (1) | JPWO2023032264A1 (ja) |
CN (1) | CN117859154A (ja) |
WO (1) | WO2023032264A1 (ja) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005216218A (ja) * | 2004-02-02 | 2005-08-11 | Core Colors:Kk | 仮想コミュニティシステム |
JP2009140492A (ja) | 2007-12-06 | 2009-06-25 | Internatl Business Mach Corp <Ibm> | 実世界の物体およびインタラクションを仮想世界内にレンダリングする方法、システムおよびコンピュータ・プログラム |
JP2012511187A (ja) * | 2008-12-08 | 2012-05-17 | ソニー オンライン エンタテインメント エルエルシー | オンラインシミュレーション及びネットワークアプリケーション |
JP2014036874A (ja) * | 2007-10-22 | 2014-02-27 | Avaya Inc | 仮想環境における通信セッションの提示 |
JP2015505249A (ja) * | 2011-05-27 | 2015-02-19 | マイクロソフト コーポレーション | 非プレーヤー・キャラクターを演ずる友人のアバター |
JP2019061434A (ja) * | 2017-09-26 | 2019-04-18 | 株式会社コロプラ | プログラム、情報処理装置、情報処理システム、および情報処理方法 |
-
2022
- 2022-02-18 CN CN202280057465.3A patent/CN117859154A/zh active Pending
- 2022-02-18 JP JP2023545023A patent/JPWO2023032264A1/ja active Pending
- 2022-02-18 EP EP22863854.0A patent/EP4386687A1/en active Pending
- 2022-02-18 WO PCT/JP2022/006581 patent/WO2023032264A1/ja active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005216218A (ja) * | 2004-02-02 | 2005-08-11 | Core Colors:Kk | 仮想コミュニティシステム |
JP2014036874A (ja) * | 2007-10-22 | 2014-02-27 | Avaya Inc | 仮想環境における通信セッションの提示 |
JP2009140492A (ja) | 2007-12-06 | 2009-06-25 | Internatl Business Mach Corp <Ibm> | 実世界の物体およびインタラクションを仮想世界内にレンダリングする方法、システムおよびコンピュータ・プログラム |
JP2012511187A (ja) * | 2008-12-08 | 2012-05-17 | ソニー オンライン エンタテインメント エルエルシー | オンラインシミュレーション及びネットワークアプリケーション |
JP2015505249A (ja) * | 2011-05-27 | 2015-02-19 | マイクロソフト コーポレーション | 非プレーヤー・キャラクターを演ずる友人のアバター |
JP2019061434A (ja) * | 2017-09-26 | 2019-04-18 | 株式会社コロプラ | プログラム、情報処理装置、情報処理システム、および情報処理方法 |
Also Published As
Publication number | Publication date |
---|---|
EP4386687A1 (en) | 2024-06-19 |
JPWO2023032264A1 (ja) | 2023-03-09 |
CN117859154A (zh) | 2024-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7109408B2 (ja) | 広範囲同時遠隔ディジタル提示世界 | |
JP7002684B2 (ja) | 拡張現実および仮想現実のためのシステムおよび方法 | |
US11080310B2 (en) | Information processing device, system, information processing method, and program | |
JP6345282B2 (ja) | 拡張現実および仮想現実のためのシステムおよび方法 | |
CN109643161A (zh) | 动态进入和离开由不同hmd用户浏览的虚拟现实环境 | |
WO2014119098A1 (ja) | 情報処理装置、端末装置、情報処理方法及びプログラム | |
WO2014119097A1 (ja) | 情報処理装置、端末装置、情報処理方法及びプログラム | |
WO2023032264A1 (ja) | 情報処理装置、情報処理方法、およびプログラム | |
JP2023095862A (ja) | プログラム及び情報処理方法 | |
JP7375143B1 (ja) | プログラムおよび情報処理システム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22863854 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023545023 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202280057465.3 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022863854 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2022863854 Country of ref document: EP Effective date: 20240313 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |