WO2023032264A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program Download PDF

Info

Publication number
WO2023032264A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
action
avatar
virtual object
information processing
Prior art date
Application number
PCT/JP2022/006581
Other languages
French (fr)
Japanese (ja)
Inventor
隆太郎 峯
巨成 高橋
仕豪 温
真幹 堀川
Original Assignee
ソニーグループ株式会社 (Sony Group Corporation)
Application filed by ソニーグループ株式会社 (Sony Group Corporation)
Priority to CN202280057465.3A (publication CN117859154A)
Priority to JP2023545023A (publication JPWO2023032264A1)
Priority to EP22863854.0A (publication EP4386687A1)
Publication of WO2023032264A1

Classifications

    • G — PHYSICS
      • G06 — COMPUTING; CALCULATING OR COUNTING
        • G06F — ELECTRIC DIGITAL DATA PROCESSING
          • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
              • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
        • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 19/00 — Manipulating 3D models or images for computer graphics
    • H — ELECTRICITY
      • H04 — ELECTRIC COMMUNICATION TECHNIQUE
        • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
            • H04N 21/20 — Servers specifically adapted for the distribution of content, e.g. VOD servers; operations thereof
              • H04N 21/23 — Processing of content or additional data; elementary server operations; server middleware
                • H04N 21/234 — Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
              • H04N 21/25 — Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
                • H04N 21/258 — Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users' preferences to derive collaborative data

Definitions

  • the present disclosure relates to an information processing device, an information processing method, and a program.
  • Avatars are represented by, for example, two-dimensional or three-dimensional CG (Computer Graphics).
  • Patent Document 1 discloses a technique for reflecting a communication participant's real-world actions, and the objects the participant holds, in that participant's avatar in the virtual space.
  • during a period in which the user performs no operation, unnatural phenomena such as the avatar (virtual object) disappearing from the virtual space, or the avatar not moving at all in the virtual space, may occur and give other users a sense of incongruity toward the virtual space.
  • the present disclosure proposes an information processing device, an information processing method, and a program capable of causing a virtual object associated with a user in a virtual space to behave more naturally even when the user is not operating.
  • according to the present disclosure, an information processing device is proposed that includes a control unit that controls the behavior of a virtual object associated with a user in a virtual space according to the user's operation, wherein, during a non-operation period of the user, the control unit generates behavior of the virtual object based on the user's sensing data in the real space and performs control to reflect that behavior in the virtual object.
  • according to the present disclosure, an information processing method is proposed in which a processor controls the behavior of a virtual object associated with a user in a virtual space according to the user's operation and, further, during a period in which the user is not operating, generates behavior of the virtual object based on the user's sensing data in the real space and performs control to reflect that behavior in the virtual object.
  • according to the present disclosure, a program is proposed that causes a computer to function as a control unit that controls the behavior of a virtual object associated with a user in a virtual space according to the user's operation, wherein, during a period in which the user is not operating, the control unit generates behavior of the virtual object based on the user's sensing data in the real space and performs control to reflect that behavior in the virtual object.
  • FIG. 1 is a diagram describing an overview of an information processing system according to an embodiment of the present disclosure.
  • FIG. 2 is a diagram illustrating how the user's sensing data in the real space is reflected in the behavior of the user's avatar in the virtual space according to the present embodiment.
  • FIG. 3 is a block diagram showing an example of the configuration of the user terminal according to the present embodiment.
  • FIG. 4 is a block diagram showing an example of the configuration of the management server according to the present embodiment.
  • FIG. 5 is a diagram showing an example of the avatar behavior database according to the present embodiment.
  • FIG. 6 is a sequence diagram showing an example of the flow of operation processing according to the present embodiment.
  • FIG. 7 is a diagram illustrating an example of expression of an avatar's autonomous action restricted according to the privacy level according to the present embodiment.
  • FIG. 8 is a configuration diagram explaining generation of a general-purpose action according to a modification of the present embodiment.
  • An information processing system relates to control of a virtual object that is associated with a user in a virtual space and serves as an alter ego of the user.
  • a virtual object that serves as an alter ego of a user is, for example, a humanoid or non-human character represented by two-dimensional or three-dimensional CG, and is also called an avatar.
  • communication in virtual space has become widespread, and it now includes not only simple communication such as games and conversations, but also various forms of communication such as live streaming by artists and trading of in-game content such as 3D models.
  • FIG. 1 is a diagram explaining an overview of an information processing system according to an embodiment of the present disclosure.
  • the information processing system according to the present embodiment includes one or more user terminals 10 (10A, 10B, 10C, ...) and a virtual space server 20 (an example of an information processing device) that manages the virtual space.
  • the user terminal 10 is an information processing terminal used by the user.
  • the user terminal 10 transmits information of operation input by the user and sensing data to the virtual space server 20 .
  • the user terminal 10 performs control for displaying the video of the user's viewpoint in the virtual space received from the virtual space server 20 .
  • the user's point of view may be the point of view of the user's avatar in the virtual space, or the point of view of the view including the appearance of the avatar.
  • the user terminal 10 can be realized by a smartphone, a tablet terminal, a PC (personal computer), an HMD (Head Mounted Display) worn on the head, a projector, a television device, a game machine, or the like.
  • the HMD may have a non-transmissive display that covers the entire field of view, or may have a transmissive display.
  • Examples of HMDs having a non-transmissive display unit include glasses-type devices having a so-called AR (Augmented Reality) display function that superimposes and displays a virtual object in real space.
  • the HMD may be a device capable of arbitrarily switching the display unit between a non-transmissive type and a transmissive type.
  • the user can experience virtual space through VR (Virtual Reality).
  • the display unit of the HMD includes a left-eye display and a right-eye display, allowing the user to stereoscopically view an image from the user's viewpoint in the virtual space, thereby providing a more realistic sense of immersion in the virtual space.
  • the virtual space server 20 is an information processing device that generates and controls virtual space, and generates and distributes video from arbitrary viewpoints in virtual space.
  • the virtual space server 20 may be realized by a single device or by a system composed of a plurality of servers.
  • Various 2D or 3D virtual objects are placed in the virtual space.
  • An example of a virtual object is each user's avatar.
  • the virtual space server 20 can control each user's avatar in real time based on information received from each user terminal 10 .
  • Each user can view video from a user's point of view (for example, a user's avatar's point of view) in virtual space using the user terminal 10 and communicate with other users via the avatar.
  • the virtual space server 20 can also control the transmission of the voice received from the user terminal 10 (user's uttered voice) to the other user terminal 10 corresponding to another user avatar near the user avatar. This enables voice conversations between avatars in the virtual space. Conversation between avatars is not limited to voice, and may be conducted in text.
  • the avatar (virtual object) placed in the virtual space is operated by the user in real time; however, when the avatar is no longer controlled, for example because the user logs out or stops operating it, the avatar may suddenly disappear from the virtual space or fall into a state in which it does not move at all. If such a phenomenon, which would be unnatural in the real space, occurs, there is a risk that other users will feel a sense of incongruity toward the virtual space. In particular, in the case of the Metaverse, which is used as a second living space, it is not desirable for the avatar to suddenly disappear or become completely motionless.
  • the information processing system according to the present disclosure therefore makes it possible to cause the avatar, which is a virtual object associated with the user in the virtual space, to behave more naturally even when the user is not operating.
  • the virtual space server 20 avoids an unnatural state by causing the avatar to act autonomously while the user is not operating.
  • uniform behavior control by a simple autopilot is not sufficient for a more natural expression of avatar behavior.
  • in the present embodiment, during the non-operation period, the behavior of the avatar associated with the user is generated based on the user's sensing data in the real space, and control is performed to reflect that behavior in the avatar.
  • as a result, more natural autonomous behavior of the avatar is realized, and because the user's behavior in the real space is reflected in the user's avatar, the user's sense of incongruity toward the avatar's autonomous behavior, when the user compares the state of their avatar in the virtual space with their own state in the real space, can be reduced.
  • FIG. 2 is a diagram explaining how the user's sensing data in the real space is reflected in the behavior of the user's avatar 4 in the virtual space according to this embodiment.
  • as shown in FIG. 2, for example, the user's state while shopping in the real space is sensed by various sensors, and the sensing data is transmitted from the user terminal 10 to the virtual space server 20 and reflected in the behavior of the avatar 4 in the virtual space.
  • the virtual space server 20 generates “shopping behavior” from sensing data and reflects it on the behavior of the avatar 4 .
  • the virtual space server 20 controls the behavior of the avatar 4 to purchase a specified product at a specified store in the virtual space.
  • the product to be purchased may be a product that the user has previously added to the planned purchase/favorite list, or may be appropriately determined based on the user's tastes and preferences, action history in the virtual space, tasks, and the like.
  • a predetermined item may be purchased free of charge as a reward from the service side for the autonomous action.
  • Shopping behaviors that do not involve actual purchases, such as avatars entering and exiting stores in virtual space and window shopping around a number of stores, may also be used.
  • the avatar in the virtual space autonomously performs natural actions, which reduces the discomfort felt by other users.
  • in addition, since the user's behavior in the real space is reflected in the user's avatar, the user's sense of incongruity with respect to the avatar's autonomous behavior is reduced.
  • FIG. 3 is a block diagram showing an example of the configuration of the user terminal 10 according to this embodiment.
  • the user terminal 10 has a communication unit 110, a control unit 120, an operation input unit 130, a motion sensor 140, a positioning unit 150, a display unit 160, an audio output unit 170, and a storage unit 180.
  • the user terminal 10 may be implemented by, for example, a wearable device such as a transparent or non-transparent HMD, smart phone, tablet terminal, smart watch, or smart band.
  • the communication unit 110 communicates with the virtual space server 20 by wire or wirelessly to transmit and receive data.
  • the communication unit 110 can communicate using, for example, a wired/wireless LAN (Local Area Network), Wi-Fi (registered trademark), Bluetooth (registered trademark), infrared communication, or a mobile communication network (4G (fourth-generation mobile communication system), 5G (fifth-generation mobile communication system)), or the like.
  • control unit 120 functions as an arithmetic processing device and a control device, and controls overall operations within the user terminal 10 according to various programs.
  • the control unit 120 is realized by an electronic circuit such as a CPU (Central Processing Unit), a microprocessor, or the like.
  • the control unit 120 may also include a ROM (Read Only Memory) that stores programs to be used, calculation parameters, and the like, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate.
  • the control unit 120 performs control to display on the display unit 160 the video from the user's viewpoint in the virtual space, which is transmitted (for example, streamed) from the virtual space server 20 .
  • the control unit 120 also controls the reproduction of the audio signal transmitted from the virtual space server 20 together with the video of the user's viewpoint from the audio output unit 170 .
  • the control unit 120 controls transmission of information acquired by the operation input unit 130 , the motion sensor 140 , and the positioning unit 150 from the communication unit 110 to the virtual space server 20 . For example, various operation information is input from the operation input unit 130 and transmitted to the virtual space server 20 as input information for user operations on the virtual space.
  • motion data acquired by the motion sensor 140 can be transmitted to the virtual space server 20 as information for controlling the position and posture (orientation of the face, etc.) of the avatar.
  • in this way, the user terminal 10 (motion sensor 140) serves both as a device used to operate the avatar and as a device that senses the user's real space, which is used to generate the autonomous behavior of the avatar during the avatar's non-operation period.
  • the control unit 120 also functions as a state recognition unit 121.
  • the state recognition unit 121 recognizes the user's state based on the user's sensing data acquired by the motion sensor 140 .
  • the state of the user is, for example, walking, running, standing, sitting, or sleeping.
  • the state of the user recognized by the state recognition unit 121 is transmitted to the virtual space server 20 by the control unit 120, and used in the virtual space server 20 when generating the autonomous action of the avatar.
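The publication does not specify how the state recognition unit 121 derives these states. As one illustrative possibility, a coarse state can be inferred from motion-sensor statistics; the Python sketch below is a minimal, hypothetical example in which the function name, thresholds, and the use of a vertical-axis gravity cue are assumptions, not part of the disclosure.

```python
import math
from statistics import pstdev

# Hypothetical thresholds for an illustrative variance-based classifier.
STILL_STD = 0.3   # std. dev. of acceleration magnitude (m/s^2) below which the user is treated as still
WALK_STD = 2.0    # values between STILL_STD and WALK_STD are treated as walking, above as running

def recognize_state(accel_samples, vertical_mean=None):
    """Coarsely classify a window of 3-axis accelerometer samples.

    accel_samples: list of (ax, ay, az) tuples, in m/s^2, covering a few seconds.
    vertical_mean: optional mean gravity component along the body's vertical axis,
                   used to guess whether a still user is lying down or upright.
    """
    magnitudes = [math.sqrt(ax**2 + ay**2 + az**2) for ax, ay, az in accel_samples]
    variation = pstdev(magnitudes)

    if variation < STILL_STD:
        # Still: little gravity along the body's vertical axis suggests lying down.
        if vertical_mean is not None and abs(vertical_mean) < 3.0:
            return "sleeping"
        return "sitting"
    return "walking" if variation < WALK_STD else "running"
```

In practice, such a recognizer could be replaced by any activity-recognition method; the recognized label is what gets transmitted to the virtual space server 20.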
  • the location information acquired by the location positioning unit 150 is also used in the virtual space server 20 when generating autonomous actions of the avatar.
  • the control unit 120 may transmit the location information to the virtual space server 20 together with the state of the user, or may transmit the location information when a change (movement) in the location is detected.
  • control unit 120 may transmit the user's state and location information to the virtual space server 20 during a non-operation period in which the user does not operate the avatar in the virtual space. Also, the control unit 120 may specify the name of the place where the user is by combining the positional information and the map information, and transmit the name to the virtual space server 20 .
  • the place name may be a general name. For example, if it is specified that the user is in "XX Park in XX City", only the general name "park" may be transmitted to the virtual space server 20. This can protect the user's privacy. Map information may be stored in the storage unit 180 in advance.
  • the map information is not limited to outdoor map information, and includes indoor map information such as inside a school, inside a company, inside a department store, inside one's home, and the like.
  • Control unit 120 can also identify which room the user is in, such as a bedroom or a living room, from the position information.
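The generalization of a measured position into a non-identifying place name (as in the "park" example above) could likewise be implemented with a small local map lookup. The sketch below is illustrative only; the coordinates, place entries, and search radius are hypothetical and not taken from the disclosure.

```python
import math

# Hypothetical local "map information": each entry pairs a coordinate with a
# specific place name and a generic category; indoor maps (rooms of the user's
# home, floors of a department store, etc.) could be handled the same way.
LOCAL_MAP = [
    {"lat": 35.6851, "lon": 139.7527, "name": "XX Park in XX City", "generic": "park"},
    {"lat": 35.6895, "lon": 139.6917, "name": "YY Department Store", "generic": "shop"},
    {"lat": 35.6580, "lon": 139.7016, "name": "User's home (bedroom)", "generic": "bedroom"},
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 coordinates."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def generalized_place(lat, lon, radius_m=100.0):
    """Return only the generic place name ("park", "shop", ...) for privacy."""
    best = min(LOCAL_MAP, key=lambda e: haversine_m(lat, lon, e["lat"], e["lon"]))
    if haversine_m(lat, lon, best["lat"], best["lon"]) <= radius_m:
        return best["generic"]
    return None  # unknown location: send nothing rather than raw coordinates
```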
  • the control unit 120 may also use the user's sensing data obtained via the communication unit 110 from an external sensor (for example, a camera installed around the user, or a motion sensor attached to the user separately from the user terminal 10) for recognizing the user's state and specifying the location, or may transmit such data to the virtual space server 20 as it is.
  • control unit 120 may transmit the operation information received from the controller held by the user to the virtual space server 20 .
  • Operation input unit 130 receives an operation instruction from the user and outputs the operation content to control unit 120 .
  • the operation input unit 130 may be, for example, a touch sensor, a pressure sensor, or a proximity sensor.
  • the operation input unit 130 may be a physical configuration such as buttons, switches, and levers.
  • the motion sensor 140 has a function of sensing the motion of the user. More specifically, motion sensor 140 may have an acceleration sensor, an angular velocity sensor, and a geomagnetic sensor. Furthermore, the motion sensor 140 may be a sensor capable of detecting a total of 9 axes, including a 3-axis gyro sensor, a 3-axis acceleration sensor, and a 3-axis geomagnetic sensor.
  • the motion of the user includes motion of the user's body and motion of the head. More specifically, the motion sensor 140 senses the movement of the user terminal 10 worn by the user as the movement of the user. For example, when the user terminal 10 is configured by an HMD and worn on the head, the motion sensor 140 can sense the movement of the user's head.
  • the motion sensor 140 can sense the movement of the user's body.
  • the motion sensor 140 may be a wearable device configured separately from the user terminal 10 and worn by the user.
  • the positioning unit 150 has a function of acquiring the current position of the user. In this embodiment, it is assumed that the user possesses the user terminal 10, and the position of the user terminal 10 is regarded as the current position of the user.
  • the positioning unit 150 calculates the absolute or relative position of the user terminal 10 .
  • the position positioning unit 150 may position the current position based on an acquired signal from the outside.
  • for example, GNSS (Global Navigation Satellite System) positioning may be used.
  • a method of detecting a position by transmitting/receiving with Wi-Fi (registered trademark), Bluetooth (registered trademark), a mobile phone/PHS/smartphone, or the like, short-distance communication, or the like may be used.
  • the positioning unit 150 may estimate information indicating relative changes based on the detection results of an acceleration sensor, an angular velocity sensor, or the like.
  • the positioning unit 150 can perform outdoor positioning and indoor positioning using the various methods described above.
  • the position may include an altitude.
  • Position positioning unit 150 may include an altimeter.
  • the display unit 160 has a function of displaying a video (image) of the user's viewpoint in the virtual space.
  • the display unit 160 may be a display panel such as a liquid crystal display (LCD) or an organic EL (Electro Luminescence) display.
  • Audio output section 170 outputs an audio signal under the control of control section 120 .
  • the audio output unit 170 may be configured as headphones, earphones, or bone conduction speakers, for example.
  • the storage unit 180 is implemented by a ROM (Read Only Memory) that stores programs, calculation parameters, and the like used in the processing of the control unit 120, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate.
  • the storage unit 180 according to this embodiment may store, for example, an algorithm for state recognition.
  • although the configuration of the user terminal 10 has been specifically described above, the configuration of the user terminal 10 according to the present disclosure is not limited to the example shown in FIG. 3.
  • the user terminal 10 may be realized by multiple devices.
  • the motion sensor 140 or the positioning unit 150 and the control unit 120 may be configured separately.
  • the user terminal 10 may further have various sensors.
  • for example, the user terminal 10 may have a camera, a microphone, a biosensor (which detects pulse, heart rate, perspiration, blood pressure, body temperature, respiration, myoelectric values, electroencephalograms, etc.), a gaze detection sensor, a distance measurement sensor, and the like, and the information obtained by these may be transmitted to the virtual space server 20.
  • the state recognition unit 121 may recognize the user's state (running, walking, sleeping, etc.) in consideration of not only motion data but also biometric data acquired by a biosensor, for example.
  • the control unit 120 may analyze the captured image around the user acquired by the camera, and specify the user's position (the name of the place where the user is).
  • FIG. 4 is a block diagram showing an example of the configuration of the virtual space server 20 according to this embodiment. As shown in FIG. 4, the virtual space server 20 has a communication unit 210, a control unit 220, and a storage unit 230.
  • the communication unit 210 transmits and receives data to and from an external device by wire or wirelessly.
  • the communication unit 210 connects to and communicates with the user terminal 10 using, for example, a wired/wireless LAN (Local Area Network), Wi-Fi (registered trademark), Bluetooth (registered trademark), or a mobile communication network (LTE (Long Term Evolution), 4G (fourth-generation mobile communication system), 5G (fifth-generation mobile communication system)).
  • control unit 220 functions as an arithmetic processing device and a control device, and controls overall operations within the virtual space server 20 according to various programs.
  • the control unit 220 is implemented by an electronic circuit such as a CPU (Central Processing Unit), a microprocessor, or the like.
  • the control unit 220 may also include a ROM (Read Only Memory) that stores programs to be used, calculation parameters, and the like, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate.
  • the control unit 220 also functions as an avatar behavior generation unit 221 and an avatar control unit 222.
  • the avatar behavior generation unit 221 has a function of generating the avatar's autonomous behavior based on the user's sensing data in the real space during the non-operation period.
  • the avatar control unit 222 has a function of controlling the user's avatar according to the autonomous behavior generated by the avatar behavior generation unit 221 .
  • the user's sensing data is, for example, at least one of the user's state and location information. Sensing data may be transmitted from the user terminal 10 .
  • the information detected by the motion sensor 140 or the positioning unit 150 may be transmitted directly from the user terminal 10, or the recognition result recognized based on the information may be transmitted.
  • a user's state can be recognized from the motion data, as described above. Such recognition may be performed by the user terminal 10 or by the control unit 220 of the virtual space server 20 .
  • the location information may be the name of the place.
  • by reflecting the user's state and position information in the real space in the avatar's autonomous action it is possible to reduce the user's sense of incongruity with respect to the avatar's autonomous action during the non-operation period.
  • a user's sense of incongruity may occur, for example, when the user's avatar behaves arbitrarily in a familiar virtual space such as the Metaverse in a way that has nothing to do with the user's actual behavior.
  • the avatar action generation unit 221 may refer to a database of avatar actions based on the user state and position information obtained by sensing the user, for example, to generate autonomous actions of the avatar.
  • An example of an avatar behavior database is shown in FIG.
  • a database is used in which avatar actions are associated in advance with user states and positions. It can be said that states and positions (locations) are factors that constitute avatar behavior.
  • the action “eating” is composed of the state of "sitting” and the position (place) factors of "living room at home” and "restaurant”.
  • the factors that make up each avatar action are desirably factors of the user action that matches that avatar action. For example, as shown in FIG. 5, the factors of the avatar action "sleeping" include "state: sleeping" and "position: bedroom", which constitute the matching user action "sleeping".
  • the avatar action defined here is an action that the avatar can perform in the virtual space.
  • the correspondence relationship between the avatar behavior shown in FIG. 5 and each factor such as state and position is an example, and the present embodiment is not limited to this.
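FIG. 5 itself is not reproduced in this text. As one way to picture the database described above, the following dictionary mirrors the examples given in the text ("eating", "sleeping", "shopping", "defeating an enemy"); the specific factor values are assumptions made for illustration.

```python
# Illustrative avatar behavior database modeled on the examples in the text.
# Each avatar action is associated with the state factors and position factors
# that constitute the matching user action; the exact entries are assumptions.
AVATAR_ACTION_DB = {
    "eating":             {"states": {"sitting"},  "positions": {"living room", "restaurant"}},
    "sleeping":           {"states": {"sleeping"}, "positions": {"bedroom"}},
    "shopping":           {"states": {"walking"},  "positions": {"shop"}},
    "defeating an enemy": {"states": {"walking"},  "positions": {"field"}},
}
```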
  • the avatar action generation unit 221 matches the user state and position information acquired from the user terminal 10 against the avatar action database and calculates a matching rate. For example, if the information obtained from the user terminal 10 is "state: walking" and "position: shop", the matching rate for "shopping" among the avatar actions listed in the database of FIG. 5 is calculated as 100%, while the matching rate for "defeating an enemy" is calculated as 50%, because only the state factor, out of the state factor and position factor of "defeating an enemy", is satisfied. In this case, the avatar action generation unit 221 may determine the avatar action that corresponds completely (with a matching rate of 100%), here "shopping", as the autonomous action of the avatar.
  • alternatively, the avatar action generation unit 221 may stochastically determine the avatar action from candidates that include at least one of the corresponding state factor and position factor. At this time, by increasing the selection probability of candidates that include more matching factors, the avatar action generation unit 221 can determine a natural autonomous action that causes less discomfort with respect to the user's action in the real space. Note that even when there is a completely corresponding avatar action (with a matching rate of 100%), the avatar action generation unit 221 may still determine the action probabilistically.
  • the avatar action generation unit 221 selects probabilistically from a plurality of avatar action candidates that completely correspond (with a matching rate of 100%).
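A minimal sketch of the matching-rate calculation and probabilistic selection described in the preceding paragraphs is shown below, using the same illustrative database structure as the previous sketch; the exact weighting and tie-breaking rules are assumptions rather than the disclosed algorithm.

```python
import random

# Same illustrative structure as the AVATAR_ACTION_DB sketch above.
AVATAR_ACTION_DB = {
    "shopping":           {"states": {"walking"},  "positions": {"shop"}},
    "defeating an enemy": {"states": {"walking"},  "positions": {"field"}},
    "sleeping":           {"states": {"sleeping"}, "positions": {"bedroom"}},
}

def matching_rate(entry, state, position):
    """Fraction of the action's factors (state, position) satisfied by the sensed data."""
    hits = (state in entry["states"]) + (position in entry["positions"])
    return hits / 2.0

def generate_autonomous_action(state, position, db=AVATAR_ACTION_DB):
    """Pick an avatar action, favoring candidates whose factors match the sensed data."""
    rates = {action: matching_rate(entry, state, position) for action, entry in db.items()}

    perfect = [a for a, r in rates.items() if r == 1.0]
    if perfect:
        # One or more fully matching candidates: choose among them (uniformly here).
        return random.choice(perfect)

    partial = {a: r for a, r in rates.items() if r > 0.0}
    if partial:
        # Probabilistic selection weighted by the matching rate.
        actions, weights = zip(*partial.items())
        return random.choices(actions, weights=weights, k=1)[0]
    return None  # nothing matched; a general-purpose action could be substituted

# "state: walking" and "position: shop" yields "shopping" (100% matching rate),
# while "defeating an enemy" matches only the state factor (50%).
print(generate_autonomous_action("walking", "shop"))
```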
  • the categories of states and categories of positions shown in FIG. 5 are examples, and the present embodiment is not limited to these.
  • the avatar behavior database according to this embodiment can be reused in multiple virtual spaces for different services. That is, a database such as that shown in FIG. 5 may be used when generating an autonomous action of an avatar during a period of non-operation by the user in another virtual space. Information in the database can be shared through cooperation between virtual spaces. In addition, if an autonomous action defined for a given virtual space is inappropriate or insufficient, the avatar actions defined for that virtual space can be corrected, changed, or added to as appropriate, so that an appropriate database can be created for each virtual space. For example, an avatar action of "defeating an enemy" may be changed to "cultivating a field" without changing its constituent factors. It is also possible for the user's avatar to move to another virtual space through cooperation between the virtual spaces.
  • the user terminal 10 may transmit user sensing data to multiple virtual space servers 20 . This makes it possible to refer to databases in each of a plurality of virtual spaces and reflect the user behavior in the real space on the autonomous behavior of the user's avatar.
  • the method of reflecting the user's sensing data in the avatar is not particularly limited; for example, the detected information may be applied to the avatar in the virtual space as it is.
  • the "user's sensing data" to be reflected in the avatar is not limited to the state or position.
  • the "state” is not limited to states recognized based on motion data. For example, the state may be recognized based on the user's uttered voice (conversation) picked up by the microphone of the user terminal 10 or biological information (heart rate, blood pressure, body temperature, etc.).
  • the storage unit 230 is implemented by a ROM (Read Only Memory) that stores programs and calculation parameters used in the processing of the control unit 220, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate. According to this embodiment, the storage unit 230 stores information on the virtual space.
  • although the configuration of the virtual space server 20 has been specifically described above, the configuration of the virtual space server 20 according to the present disclosure is not limited to the example shown in FIG. 4.
  • the virtual space server 20 may be realized by multiple devices.
  • FIG. 6 is a sequence diagram showing an example of the flow of operation processing according to this embodiment. Note that the processing shown in FIG. 6 is performed during a non-operation period in which the user does not operate the avatar (for example, when the user logs out, closes the screen displaying the image of the virtual space, or does not perform any operation for a certain period of time).
  • the user terminal 10 first acquires the movement and position of the user from each sensor (step S103). Specifically, the motion sensor 140 acquires the motion of the user, and the position measurement unit 150 acquires position information.
  • the state recognition unit 121 of the user terminal 10 recognizes the user's state based on the user's movement (motion data) (step S106).
  • the user terminal 10 transmits the location information (which may be the general name of the location) and the recognition result of the state to the virtual space server 20 (step S109).
  • the virtual space server 20 generates the user's avatar behavior based on the location information and the status recognition result received from the user terminal 10 (step S121).
  • the avatar behavior generator 221 of the virtual space server 20 may generate avatar behavior based on at least one of the position information and the state recognition result.
  • the avatar control unit 222 of the virtual space server 20 applies the avatar behavior generated (selected) by the avatar behavior generation unit 221 to the user's avatar and controls it (step S124).
  • This allows the user's avatar to act autonomously even during a period in which the user is not operating, thereby reducing unnaturalness.
  • in addition, by reflecting the user's behavior in the real space in the avatar's autonomous behavior, it is possible to reduce the user's sense of incongruity toward, and resistance to, the autonomous behavior of his or her own avatar.
  • the movement and position of the user are acquired by the user terminal 10 (or the wearable device worn by the user), thereby alleviating restrictions on the measurement range.
  • the virtual space server 20 presets a privacy level for each action defined as, for example, an avatar's autonomous action, and shows (displays) the action up to a permitted level according to the familiarity between the user and other users.
  • Table 1 below is an example of the privacy level set for the autonomous action of the avatar.
  • for example, a higher privacy level is set for an action with high privacy, such as "shopping", which involves going out.
  • the privacy level may be arbitrarily set by the user.
  • the user determines to what level the avatar's behavioral expression is permitted to be shown to other users in the virtual space. Such permission may be set individually for each other user, or may be set for each group by grouping other users in advance. For example, users with a close relationship may be permitted up to the highest privacy level (e.g., level 3), while other users without a close relationship may be permitted only up to the lowest privacy level (e.g., level 0).
  • for a viewing user who is not permitted to see the action because of its privacy level, the avatar action generation unit 221 can select a general-purpose action instead.
  • a general-purpose action is, for example, an action that is randomly selected from autonomous action candidates defined in an avatar action database. Alternatively, the action may be randomly selected from a large number of autonomous action candidates prepared as general-purpose actions.
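Table 1 is not reproduced in this extract. The sketch below illustrates, with assumed level values and viewer permissions, how the expression shown to each viewing user could be chosen: the real autonomous action for viewers permitted up to its privacy level, and a general-purpose action otherwise.

```python
import random

# Assumed privacy levels per autonomous action (Table 1 is not reproduced in the text).
ACTION_PRIVACY_LEVEL = {"sleeping": 2, "eating": 1, "shopping": 3, "defeating an enemy": 0}

# Assumed per-viewer permissions set by the user (could also be set per group).
PERMITTED_LEVEL = {"user_B": 3, "user_C": 0}

GENERAL_PURPOSE_ACTIONS = ["strolling", "sitting on a bench", "looking around"]  # illustrative

def action_shown_to(viewer, real_action):
    """Return the action expression a given viewer is allowed to see."""
    permitted = PERMITTED_LEVEL.get(viewer, 0)
    if ACTION_PRIVACY_LEVEL.get(real_action, 0) <= permitted:
        return real_action
    # Not permitted: fall back to a randomly selected general-purpose action.
    return random.choice(GENERAL_PURPOSE_ACTIONS)

# In the FIG. 7 example, user B (permitted up to level 3) sees "shopping",
# while user C sees a general-purpose action instead.
print(action_shown_to("user_B", "shopping"), action_shown_to("user_C", "shopping"))
```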
  • FIG. 7 is a diagram explaining an example of expression of autonomous actions of avatars that are restricted according to the privacy level according to this embodiment.
  • in the example shown in FIG. 7, the user's avatar 4a is viewed by user B (avatar 4b) and user C (avatar 4c) in the virtual space.
  • the virtual space server 20 refers to the database shown in FIG. 5 based on the data sensed from the user in the real space (state: walking, place: shop), and performs the autonomous action of "shopping" for the user's avatar 4a. to decide.
  • according to Table 1, since the privacy level of "shopping" is "level 3", the avatar 4a is shown performing "shopping" to user B, who is permitted up to privacy level 3.
  • the virtual space server 20 controls how the autonomous behavior of the avatar 4a is displayed, according to the permitted privacy level, when generating the video of each user's viewpoint to be transmitted to the user terminals of user B and user C.
  • the general-purpose action described above may be selected not only at random but also by a learning-based method that uses the action history of each avatar in the virtual space.
  • FIG. 8 is a configuration diagram illustrating generation of a general-purpose action according to a modified example of this embodiment.
  • the avatar action history DB 182 shown in FIG. 8 is a database that accumulates the action history (including time axis information) of all avatars in the virtual space.
  • the information accumulated in the avatar action history DB 182 may be, for example, avatar autonomous actions that reflect the user's actions in real space.
  • when generating a general-purpose action, the avatar action generation unit 221 refers to the current time axis information and the avatar action history DB 182, and acquires information on the percentage of each autonomous action performed by avatars at the corresponding time. The avatar action generation unit 221 then determines the general-purpose action by probabilistically selecting an action based on this ratio information, favoring actions with a higher percentage. As a result, the avatar can be made to behave in the same manner as most avatars do at that time, and the user's privacy can be protected in a more natural and unobtrusive manner.
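A minimal sketch of this learning-based variant follows: the ratios of autonomous actions recorded in the action history for the current time slot are used as selection weights. The data layout, values, and names are assumptions for illustration.

```python
import random
from collections import Counter

# Hypothetical avatar action history: (hour_of_day, action) records for all avatars.
AVATAR_ACTION_HISTORY = [
    (22, "sleeping"), (22, "sleeping"), (22, "eating"),
    (13, "shopping"), (13, "eating"), (13, "shopping"),
]

def general_purpose_action(current_hour):
    """Pick a general-purpose action weighted by what most avatars do at this hour."""
    counts = Counter(action for hour, action in AVATAR_ACTION_HISTORY if hour == current_hour)
    if not counts:
        return None
    actions, weights = zip(*counts.items())
    # Probabilistic selection based on the ratio of each action at the current time.
    return random.choices(actions, weights=weights, k=1)[0]

print(general_purpose_action(22))  # most likely "sleeping" in this toy history
```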
  • in the example described above, the autonomous behavior is determined stochastically by calculating the matching rate between the user's sensing data (state, position) in the real space and each avatar behavior candidate in the database.
  • by setting a threshold for the matching rate, or by adding noise when calculating the matching rate, it is possible to select an appropriate autonomous action while protecting privacy. The strength of the privacy protection can be adjusted by adjusting the threshold value or the intensity of the noise.
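One way to realize the thresholding and noise injection mentioned above is sketched here, operating on matching rates such as those computed in the earlier sketch; the noise distribution and threshold value are assumptions, and stronger noise or a higher threshold corresponds to stronger privacy protection.

```python
import random

def noisy_matching_rate(rate, noise_scale=0.2):
    """Add zero-mean Gaussian noise to a matching rate, clipped to [0, 1]."""
    return min(1.0, max(0.0, rate + random.gauss(0.0, noise_scale)))

def select_with_threshold(rates, threshold=0.5, noise_scale=0.2):
    """Keep only candidates whose noisy matching rate clears the threshold, then pick one."""
    noisy = {a: noisy_matching_rate(r, noise_scale) for a, r in rates.items()}
    candidates = {a: r for a, r in noisy.items() if r >= threshold}
    if not candidates:
        return None  # fall back to a general-purpose action
    actions, weights = zip(*candidates.items())
    return random.choices(actions, weights=weights, k=1)[0]

print(select_with_threshold({"shopping": 1.0, "defeating an enemy": 0.5}))
```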
  • the virtual space server 20 may set a reward for the avatar's autonomous action. Examples of rewards include acquiring items that can be used in the virtual space for shopping actions, acquiring experience points or in-world currency for working or defeating enemies, and recovering stamina used in the virtual space for actions at home.
  • when the avatar moves in the virtual space by autonomous action, the movement information and the video of the avatar's viewpoint may be recorded so that the user can review them when the user resumes operation.
  • Such rewards can promote an increase in the number of users who use autonomous behavior control.
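The reward rules are described only by example. The sketch below shows one assumed mapping from autonomous actions to rewards and how it might be applied to a user's in-world inventory; the action names and reward amounts are hypothetical.

```python
# Assumed reward rules for autonomous actions (illustrative values only).
AUTONOMOUS_ACTION_REWARDS = {
    "shopping": {"item": "free sample item"},
    "working": {"experience": 50, "currency": 100},
    "defeating an enemy": {"experience": 80, "currency": 30},
    "resting at home": {"stamina": 20},
}

def grant_reward(user_inventory, action):
    """Apply the reward associated with a completed autonomous action."""
    for key, value in AUTONOMOUS_ACTION_REWARDS.get(action, {}).items():
        if key == "item":
            user_inventory.setdefault("items", []).append(value)
        else:
            user_inventory[key] = user_inventory.get(key, 0) + value
    return user_inventory

print(grant_reward({"currency": 10}, "defeating an enemy"))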
  • the avatar action generation unit 221 of the virtual space server 20 can control the expression of the avatar's autonomous action according to the time zone of the viewing user, and reduce the unnaturalness caused by the different time zones.
  • specifically, the virtual space server 20 prepares an avatar action history DB (including time axis information) for each time zone, and an action that matches the viewing user's time zone, for example "sleeping", is extracted from the avatar action history DB and used for the displayed expression.
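A minimal sketch of the time-zone adjustment follows: the viewing user's local hour is used to look up, from a per-hour action history, an action that is typical at that time. The data and function names are assumptions.

```python
from datetime import datetime, timezone, timedelta
from collections import Counter

# Hypothetical per-hour action history (same structure as the earlier sketch).
AVATAR_ACTION_HISTORY = [(3, "sleeping"), (3, "sleeping"), (15, "shopping"), (15, "eating")]

def expression_for_viewer(real_action, viewer_utc_offset_hours):
    """Replace the expressed action with one typical for the viewer's local time."""
    viewer_now = datetime.now(timezone(timedelta(hours=viewer_utc_offset_hours)))
    counts = Counter(a for hour, a in AVATAR_ACTION_HISTORY if hour == viewer_now.hour)
    if not counts:
        return real_action  # no history for this hour: show the real action as-is
    # Show the action most avatars perform at the viewer's local time (e.g. "sleeping" at night).
    return counts.most_common(1)[0][0]
```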
  • the virtual space server 20 may reflect the user's information in the real space on the appearance of the avatar.
  • each candidate for the avatar action as described with reference to FIG. 5 may be associated with the appearance of the avatar.
  • the avatar control unit 222 of the virtual space server 20 can appropriately change the appearance of the avatar when controlling the autonomous action of the avatar. Since each action corresponds to the appearance, the appearance can be changed in the same way when the action is generated in the case of privacy protection as described above (generation of general-purpose action).
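As a small illustration of associating each action candidate with an appearance, the mapping and update below are assumed; the disclosure does not specify concrete appearance data.

```python
# Illustrative mapping from avatar actions to appearance presets (assumed names).
ACTION_APPEARANCE = {
    "sleeping": "pajamas",
    "shopping": "casual outfit with a shopping bag",
    "defeating an enemy": "battle gear",
}

def apply_autonomous_action(avatar, action):
    """Reflect the generated action and its associated appearance on the avatar."""
    avatar["current_action"] = action
    avatar["appearance"] = ACTION_APPEARANCE.get(action, avatar.get("appearance"))
    return avatar

print(apply_autonomous_action({"appearance": "default"}, "shopping"))
```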
  • each user terminal 10 may generate a virtual space and generate and display an image of the user's viewpoint in the virtual space.
  • Information for generating the virtual space is obtained in advance from the virtual space server 20 .
  • each user terminal 10 transmits the information of the operation input by the user, sensing data, etc. to the virtual space server 20 in real time.
  • the virtual space server 20 controls the transmission of the information regarding the movement of the user avatar received from the user terminal 10 to other user terminals 10 .
  • the virtual space server 20 also transmits avatar autonomous control information as needed.
  • (1) An information processing device comprising a control unit that controls the behavior of a virtual object associated with a user in a virtual space according to the user's operation, wherein the control unit generates an action of the virtual object based on sensing data of the user in a real space during a period in which the user does not operate, and performs control to reflect the action on the virtual object.
  • (2) The information processing device according to (1), wherein the sensing data includes information on at least one of a state and a position of the user.
  • (3) The information processing device according to (1) or (2), wherein the control unit generates the action of the virtual object by referring to a database that associates candidates for the action of the virtual object with at least one of one or more states or positions.
  • (4) The information processing device according to (3), wherein the control unit calculates a matching rate between each action candidate defined in the database and the sensing data, and selects one action from the action candidates based on the matching rate.
  • (5) The information processing device in which the control unit selects a general-purpose action as the action of the avatar.
  • (6) The information processing device according to (5), in which the control unit performs control to generate the action.
  • (7) The information processing device in which the control unit randomly selects the general-purpose action from the candidates for each action.
  • (8) The information processing device in which the control unit generates the general-purpose action based on the action history of each avatar in the virtual space.
  • (9) The information processing device in which, when reflecting the action on the virtual object, the control unit performs control to change the appearance of the virtual object to the appearance associated with the action to be generated.
  • (10) The information processing device in which the control unit generates an image of the user's viewpoint in the virtual space and controls transmission of the image to the user terminal.
  • (11) The information processing device according to any one of (1) to (10), further comprising a communication unit, wherein the communication unit receives the sensing data from a user terminal.
  • (12) An information processing method in which a processor controls the behavior of a virtual object associated with a user in a virtual space according to the user's operation, and further, during a period in which the user is not operating, generates the action of the virtual object based on sensing data in the user's real space and performs control to reflect the action in the virtual object.
  • (13) A program that causes a computer to function as a control unit that controls the behavior of a virtual object associated with a user in a virtual space according to the user's operation, wherein the control unit generates the action of the virtual object based on sensing data of the user in a real space during a period in which the user does not operate, and performs control to reflect the action in the virtual object.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

[Problem] To provide an information processing device, an information processing method, and a program which are capable of making a virtual object which is associated with a user in a virtual space act more naturally even when not operated by the user. [Solution] An information processing device comprising a control unit that controls the behavior of a virtual object which is associated with a user in a virtual space, in accordance with operations by the user, wherein during a period of no operation by the user, the control unit generates behavior of the virtual object on the basis of sensing data pertaining to the user in real space and reflects the behavior in the virtual object.

Description

Information processing device, information processing method, and program
The present disclosure relates to an information processing device, an information processing method, and a program.
In recent years, the use of virtual spaces, which are virtual worlds connected via the Internet, has become widespread. A user can use a character that serves as the user's alter ego (also called an avatar) in a virtual space and communicate with users all over the world via a network. Avatars are represented by, for example, two-dimensional or three-dimensional CG (Computer Graphics).
Regarding the use of virtual space, for example, Patent Document 1 below discloses a technique for reflecting a communication participant's real-world actions, and the objects the participant holds, in that participant's avatar in the virtual space.

Patent Document 1: JP 2009-140492 A
However, during a non-operation period in which the user performs no operation, unnatural phenomena may occur, such as the avatar (virtual object) disappearing from the virtual space or not moving at all in the virtual space, which may give other users a sense of incongruity toward the virtual space.
Therefore, the present disclosure proposes an information processing device, an information processing method, and a program capable of causing a virtual object associated with a user in a virtual space to behave more naturally even when the user is not operating.
According to the present disclosure, an information processing device is proposed that includes a control unit that controls the behavior of a virtual object associated with a user in a virtual space according to the user's operation, wherein the control unit generates behavior of the virtual object based on the user's sensing data in the real space during a period in which the user is not operating, and performs control to reflect the behavior in the virtual object.

According to the present disclosure, an information processing method is proposed that includes a processor controlling the behavior of a virtual object associated with a user in a virtual space according to the user's operation, and further includes, during a period in which the user is not operating, generating behavior of the virtual object based on the user's sensing data in the real space and performing control to reflect the behavior in the virtual object.

According to the present disclosure, a program is proposed that causes a computer to function as a control unit that controls the behavior of a virtual object associated with a user in a virtual space according to the user's operation, wherein the control unit generates behavior of the virtual object based on the user's sensing data in the real space during a period in which the user is not operating, and performs control to reflect the behavior in the virtual object.
FIG. 1 is a diagram describing an overview of an information processing system according to an embodiment of the present disclosure. FIG. 2 is a diagram illustrating how the user's sensing data in the real space is reflected in the behavior of the user's avatar in the virtual space according to the present embodiment. FIG. 3 is a block diagram showing an example of the configuration of a user terminal according to the present embodiment. FIG. 4 is a block diagram showing an example of the configuration of a management server according to the present embodiment. FIG. 5 is a diagram showing an example of an avatar behavior database according to the present embodiment. FIG. 6 is a sequence diagram showing an example of the flow of operation processing according to the present embodiment. FIG. 7 is a diagram illustrating an example of expression of an avatar's autonomous action restricted according to the privacy level according to the present embodiment. FIG. 8 is a configuration diagram explaining generation of a general-purpose action according to a modification of the present embodiment.
Preferred embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. In the present specification and drawings, constituent elements having substantially the same functional configuration are denoted by the same reference numerals, and redundant description is omitted.
The description will be given in the following order.
1. Overview of an information processing system according to an embodiment of the present disclosure
2. Configuration examples
 2-1. Configuration example of the user terminal 10
 2-2. Configuration example of the virtual space server 20
3. Operation processing
4. Modification
5. Supplement
<<1. Overview of an information processing system according to an embodiment of the present disclosure>>

An information processing system according to an embodiment of the present disclosure relates to control of a virtual object that is associated with a user in a virtual space and serves as the user's alter ego. A virtual object serving as the user's alter ego is, for example, a humanoid or non-human character represented by two-dimensional or three-dimensional CG, and is also called an avatar. In recent years, communication in virtual space has become widespread; beyond simple communication such as games and conversations, it now includes various forms of communication, including business uses such as live streaming by artists and trading of in-game content such as 3D models. There is also a trend toward holding various events, such as exhibitions that until now were held in the real world, in virtual space using avatars without visiting the actual site, and virtual space is attracting attention as a second living space after the real space. Such a virtual world on the Internet that virtualizes the real space is commonly called the Metaverse.
FIG. 1 is a diagram explaining an overview of an information processing system according to an embodiment of the present disclosure. As shown in FIG. 1, the information processing system according to the present embodiment includes one or more user terminals 10 (10A, 10B, 10C, ...) and a virtual space server 20 (an example of an information processing device) that manages a virtual space.
The user terminal 10 is an information processing terminal used by the user. The user terminal 10 transmits information on operation inputs by the user and sensing data to the virtual space server 20. The user terminal 10 also performs control for displaying the video of the user's viewpoint in the virtual space received from the virtual space server 20. The user's viewpoint may be the viewpoint of the user's avatar in the virtual space, or a viewpoint whose field of view includes the avatar's appearance.
The user terminal 10 can be realized by a smartphone, a tablet terminal, a PC (personal computer), an HMD (Head Mounted Display) worn on the head, a projector, a television device, a game machine, or the like. The HMD may have a non-transmissive display unit that covers the entire field of view, or may have a transmissive display unit. Examples of HMDs having a non-transmissive display unit include glasses-type devices having a so-called AR (Augmented Reality) display function that superimposes and displays virtual objects in the real space. The HMD may also be a device whose display unit can be arbitrarily switched between a non-transmissive type and a transmissive type.

For example, by using a non-transmissive HMD that covers the entire field of view as the user terminal 10, the user can experience the virtual space through VR (Virtual Reality). The display unit of the HMD includes a left-eye display and a right-eye display, allowing the user to stereoscopically view the video of the user's viewpoint in the virtual space and obtain a more realistic sense of immersion in the virtual space.
The virtual space server 20 is an information processing device that generates and controls the virtual space and generates and distributes video from arbitrary viewpoints in the virtual space. The virtual space server 20 may be realized by a single device or by a system composed of a plurality of servers. Various 2D or 3D virtual objects are placed in the virtual space. An example of a virtual object is each user's avatar. The virtual space server 20 can control each user's avatar in real time based on the information received from each user terminal 10. Each user can view the video of the user's viewpoint (for example, the viewpoint of the user's avatar) in the virtual space using the user terminal 10 and communicate with other users via the avatar. The virtual space server 20 can also control transmission of the voice received from the user terminal 10 (the user's uttered voice) to the user terminal 10 of another user whose avatar is near the user's avatar. This enables voice conversations between avatars in the virtual space. Conversation between avatars is not limited to voice and may be conducted in text.
 (課題の整理)
 仮想空間に配置されるアバター(仮想オブジェクト)はユーザによりリアルタイムで操作されるが、ユーザがログアウトした場合や操作しなくなった場合等、アバターが制御されなくなると、アバターが仮想空間から突然消えたり、全く動かない状態に陥ってしまったりする。このような、実空間であれば不自然と言える現象が発生すると、仮想空間への違和感を他のユーザへ与えてしまう恐れがある。特に第二の生活空間として利用されるメタバースの場合、アバターが突然消失したり、全く動かなくなるような不自然な状態は好ましくない。
(Organization of issues)
An avatar (virtual object) placed in the virtual space is operated by the user in real time. However, when the avatar is no longer controlled, for example because the user has logged out or has stopped operating it, the avatar may suddenly disappear from the virtual space or fall into a completely motionless state. When such a phenomenon occurs, which would be unnatural in real space, other users may be given a sense of incongruity toward the virtual space. In particular, in the case of a metaverse used as a second living space, an unnatural state in which an avatar suddenly disappears or stops moving entirely is not desirable.
 そこで、本開示による情報処理システムでは、ユーザが非操作の場合にも、仮想空間においてユーザに対応付けられる仮想オブジェクトであるアバターに、より自然な行動をさせることを可能とする。 Therefore, in the information processing system according to the present disclosure, it is possible to cause the avatar, which is a virtual object associated with the user in the virtual space, to behave more naturally even when the user does not operate.
 すなわち、仮想空間サーバ20は、ユーザが非操作の期間中はアバターに自律行動をさせることで、不自然な状態を回避する。ただし、単純なオートパイロットで一律に行動制御するだけでは、アバターのより自然な行動表現として不十分である。これに対し、本実施形態では、非操作の期間中に、実空間におけるユーザのセンシングデータに基づいてユーザに対応付けられるアバターの行動を生成し、アバターの行動に反映させる制御を行う。これにより、アバターのより自然な自律行動を実現し、かつ、実空間におけるユーザの行動がユーザのアバターに反映されることで、実空間での自身の状態と比較して仮想空間での自身のアバターの自律行動へのユーザの違和感を低減することができる。 That is, the virtual space server 20 avoids an unnatural state by causing the avatar to act autonomously while the user is not operating it. However, uniform behavior control by a simple autopilot is not sufficient for a more natural expression of the avatar's behavior. In the present embodiment, therefore, during a non-operation period, the behavior of the avatar associated with the user is generated based on the user's sensing data in the real space, and control is performed to reflect it in the avatar's behavior. This realizes more natural autonomous behavior of the avatar, and because the user's behavior in the real space is reflected in the user's avatar, the user's sense of incongruity toward the autonomous behavior of his or her own avatar in the virtual space, as compared with his or her own state in the real space, can be reduced.
 図2は、本実施形態による実空間でのユーザのセンシングデータを仮想空間におけるユーザのアバター4の行動に反映させることを説明する図である。図2に示すように、例えば実空間で買い物をしている際のユーザの状態が各種センサによりセンシングされ、センシングデータがユーザ端末10から仮想空間サーバ20に送信され、仮想空間のアバター4の行動に反映される。例えば仮想空間サーバ20は、センシングデータから「買い物行動」を生成し、アバター4の行動に反映させる。より具体的には、仮想空間サーバ20は、仮想空間の所定の店で所定の商品を購入するよう、アバター4の行動を制御する。購入する商品は、ユーザが購入予定/お気に入りリストに予め入れていた商品であってもよいし、ユーザの趣味嗜好、仮想空間での行動履歴、タスク等に基づいて適宜決定されてもよい。若しくは、後述するように、自律行動に対するサービス側からの報酬として、無償で所定のアイテムを購入できるようにしてもよい。また、アバターが仮想空間の店舗に出入りしたり、多数の店舗をウィンドウショッピングして回ったり等、実際の購入は行わない買い物行動であってもよい。 FIG. 2 is a diagram explaining how the user's sensing data in the real space is reflected in the behavior of the user's avatar 4 in the virtual space according to this embodiment. As shown in FIG. 2, the state of the user while shopping in the real space, for example, is sensed by various sensors, and the sensing data is transmitted from the user terminal 10 to the virtual space server 20 and reflected in the behavior of the avatar 4 in the virtual space. For example, the virtual space server 20 generates a "shopping behavior" from the sensing data and reflects it in the behavior of the avatar 4. More specifically, the virtual space server 20 controls the behavior of the avatar 4 so that the avatar purchases a predetermined product at a predetermined store in the virtual space. The product to be purchased may be a product that the user has previously added to a purchase-plan/favorites list, or may be determined as appropriate based on the user's tastes and preferences, action history in the virtual space, tasks, and the like. Alternatively, as will be described later, a predetermined item may be made purchasable free of charge as a reward from the service side for the autonomous behavior. The shopping behavior may also be behavior that does not involve an actual purchase, such as the avatar entering and leaving stores in the virtual space or window-shopping around a number of stores.
 このように、ユーザが非操作の期間中であっても、仮想空間のアバターが自然な行動を自律的に実行する、他ユーザからの違和感が低減される。また、ユーザとしても、自身の実空間での行動がユーザのアバターに反映されることで、アバターの自律行動に対する違和感が低減する。 In this way, even while the user is not operating, the avatar in the virtual space autonomously performs natural actions, which reduces the discomfort felt by other users. In addition, since the user's behavior in the real space is reflected in the user's avatar, the user's sense of incongruity with respect to the autonomous behavior of the avatar is reduced.
 以上、本開示の一実施形態による情報処理システムの概要について説明した。続いて、本実施形態による情報処理システムに含まれる各装置の構成について図面を参照して説明する。 The outline of the information processing system according to one embodiment of the present disclosure has been described above. Next, the configuration of each device included in the information processing system according to this embodiment will be described with reference to the drawings.
 <<2.構成例>>
 <2-1.ユーザ端末10の構成例>
 図3は、本実施形態によるユーザ端末10の構成の一例を示すブロック図である。図3に示すように、ユーザ端末10は、通信部110、制御部120、操作入力部130、モーションセンサ140、位置測位部150、表示部160、音声出力部170、記憶部180を有する。ユーザ端末10は、例えば透過型または非透過型のHMD、スマートフォン、タブレット端末、スマートウォッチやスマートバンド等のウェアラブルデバイスにより実現されてもよい。
<<2. Configuration example >>
<2-1. Configuration example of user terminal 10>
FIG. 3 is a block diagram showing an example of the configuration of the user terminal 10 according to this embodiment. As shown in FIG. 3 , the user terminal 10 has a communication section 110 , a control section 120 , an operation input section 130 , a motion sensor 140 , a positioning section 150 , a display section 160 , an audio output section 170 and a storage section 180 . The user terminal 10 may be implemented by, for example, a wearable device such as a transparent or non-transparent HMD, smart phone, tablet terminal, smart watch, or smart band.
 (通信部110)
 通信部110は、有線または無線により、仮想空間サーバ20と通信接続してデータの送受信を行う。通信部110は、例えば有線/無線LAN(Local Area Network)、Wi-Fi(登録商標)、Bluetooth(登録商標)、赤外線通信、または携帯通信網(4G(第4世代の移動体通信方式)、5G(第5世代の移動体通信方式))等を用いた通信を行い得る。
(Communication unit 110)
The communication unit 110 communicates with the virtual space server 20 by wire or wirelessly to transmit and receive data. The communication unit 110 is, for example, a wired/wireless LAN (Local Area Network), Wi-Fi (registered trademark), Bluetooth (registered trademark), infrared communication, or a mobile communication network (4G (fourth generation mobile communication system), Communication using 5G (fifth generation mobile communication system) or the like can be performed.
 (制御部120)
 制御部120は、演算処理装置および制御装置として機能し、各種プログラムに従ってユーザ端末10内の動作全般を制御する。制御部120は、例えばCPU(Central Processing Unit)、マイクロプロセッサ等の電子回路によって実現される。また、制御部120は、使用するプログラムや演算パラメータ等を記憶するROM(Read Only Memory)、及び適宜変化するパラメータ等を一時記憶するRAM(Random Access Memory)を含んでいてもよい。
(control unit 120)
The control unit 120 functions as an arithmetic processing device and a control device, and controls overall operations within the user terminal 10 according to various programs. The control unit 120 is realized by an electronic circuit such as a CPU (Central Processing Unit), a microprocessor, or the like. The control unit 120 may also include a ROM (Read Only Memory) that stores programs to be used, calculation parameters, and the like, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate.
 本実施形態による制御部120は、仮想空間サーバ20から送信(例えばストリーミング配信)された、仮想空間におけるユーザ視点の映像を、表示部160に表示する制御を行う。また、制御部120は、仮想空間サーバ20から、上記ユーザ視点の映像と共に送信される音声信号を、音声出力部170から再生する制御も行う。また、制御部120は、操作入力部130や、モーションセンサ140、位置測位部150により取得された情報を、通信部110から仮想空間サーバ20に送信する制御を行う。例えば操作入力部130からは様々な操作情報が入力され、仮想空間に対するユーザ操作の入力情報として、仮想空間サーバ20に送信される。また、モーションセンサ140により取得されたモーションデータは、アバターの位置、姿勢(顔の向き等)を制御する情報として、仮想空間サーバ20に送信され得る。なお、ここでは一例として、ユーザが仮想空間の映像を視聴する装置(アバターの操作に用いる装置)と、アバター非操作の期間中にアバターの自律行動を生成するために用いられるユーザの実空間のセンシングデータを仮想空間サーバ20に送信する装置とが同じ装置である場合を前提として説明しているが、これらは異なる装置であってもよい。 The control unit 120 according to the present embodiment performs control to display on the display unit 160 the video from the user's viewpoint in the virtual space, which is transmitted (for example, streamed) from the virtual space server 20 . The control unit 120 also controls the reproduction of the audio signal transmitted from the virtual space server 20 together with the video of the user's viewpoint from the audio output unit 170 . Further, the control unit 120 controls transmission of information acquired by the operation input unit 130 , the motion sensor 140 , and the positioning unit 150 from the communication unit 110 to the virtual space server 20 . For example, various operation information is input from the operation input unit 130 and transmitted to the virtual space server 20 as input information for user operations on the virtual space. Also, motion data acquired by the motion sensor 140 can be transmitted to the virtual space server 20 as information for controlling the position and posture (orientation of the face, etc.) of the avatar. Here, as an example, a device (device used to operate the avatar) for the user to view the video in the virtual space and a device for the user's real space used to generate the autonomous behavior of the avatar during the non-operating period of the avatar. Although the description is based on the assumption that the device that transmits the sensing data to the virtual space server 20 is the same device, these may be different devices.
 また、本実施形態による制御部120は、状態認識部121としても機能する。状態認識部121は、モーションセンサ140により取得されたユーザのセンシングデータに基づいて、ユーザの状態を認識する。ユーザの状態とは、例えば、歩く、走る、立つ、座る、寝る等である。状態認識部121により認識されたユーザの状態は、制御部120により仮想空間サーバ20に送信され、仮想空間サーバ20において、アバターの自律行動を生成する際に用いられる。また、位置測位部150により取得された位置情報も、仮想空間サーバ20において、アバターの自律行動を生成する際に用いられる。制御部120は、ユーザの状態を送信する際に、併せて位置情報を仮想空間サーバ20に送信するようにしてもよいし、位置の変化(移動)を検出した際に位置情報を送信するようにしてもよい。また、制御部120は、仮想空間のアバターをユーザが操作していない非操作の期間中に、ユーザの状態および位置情報を仮想空間サーバ20に送信するようにしてもよい。また、制御部120は、位置情報と地図情報を組み合わせて、ユーザが居る場所の名称を特定し、仮想空間サーバ20に送信してもよい。場所の名称は、一般的な名称としてもよい。例えばユーザが「〇〇市の〇〇公園」に居ることが特定された場合、単に「公園」とだけ通知するようにしてもよい。これによりユーザのプライバシーを保護し得る。地図情報は、予め記憶部180に記憶され得る。また、地図情報は、屋外の地図情報に限られず、学校内、会社内、デパート内、自宅内等の屋内の地図情報も含まれる。制御部120は、位置情報から、ユーザが自宅内の寝室やリビング等、いずれの部屋に居るかも特定し得る。 The control unit 120 according to this embodiment also functions as a state recognition unit 121. The state recognition unit 121 recognizes the user's state based on the user's sensing data acquired by the motion sensor 140 . The state of the user is, for example, walking, running, standing, sitting, or sleeping. The state of the user recognized by the state recognition unit 121 is transmitted to the virtual space server 20 by the control unit 120, and used in the virtual space server 20 when generating the autonomous action of the avatar. The location information acquired by the location positioning unit 150 is also used in the virtual space server 20 when generating autonomous actions of the avatar. The control unit 120 may transmit the location information to the virtual space server 20 together with the state of the user, or may transmit the location information when a change (movement) in the location is detected. can be Also, the control unit 120 may transmit the user's state and location information to the virtual space server 20 during a non-operation period in which the user does not operate the avatar in the virtual space. Also, the control unit 120 may specify the name of the place where the user is by combining the positional information and the map information, and transmit the name to the virtual space server 20 . The place name may be a general name. For example, if it is specified that the user is in "XX park in XX city", the user may simply be notified of "park". This may protect the user's privacy. Map information may be stored in the storage unit 180 in advance. The map information is not limited to outdoor map information, and includes indoor map information such as inside a school, inside a company, inside a department store, inside one's home, and the like. Control unit 120 can also identify which room the user is in, such as a bedroom or a living room, from the position information.
 なお、制御部120は、通信部110から取得した外部のセンサ(例えばユーザの周囲に設置されているカメラや、ユーザ端末10と別体でユーザに装着されているモーションセンサ等)により取得されたユーザのセンシングデータ(撮像画像、モーションデータ)を、ユーザの状態の認識や場所の特定に用いてもよいし、そのまま仮想空間サーバ20に送信してもよい。また、制御部120は、ユーザが把持するコントローラから受信した操作情報を、仮想空間サーバ20に送信してもよい。 Note that the control unit 120 may use the user's sensing data (captured images, motion data) acquired via the communication unit 110 from an external sensor (for example, a camera installed around the user, or a motion sensor worn by the user separately from the user terminal 10) for recognizing the user's state or specifying the location, or may transmit the data to the virtual space server 20 as it is. The control unit 120 may also transmit operation information received from a controller held by the user to the virtual space server 20.
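As a non-limiting sketch of the state recognition and place-name generalization described above, the following example assumes simple thresholding on motion intensity and keyword-based generalization; the thresholds, class labels, and function names are assumptions introduced here for illustration and are not part of the present disclosure.

```python
# Minimal sketch (assumptions for illustration): recognize a coarse user state from
# motion intensity and generalize a specific place name before sending it to the server.
from dataclasses import dataclass

@dataclass
class MotionSample:
    acceleration_norm: float  # magnitude of acceleration in m/s^2

def recognize_state(samples: list[MotionSample]) -> str:
    """Very rough activity recognition based on mean motion intensity."""
    mean_acc = sum(s.acceleration_norm for s in samples) / len(samples)
    if mean_acc > 3.0:
        return "run"
    if mean_acc > 1.0:
        return "walk"
    return "sit"

def generalize_place(place_name: str) -> str:
    """Replace a specific place name with a generic one to protect privacy."""
    for generic in ("park", "shop", "station", "school", "office", "home"):
        if generic in place_name.lower():
            return generic
    return "outdoors"

# Only the coarse state and the generic place name are sent to the virtual space server.
payload = {"state": recognize_state([MotionSample(1.4), MotionSample(1.1)]),
           "place": generalize_place("XX Park in XX City")}
```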
 (操作入力部130)
 操作入力部130は、ユーザによる操作指示を受付け、その操作内容を制御部120に出力する。操作入力部130は、例えばタッチセンサ、圧力センサ、若しくは近接センサであってもよい。あるいは、操作入力部130は、ボタン、スイッチ、およびレバーなど、物理的構成であってもよい。
(Operation input unit 130)
Operation input unit 130 receives an operation instruction from the user and outputs the operation content to control unit 120 . The operation input unit 130 may be, for example, a touch sensor, a pressure sensor, or a proximity sensor. Alternatively, the operation input unit 130 may be a physical configuration such as buttons, switches, and levers.
 (モーションセンサ140)
 モーションセンサ140は、ユーザの動きをセンシングする機能を有する。より具体的には、モーションセンサ140は、加速度センサ、角速度センサ、および地磁気センサを有していてもよい。さらに、モーションセンサ140は、3軸ジャイロセンサ、3軸加速度センサ、および3軸地磁気センサの合計9軸を検出可能なセンサであってもよい。ユーザの動きとは、ユーザの身体の動きや頭部の動きが挙げられる。より具体的には、モーションセンサ140は、ユーザが身に着けるユーザ端末10の動きを、ユーザの動きとしてセンシングする。例えばユーザ端末10がHMDにより構成され、頭部に装着されている場合、モーションセンサ140は、ユーザの頭部の動きをセンシングできる。また、例えばユーザ端末10がスマートフォンにより構成され、ポケットや鞄に入れられた状態でユーザが出歩いた場合、モーションセンサ140は、ユーザの身体の動きをセンシングできる。また、モーションセンサ140は、ユーザ端末10と別体で構成され、ユーザに装着されるウェアラブルデバイスであってもよい。
(motion sensor 140)
The motion sensor 140 has a function of sensing the motion of the user. More specifically, motion sensor 140 may have an acceleration sensor, an angular velocity sensor, and a geomagnetic sensor. Furthermore, the motion sensor 140 may be a sensor capable of detecting a total of 9 axes, including a 3-axis gyro sensor, a 3-axis acceleration sensor, and a 3-axis geomagnetic sensor. The motion of the user includes motion of the user's body and motion of the head. More specifically, the motion sensor 140 senses the movement of the user terminal 10 worn by the user as the movement of the user. For example, when the user terminal 10 is configured by an HMD and worn on the head, the motion sensor 140 can sense the movement of the user's head. Further, for example, when the user terminal 10 is configured by a smart phone and the user goes out while the user terminal 10 is in a pocket or bag, the motion sensor 140 can sense the movement of the user's body. Also, the motion sensor 140 may be a wearable device configured separately from the user terminal 10 and worn by the user.
 (位置測位部150)
 位置測位部150は、ユーザの現在位置を取得する機能を有する。なお本実施形態では、ユーザがユーザ端末10を所持していることを前提とし、ユーザ端末10の位置をユーザの現在位置とみなす。
(Position positioning unit 150)
The positioning unit 150 has a function of acquiring the current position of the user. In this embodiment, it is assumed that the user possesses the user terminal 10, and the position of the user terminal 10 is regarded as the current position of the user.
 具体的には、位置測位部150は、ユーザ端末10の絶対的または相対的な位置を算出する。例えば、位置測位部150は、外部からの取得信号に基づいて現在位置を測位してもよい。例えば人工衛星からの電波を受信して、ユーザ端末10が存在している位置を検知するGNSS(Global Navigation Satellite System)が用いられてもよい。また、GNSSの他、Wi-Fi(登録商標)、Bluetooth(登録商標)、携帯電話・PHS・スマートフォン等との送受信や、近距離通信等により位置を検知する方法が用いられてもよい。また、位置測位部150は、加速度センサや角速度センサ等の検出結果に基づいて相対的な変化を示す情報を推定してもよい。位置測位部150は、上記各種方法により、屋外位置測位、および屋内位置測位を行い得る。なお、位置には高度が含まれていてもよい。位置測位部150は、高度計を含んでいてもよい。 Specifically, the positioning unit 150 calculates the absolute or relative position of the user terminal 10 . For example, the position positioning unit 150 may position the current position based on an acquired signal from the outside. For example, a GNSS (Global Navigation Satellite System) that detects the position of the user terminal 10 by receiving radio waves from an artificial satellite may be used. In addition to GNSS, a method of detecting a position by transmitting/receiving with Wi-Fi (registered trademark), Bluetooth (registered trademark), a mobile phone/PHS/smartphone, or the like, short-distance communication, or the like may be used. Also, the positioning unit 150 may estimate information indicating relative changes based on the detection results of an acceleration sensor, an angular velocity sensor, or the like. The positioning unit 150 can perform outdoor positioning and indoor positioning using the various methods described above. Note that the position may include an altitude. Position positioning unit 150 may include an altimeter.
 (表示部160)
 表示部160は、仮想空間におけるユーザ視点の映像(画像)を表示する機能を有する。例えば表示部160は、液晶ディスプレイ(LCD:Liquid Crystal Display)、有機EL(Electro Luminescence)ディスプレイなどの表示パネルであってもよい。
(Display unit 160)
The display unit 160 has a function of displaying a video (image) of the user's viewpoint in the virtual space. For example, the display unit 160 may be a display panel such as a liquid crystal display (LCD) or an organic EL (Electro Luminescence) display.
 (音声出力部170)
 音声出力部170は、制御部120の制御に従って、音声信号を出力する。音声出力部170は、例えばヘッドフォン、イヤフォン、若しくは骨伝導スピーカとして構成されてもよい。
(Audio output unit 170)
Audio output section 170 outputs an audio signal under the control of control section 120 . The audio output unit 170 may be configured as headphones, earphones, or bone conduction speakers, for example.
 (記憶部180)
 記憶部180は、制御部120の処理に用いられるプログラムや演算パラメータ等を記憶するROM(Read Only Memory)、および適宜変化するパラメータ等を一時記憶するRAM(Random Access Memory)により実現される。本実施形態による記憶部180には、例えば状態認識のためのアルゴリズムが格納されていてもよい。
(storage unit 180)
The storage unit 180 is implemented by a ROM (Read Only Memory) that stores programs, calculation parameters, and the like used in the processing of the control unit 120, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate. The storage unit 180 according to this embodiment may store, for example, an algorithm for state recognition.
 以上、ユーザ端末10の構成について具体的に説明したが、本開示によるユーザ端末10の構成は図3に示す例に限定されない。 Although the configuration of the user terminal 10 has been specifically described above, the configuration of the user terminal 10 according to the present disclosure is not limited to the example shown in FIG.
 例えば、ユーザ端末10は、複数の装置により実現されてもよい。具体的には、モーションセンサ140や位置測位部150と、制御部120とが別体で構成されてもよい。 For example, the user terminal 10 may be realized by multiple devices. Specifically, the motion sensor 140 or the positioning unit 150 and the control unit 120 may be configured separately.
 また、ユーザ端末10は、さらに様々なセンサを有していてもよい。例えば、ユーザ端末10は、カメラ、マイクロホン、生体センサ(脈拍、心拍、発汗、血圧、体温、呼吸、筋電値、脳波等の検知部)、視線検出センサ、測距センサ等を有し、取得した情報を仮想空間サーバ20に送信してもよい。また、状態認識部121は、モーションデータだけでなく、例えば生体センサにより取得された生体データも考慮してユーザの状態(走る、歩く、寝る等)を認識してもよい。また、制御部120は、カメラにより取得されたユーザ周辺の撮像画像を解析し、ユーザの位置(ユーザが居る場所の名称)を特定してもよい。 The user terminal 10 may also have various other sensors. For example, the user terminal 10 may have a camera, a microphone, a biosensor (a detection unit for pulse, heart rate, perspiration, blood pressure, body temperature, respiration, myoelectric values, brain waves, and the like), a gaze detection sensor, a distance measurement sensor, and the like, and may transmit the acquired information to the virtual space server 20. The state recognition unit 121 may also recognize the user's state (running, walking, sleeping, etc.) in consideration of not only the motion data but also, for example, biometric data acquired by the biosensor. In addition, the control unit 120 may analyze a captured image of the user's surroundings acquired by the camera and specify the user's position (the name of the place where the user is).
 <2-2.仮想空間サーバ20の構成例>
 図4は、本実施形態による仮想空間サーバ20の構成の一例を示すブロック図である。図4に示すように、仮想空間サーバ20は、通信部210、制御部220、および記憶部230を有する。
<2-2. Configuration example of virtual space server 20>
FIG. 4 is a block diagram showing an example of the configuration of the virtual space server 20 according to this embodiment. As shown in FIG. 4, the virtual space server 20 has a communication section 210, a control section 220, and a storage section 230. FIG.
 (通信部210)
 通信部210は、有線または無線により外部装置とデータの送受信を行う。通信部210は、例えば有線/無線LAN(Local Area Network)、Wi-Fi(登録商標)、Bluetooth(登録商標)、携帯通信網(LTE(Long Term Evolution)、4G(第4世代の移動体通信方式)、5G(第5世代の移動体通信方式))等を用いて、ユーザ端末10と通信接続する。
(Communication unit 210)
The communication unit 210 transmits and receives data to and from an external device by wire or wirelessly. The communication unit 210 is, for example, wired/wireless LAN (Local Area Network), Wi-Fi (registered trademark), Bluetooth (registered trademark), mobile communication network (LTE (Long Term Evolution), 4G (4th generation mobile communication method), 5G (fifth generation mobile communication method)), etc., to connect to the user terminal 10 for communication.
 (制御部220)
 制御部220は、演算処理装置および制御装置として機能し、各種プログラムに従って仮想空間サーバ20内の動作全般を制御する。制御部220は、例えばCPU(Central Processing Unit)、マイクロプロセッサ等の電子回路によって実現される。また、制御部220は、使用するプログラムや演算パラメータ等を記憶するROM(Read Only Memory)、及び適宜変化するパラメータ等を一時記憶するRAM(Random Access Memory)を含んでいてもよい。
(control unit 220)
The control unit 220 functions as an arithmetic processing device and a control device, and controls overall operations within the virtual space server 20 according to various programs. The control unit 220 is implemented by an electronic circuit such as a CPU (Central Processing Unit), a microprocessor, or the like. The control unit 220 may also include a ROM (Read Only Memory) that stores programs to be used, calculation parameters, and the like, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate.
 また、本実施形態による制御部220は、アバター行動生成部221およびアバター制御部222としても機能する。 The control unit 220 according to this embodiment also functions as an avatar behavior generation unit 221 and an avatar control unit 222.
 アバター行動生成部221は、非操作の期間中に、実空間におけるユーザのセンシングデータに基づいて、アバターの自律行動を生成する機能を有する。また、アバター制御部222は、アバター行動生成部221により生成された自律行動に従ってユーザのアバターを制御する機能を有する。これにより、非操作の期間中であっても、アバターが仮想空間から突然消失したり、全く動かなくなるといった不自然な状況を回避することができる。また、ユーザのセンシングデータとは、例えば、ユーザの状態および位置情報の少なくともいずれかである。センシングデータは、ユーザ端末10から送信され得る。なお、ユーザ端末10からは、モーションセンサ140や位置測位部150で検出された情報がそのまま送信されてもよいし、情報に基づいて認識した認識結果が送信されてもよい。ユーザの状態は、上述したように、モーションデータから認識され得る。かかる認識は、ユーザ端末10で行われてもよいし、仮想空間サーバ20の制御部220で行われてもよい。位置情報は、場所の名称であってもよい。本実施形態では、実空間におけるユーザの状態や位置情報をアバターの自律行動に反映させることで、非操作の期間中におけるアバターの自律行動に対するユーザの違和感を低減することができる。ユーザの違和感とは、例えば実際の自分の行動とは全く関係なく、自分のアバターが勝手な行動を取っている場合に、メタバースと称されるような身近な仮想空間に対して生じる恐れがある違和感である。 The avatar behavior generation unit 221 has a function of generating the avatar's autonomous behavior based on the user's sensing data in the real space during the non-operation period. Also, the avatar control unit 222 has a function of controlling the user's avatar according to the autonomous behavior generated by the avatar behavior generation unit 221 . As a result, it is possible to avoid an unnatural situation in which the avatar suddenly disappears from the virtual space or does not move at all even during a period of non-operation. Also, the user's sensing data is, for example, at least one of the user's state and location information. Sensing data may be transmitted from the user terminal 10 . The information detected by the motion sensor 140 or the positioning unit 150 may be transmitted directly from the user terminal 10, or the recognition result recognized based on the information may be transmitted. A user's state can be recognized from the motion data, as described above. Such recognition may be performed by the user terminal 10 or by the control unit 220 of the virtual space server 20 . The location information may be the name of the place. In this embodiment, by reflecting the user's state and position information in the real space in the avatar's autonomous action, it is possible to reduce the user's sense of incongruity with respect to the avatar's autonomous action during the non-operation period. A user's sense of incongruity may occur in a familiar virtual space called the Metaverse when, for example, the user's avatar behaves arbitrarily and has nothing to do with his or her actual behavior. It is a sense of incongruity.
 以下、本実施形態によるアバター行動の生成について詳述する。 The generation of avatar actions according to this embodiment will be described in detail below.
 アバター行動生成部221は、例えばユーザをセンシングして得られたユーザ状態や位置情報から、アバター行動のデータベースを参照して、アバターの自律行動を生成してもよい。アバター行動のデータベースの一例を図5に示す。図5に示すように、アバター行動と、ユーザの状態や位置とを予め対応付けたデータベースを用いる。状態や位置(場所)は、アバター行動を構成する因子と言える。例えば図5に示す例では、「食事」という行動については、「座る」という状態と、「自宅のリビング」や「レストラン」という位置(場所)の因子で構成される。各アバター行動を構成する因子は、当該アバター行動と適合するユーザ行動における因子であることが望ましい。例えば、図5に示すように、「就寝」というアバター行動の因子としては、適合するユーザ行動である「就寝」を構成する「状態:寝る」、「位置:寝室」が挙げられる。なお、ここで定義されるアバター行動とは、仮想空間でアバターが行える行動である。図5に示すアバター行動と、状態や位置等の各因子との対応関係は一例であって、本実施形態はこれに限定されない。 The avatar action generation unit 221 may refer to a database of avatar actions based on the user state and position information obtained by sensing the user, for example, to generate autonomous actions of the avatar. An example of an avatar behavior database is shown in FIG. As shown in FIG. 5, a database is used in which avatar actions are associated in advance with user states and positions. It can be said that states and positions (locations) are factors that constitute avatar behavior. For example, in the example shown in FIG. 5, the action "eating" is composed of the state of "sitting" and the position (place) factors of "living room at home" and "restaurant". The factors that make up each avatar action are desirably factors in the user action that match the avatar action. For example, as shown in FIG. 5, factors of the avatar behavior "sleeping" include "state: sleep" and "position: bedroom" that constitute the matching user behavior "sleeping". Note that the avatar action defined here is an action that the avatar can perform in the virtual space. The correspondence relationship between the avatar behavior shown in FIG. 5 and each factor such as state and position is an example, and the present embodiment is not limited to this.
 まず、アバター行動生成部221は、ユーザ端末10から取得したユーザ状態や位置情報と、アバター行動のデータベースとをマッチングし、適合率を算出する。例えば、ユーザ端末10から取得した情報が、「状態:歩く」、「位置:ショップ」であった場合、図5のデータベースに挙げるアバター行動のうち「買い物」との適合率が100%、「敵を倒す」との適合率が50%と算出される。「敵を倒す」の状態因子と位置因子のうち、状態因子を満たすためである。この場合、アバター行動生成部221は、完全に対応する(適合率が100%の)アバター行動(ここでは、「買い物」)を、アバターの自律行動に決定してもよい。また、アバター行動生成部221は、完全に対応するアバター行動がなかった場合、対応する状態因子と位置因子の少なくともいずれかを含むアバター行動の候補の中から確率的に決定してもよい。この際、アバター行動生成部221は、より多くの適合する因子を含む候補の選択確率を高めることで、実空間でのユーザ行動に対して、より違和感の無い、自然な自律行動を決定することができる。なお、アバター行動生成部221は、完全に対応する(適合率が100%の)アバター行動がある場合でも、対応する状態因子と位置因子の少なくともいずれかを含むアバター行動の候補の中から確率的に決定するようにしてもよい。 First, the avatar action generation unit 221 matches the user state and position information acquired from the user terminal 10 against the avatar action database and calculates a matching rate. For example, if the information acquired from the user terminal 10 is "state: walking" and "position: shop", the matching rate with "shopping" among the avatar actions listed in the database of FIG. 5 is calculated as 100%, and the matching rate with "defeat the enemy" is calculated as 50%, because only the state factor of the two factors (state and position) of "defeat the enemy" is satisfied. In this case, the avatar action generation unit 221 may determine the completely matching avatar action (with a matching rate of 100%; here, "shopping") as the autonomous action of the avatar. If there is no completely matching avatar action, the avatar action generation unit 221 may determine the action probabilistically from among avatar action candidates that include at least one of the matching state factor and position factor. At this time, by increasing the selection probability of candidates that include more matching factors, the avatar action generation unit 221 can determine a natural autonomous action that causes less incongruity with respect to the user's action in the real space. Note that even when there is a completely matching avatar action (with a matching rate of 100%), the avatar action generation unit 221 may still determine the action probabilistically from among the avatar action candidates that include at least one of the matching state factor and position factor.
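As a non-limiting illustration of the matching described above, the following sketch assumes a small avatar action database and a weighted random selection; the action names, factor values, and the matching-rate formula are assumptions introduced here for explanation only.

```python
import random

# Hypothetical avatar action database: each action is defined by the state and
# position factors that constitute it (cf. the example of FIG. 5).
AVATAR_ACTIONS = {
    "shopping":         {"state": {"walk"},          "position": {"shop"}},
    "defeat the enemy": {"state": {"walk", "stand"}, "position": {"road"}},
    "eat":              {"state": {"sit"},           "position": {"living room", "restaurant"}},
    "sleep":            {"state": {"sleep"},         "position": {"bedroom"}},
}

def matching_rate(factors: dict, state: str, position: str) -> float:
    """Fraction of the action's two factors (state, position) matched by the sensed data."""
    hits = int(state in factors["state"]) + int(position in factors["position"])
    return hits / 2

def generate_autonomous_action(state: str, position: str):
    rates = {name: matching_rate(f, state, position) for name, f in AVATAR_ACTIONS.items()}
    candidates = {name: r for name, r in rates.items() if r > 0}  # at least one factor matches
    if not candidates:
        return None  # fall back to a general-purpose action (described later)
    # Probabilistic selection weighted by the matching rate.
    names, weights = zip(*candidates.items())
    return random.choices(names, weights=weights, k=1)[0]

print(generate_autonomous_action("walk", "shop"))  # most likely "shopping" (rate 1.0 vs 0.5)
```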
 以上説明した状態因子および位置因子とアバター行動とを対応付けたデータベースを用いる手法では、実空間では行われないが仮想空間では行われる行動についても、自律行動として生成することが可能となる。一例として、図5のデータベースに挙げる「敵を倒す」というアバター行動が挙げられる。この行動は、敵を倒すという目的がある仮想空間で頻出する行動ではあるが、実空間では直接対応する行動がない。そこで本実施形態では、例えば「状態:歩く/立つ」、「位置:道路」といった因子で「アバター行動:敵を倒す」を構成する。これにより、実空間では行われず、仮想空間のみで行われる行動であっても、実空間のユーザのセンシングデータに基づいて、アバターの自律行動として選択することが可能となる。 With the above-described method that uses a database that associates state factors and position factors with avatar actions, it is possible to generate autonomous actions even for actions that are not performed in real space but are performed in virtual space. One example is the avatar action of "defeating the enemy" listed in the database of FIG. This behavior occurs frequently in the virtual space with the purpose of defeating the enemy, but there is no direct corresponding behavior in the real space. Therefore, in the present embodiment, for example, factors such as "state: walking/standing" and "position: road" configure "avatar action: defeating enemy". As a result, even an action that is not performed in the real space but is performed only in the virtual space can be selected as the autonomous action of the avatar based on the user's sensing data in the real space.
 また、図5に示す例では、完全に同一の因子によって構成されるアバター行動の候補がないが、完全に同一の因子を持つ複数のアバター行動を定義してもよい。この際、アバター行動生成部221は、完全に対応する(適合率が100%の)複数のアバター行動の候補の中から、確率的に選択する。 Also, in the example shown in FIG. 5, there are no candidates for avatar actions composed of exactly the same factors, but multiple avatar actions having exactly the same factors may be defined. At this time, the avatar action generation unit 221 selects probabilistically from a plurality of avatar action candidates that completely correspond (with a matching rate of 100%).
 また、図5に示す状態のカテゴリや位置のカテゴリは一例であって、本実施形態はこれに限定されない。これらのカテゴリが多様になればなるほど、より適切に、実空間のユーザの行動をアバター行動に反映させることが可能となる。また、アバター行動の候補についても、より多様で、かつ、状態と位置の因子が網羅的に構成されている候補を用意することで、より適切で自然な自律行動の生成が可能となる。 Also, the categories of states and categories of positions shown in FIG. 5 are examples, and the present embodiment is not limited to these. The more diverse these categories are, the more appropriately it is possible to reflect the user's behavior in the real space on the avatar's behavior. In addition, by preparing candidates for avatar actions that are more diverse and comprehensively composed of state and position factors, it is possible to generate more appropriate and natural autonomous actions.
 本実施形態によるアバター行動のデータベースは、複数の異なるサービスの仮想空間において再利用することが可能である。すなわち、図5に示すようなデータベースは、他の仮想空間において、ユーザ非操作の期間中におけるアバターの自律行動の生成の際に用いられてもよい。データベースの情報は、仮想空間同士の連携により共有され得る。また、各仮想空間において不適切な自律行動や、不足している自律行動がある場合、各仮想空間において、定義されるアバター行動を適宜修正、変更、追加することで、各仮想空間に適したデータベースを作成することが可能である。例えば構成する因子はそのままで、アバター行動「敵を倒す」を「畑を耕す」に変更する等が挙げられる。なお、仮想空間同士の連携により、ユーザのアバターは他の仮想空間へ移動することも可能である。また、仮想空間毎(サービス毎)にユーザのアバターが存在する場合も想定される。一例として、ユーザ端末10は、ユーザのセンシングデータを複数の仮想空間サーバ20に送信してもよい。これにより、複数の仮想空間において、各々データベースを参照して、実空間のユーザ行動をユーザのアバターの自律行動に反映させることが可能となる。 The avatar behavior database according to this embodiment can be reused in multiple virtual spaces for different services. That is, a database such as that shown in FIG. 5 may be used in generating an autonomous action of an avatar during a period of non-user operation in another virtual space. Information in the database can be shared by cooperation between virtual spaces. In addition, if there is an inappropriate autonomous action or an insufficient autonomous action in each virtual space, the avatar action defined in each virtual space can be corrected, changed, or added as appropriate to create an appropriate behavior for each virtual space. It is possible to create a database. For example, an avatar action of "defeating an enemy" may be changed to "cultivating a field" without changing the constituent factors. It is also possible for the user's avatar to move to another virtual space by cooperation between the virtual spaces. It is also assumed that a user's avatar exists for each virtual space (for each service). As an example, the user terminal 10 may transmit user sensing data to multiple virtual space servers 20 . This makes it possible to refer to databases in each of a plurality of virtual spaces and reflect the user behavior in the real space on the autonomous behavior of the user's avatar.
 以上、本実施形態では一例として因子を用いてアバターの自律行動を決定(生成)する方法について説明したが、実空間のユーザのセンシングデータを、仮想空間のユーザのアバターに適用できる行動形式に変換できるのであれば、方法は特に限定しない。例えば、実空間のユーザの状態や位置を正確に反映することができる仮想空間が存在するのであれば、検出されたそのままの情報を仮想空間のアバターへと適用してもよい。また、アバターに反映させる「ユーザのセンシングデータ」は、状態や位置に限定されない。また、「状態」は、モーションデータに基づいて認識される状態に限定されない。例えば、ユーザ端末10のマイクロホンで収音されたユーザの発話音声(会話)や、生体情報(心拍数、血圧、体温等)に基づいて認識される状態であってもよい。 In the present embodiment, a method of determining (generating) the autonomous action of the avatar using factors has been described above as an example, but the method is not particularly limited as long as the user's sensing data in the real space can be converted into a behavior format applicable to the user's avatar in the virtual space. For example, if there is a virtual space that can accurately reflect the state and position of the user in the real space, the detected information may be applied to the avatar in the virtual space as it is. Further, the "user's sensing data" reflected in the avatar is not limited to the state or position. Also, the "state" is not limited to a state recognized based on motion data. For example, it may be a state recognized based on the user's uttered voice (conversation) picked up by the microphone of the user terminal 10, or based on biological information (heart rate, blood pressure, body temperature, etc.).
 (記憶部230)
 記憶部230は、制御部220の処理に用いられるプログラムや演算パラメータ等を記憶するROM(Read Only Memory)、および適宜変化するパラメータ等を一時記憶するRAM(Random Access Memory)により実現される。本実施形態により記憶部230は、仮想空間の情報を格納する。
(storage unit 230)
The storage unit 230 is implemented by a ROM (Read Only Memory) that stores programs and calculation parameters used in the processing of the control unit 220, and a RAM (Random Access Memory) that temporarily stores parameters that change as appropriate. According to this embodiment, the storage unit 230 stores information on the virtual space.
 以上、仮想空間サーバ20の構成について具体的に説明したが、本開示による仮想空間サーバ20の構成は図4に示す例に限定されない。例えば、仮想空間サーバ20は、複数の装置により実現されてもよい。 Although the configuration of the virtual space server 20 has been specifically described above, the configuration of the virtual space server 20 according to the present disclosure is not limited to the example shown in FIG. For example, the virtual space server 20 may be realized by multiple devices.
 <<3.動作処理>>
 次に、本実施形態による仮想オブジェクトの処理の流れについて図面を用いて具体的に説明する。図6は、本実施形態による動作処理の流れの一例を示すシーケンス図である。なお、図6に示す処理は、ユーザがアバター操作を行っていない非操作の期間中(例えばログアウトした場合や、仮想空間の映像を表示する画面を閉じた場合、一定時間以上操作を行っていない場合等)に実施され得る。
<<3. Operation processing >>
Next, the flow of virtual object processing according to this embodiment will be specifically described with reference to the drawings. FIG. 6 is a sequence diagram showing an example of the flow of operation processing according to this embodiment. Note that the processing shown in FIG. 6 is performed during a non-operation period in which the user does not operate the avatar (for example, when the user logs out or closes the screen displaying the image of the virtual space, and the operation is not performed for a certain period of time). case, etc.).
 図6に示すように、まず、ユーザ端末10は、各センサにより、ユーザの動きおよび位置を取得する(ステップS103)。具体的には、モーションセンサ140によりユーザの動きが取得され、位置測位部150により位置情報が取得される。 As shown in FIG. 6, the user terminal 10 first acquires the movement and position of the user from each sensor (step S103). Specifically, the motion sensor 140 acquires the motion of the user, and the position measurement unit 150 acquires position information.
 次に、ユーザ端末10の状態認識部121は、ユーザの動き(モーションデータ)に基づいて、ユーザの状態を認識する(ステップS106)。 Next, the state recognition unit 121 of the user terminal 10 recognizes the user's state based on the user's movement (motion data) (step S106).
 次いで、ユーザ端末10は、位置情報(場所の一般名称であってもよい)および状態の認識結果を送信する(ステップS109)。 Next, the user terminal 10 transmits the location information (which may be the general name of the location) and the recognition result of the state (step S109).
 続いて、仮想空間サーバ20は、ユーザ端末10から受信した位置情報および状態の認識結果に基づいて、ユーザのアバター行動を生成する(ステップS121)。なお、仮想空間サーバ20のアバター行動生成部221は、位置情報および状態の認識結果の少なくともいずれかに基づいてアバター行動を生成してもよい。 Next, the virtual space server 20 generates the user's avatar behavior based on the location information and the status recognition result received from the user terminal 10 (step S121). Note that the avatar behavior generator 221 of the virtual space server 20 may generate avatar behavior based on at least one of the position information and the state recognition result.
 そして、仮想空間サーバ20のアバター制御部222は、アバター行動生成部221により生成(選択)されたアバター行動を、ユーザのアバターに適用し、制御する(ステップS124)。これにより、ユーザが非操作の期間中であっても、ユーザのアバターに自律行動させることができ、不自然さを低減することができる。また、実空間のユーザの行動をアバターの自律行動に反映させることで、自身のアバターのユーザの違和感や抵抗感を低減することができる。また、ユーザの動きや位置は、ユーザが所持するユーザ端末10(または身に着けるウェアラブルデバイス)により取得されることで、計測範囲の制限が緩和される。 Then, the avatar control unit 222 of the virtual space server 20 applies the avatar behavior generated (selected) by the avatar behavior generation unit 221 to the user's avatar and controls it (step S124). This allows the user's avatar to act autonomously even during a period in which the user is not operating, thereby reducing unnaturalness. In addition, by reflecting the behavior of the user in the real space in the autonomous behavior of the avatar, it is possible to reduce the sense of incongruity and resistance of the user of his/her own avatar. In addition, the movement and position of the user are acquired by the user terminal 10 (or the wearable device worn by the user), thereby alleviating restrictions on the measurement range.
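Reusing the hypothetical helper from the earlier matching sketch, the server-side handling corresponding to steps S121 and S124 could, as one possible sketch, look like the following; the avatar interface and fallback action are assumptions for illustration.

```python
def on_sensing_data_received(avatar, state: str, place: str) -> None:
    """Sketch of server-side handling corresponding to steps S121 and S124."""
    action = generate_autonomous_action(state, place)  # step S121: generate avatar action
    if action is None:
        action = "stay at home"                        # simple fallback for this sketch
    avatar.apply_action(action)                        # step S124: apply it to the user's avatar
```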
 以上、本実施形態による動作処理の一例について説明した。なお、図6に示す動作処理は一例であって、一部の処理が、異なる順序や並列して実施されてもよい。 An example of operation processing according to the present embodiment has been described above. Note that the operation processing shown in FIG. 6 is an example, and part of the processing may be performed in a different order or in parallel.
 <<4.変形例>>
 <4-1.プライバシーを考慮したアバター行動の生成について>
 上述した実施形態では、実空間におけるユーザの状態や位置を、アバター行動に反映させることができる。しかしながら、仮想空間では不特定多数の他ユーザとコミュニケーションを行うため、プライバシーの考慮も重要となる。上述した実施形態においても、具体的なユーザの現在位置が把握されるわけではないが、状況によっては、より厳密にプライバシーを考慮した上でアバターの自律行動を生成する必要がある場合も想定される。
<<4. Modification>>
<4-1. Generation of avatar behavior considering privacy>
In the embodiment described above, the user's state and position in the real space can be reflected in the avatar's behavior. However, since communication is performed with an unspecified number of other users in the virtual space, consideration of privacy is also important. Even in the above-described embodiment, the user's specific current position is not disclosed as such, but depending on the situation, it may be necessary to generate the autonomous behavior of the avatar with stricter consideration of privacy.
 そこで、仮想空間サーバ20は、例えばアバターの自律行動として定義する各行動にプライバシーレベルを予め設定し、ユーザと他ユーザとの親しさに応じて許可したレベルまで行動を見せる(表示する)ようにしてもよい。下記表1は、アバターの自律行動に設定されるプライバシーレベルの一例である。 Therefore, the virtual space server 20 may preset a privacy level for each action defined as an autonomous action of the avatar, for example, and show (display) the action only up to the level permitted according to the familiarity between the user and the other users. Table 1 below is an example of privacy levels set for autonomous actions of the avatar.
[Table 1: Example of privacy levels set for autonomous actions of the avatar]
 表1では、外出といったプライバシー性の高い「買い物」について、より高いプライバシーレベルが設定されている。プライバシーレベルは、ユーザが任意に設定してもよい。 In Table 1, a higher privacy level is set for "shopping" with high privacy, such as going out. The privacy level may be arbitrarily set by the user.
 また、ユーザは、仮想空間内の他ユーザに対して、どのレベルまでアバターの行動表現を許可するか(見せるか)を決定する。かかる許可は、他ユーザ毎に個別に設定してもよいし、予め他ユーザをグループ分けし、グループ毎に設定してもよい。例えば、親しい関係のユーザに対しては、最も高いプライバシーレベル(例えばレベル3)まで許可し、それ以外の以外の親しくない関係の他ユーザに対しては、最も低いプライバシーレベル(例えばレベル0)まで許可するようにしてもよい。プライバシーレベルによる制限で、アバターの自律行動の候補に該当する行動がない場合、アバター行動生成部221は、汎用行動を選択し得る。汎用行動とは、例えばアバター行動のデータベースに定義された自律行動の候補からランダムに選択される行動である。若しくは、汎用行動として用意された多数の自律行動の候補からランダムに選択される行動であってもよい。 In addition, the user determines to what level the avatar's behavioral expression is permitted (whether to be shown) to other users in the virtual space. Such permission may be individually set for each other user, or may be set for each group by grouping other users in advance. For example, users with close relationships are allowed up to the highest privacy level (e.g., level 3), and other users with no close relationships are allowed up to the lowest privacy level (e.g., level 0). You may allow it. If there is no action that corresponds to the candidates for the autonomous action of the avatar due to restrictions by the privacy level, the avatar action generation unit 221 can select the general-purpose action. A general-purpose action is, for example, an action that is randomly selected from autonomous action candidates defined in an avatar action database. Alternatively, the action may be randomly selected from a large number of autonomous action candidates prepared as general-purpose actions.
 図7は、本実施形態によるプライバシーレベルに応じて制限されるアバターの自律行動の表現例について説明する図である。例えば、ユーザAと親しい間柄のユーザB(アバター4b)に対してはプライバシーレベル3まで許可し、親しくないユーザC(アバター4c)に対してはプライバシーレベル0までの許可となっている場合を想定する。仮想空間サーバ20は、実空間のユーザからセンシングされたデータ(状態:歩く、場所:ショップ)に基づいて、図5に示すデータベースを参照し、ユーザのアバター4aに対して「買い物」の自律行動に決定する。次いで、表1に示すように、「買い物」のプライバシーレベルは「レベル3」であるため、プライバシーレベル3まで許可されているユーザBに対しては、アバター4aが「買い物」をしている様子を見せる。一方、プライバシーレベル0まで許可されているユーザCに対しては、「買い物」の行動のプライバシーレベル(レベル3)までは許可されていないため、アバター4aの汎用行動(例えば単に散歩している、家に居る等)を見せる。より具体的には、仮想空間サーバ20は、ユーザBやユーザCのユーザ端末に送信する各ユーザ視点の映像を生成する際に、アバター4aの自律行動をどのように表示するかを、プライバシーレベルに応じて制御する。 FIG. 7 is a diagram explaining an example of expression of autonomous actions of avatars that are restricted according to the privacy level according to this embodiment. For example, it is assumed that user B (avatar 4b), who has a close relationship with user A, is allowed up to privacy level 3, and user C (avatar 4c), who is not close to user A, is allowed up to privacy level 0. do. The virtual space server 20 refers to the database shown in FIG. 5 based on the data sensed from the user in the real space (state: walking, place: shop), and performs the autonomous action of "shopping" for the user's avatar 4a. to decide. Next, as shown in Table 1, since the privacy level of "shopping" is "level 3", for user B who is permitted up to privacy level 3, the avatar 4a is "shopping". show On the other hand, for user C, who is permitted up to privacy level 0, the behavior of "shopping" is not permitted up to privacy level (level 3). at home, etc.). More specifically, the virtual space server 20 determines how the autonomous behavior of the avatar 4a is displayed when generating the video of each user's viewpoint to be transmitted to the user terminals of the user B and the user C. control accordingly.
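The following sketch illustrates one possible, non-limiting way to realize the privacy-level check of FIG. 7; the level values and per-viewer permissions mirror the example above, and all names and the fallback rule are assumptions for illustration.

```python
import random

# Hypothetical privacy levels per autonomous action (cf. Table 1) and the levels the
# avatar's owner (user A) has permitted to individual viewers (cf. the FIG. 7 example).
ACTION_PRIVACY_LEVEL = {"shopping": 3, "sleep": 2, "eat": 1, "take a walk": 0}
ALLOWED_LEVEL_PER_VIEWER = {"user_b": 3, "user_c": 0}

def generic_action() -> str:
    """Fallback shown when the real action is not permitted (random level-0 candidate here)."""
    return random.choice([a for a, lv in ACTION_PRIVACY_LEVEL.items() if lv == 0])

def action_shown_to(viewer: str, actual_action: str) -> str:
    """Return the action displayed in the viewpoint video rendered for a given viewer."""
    allowed = ALLOWED_LEVEL_PER_VIEWER.get(viewer, 0)
    if ACTION_PRIVACY_LEVEL.get(actual_action, 0) <= allowed:
        return actual_action        # e.g. user B sees the avatar "shopping"
    return generic_action()         # e.g. user C sees a generic action instead

print(action_shown_to("user_b", "shopping"), action_shown_to("user_c", "shopping"))
```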
 (汎用行動について)
 上述した汎用行動は、ランダムに選択する方法以外にも、仮想空間における各アバターの行動履歴を利用した学習ベースに基づく選択方法であってもよい。図8は、本実施形態の変形例による汎用行動の生成について説明する構成図である。図8に示すアバター行動履歴DB182は、仮想空間における全アバターの行動の履歴(時間軸の情報を含む)を蓄積したデータベースである。アバター行動履歴DB182に蓄積される情報は、例えば実空間におけるユーザの行動が反映されたアバターの自律行動であってもよい。アバター行動生成部221は、汎用行動を生成する際、現在の時間軸情報と、アバター行動履歴DB182を参照し、対応する時間における各アバターの自律行動の割合情報を取得する。そしてアバター行動生成部221は、より割合の高い行動を汎用行動として決定(割合情報に基づいて確率的に選択)する。これにより、大多数のアバターが行う行動と同じ行動をアバターにさせることができ、より自然で目立たずに、ユーザのプライバシーも保護することができる。
(Regarding generic actions)
The above general-purpose action may be a selection method based on a learning base using the action history of each avatar in the virtual space, in addition to the method of selecting at random. FIG. 8 is a configuration diagram illustrating generation of a general-purpose action according to a modified example of this embodiment. The avatar action history DB 182 shown in FIG. 8 is a database that accumulates the action history (including time axis information) of all avatars in the virtual space. The information accumulated in the avatar action history DB 182 may be, for example, avatar autonomous actions that reflect the user's actions in real space. When generating a general-purpose action, the avatar action generation unit 221 refers to the current time axis information and the avatar action history DB 182, and acquires information on the percentage of autonomous actions of each avatar at the corresponding time. Then, the avatar action generation unit 221 determines (probabilistically selects based on the ratio information) the action with a higher percentage as the general-purpose action. As a result, the avatars can be made to behave in the same manner as most avatars do, and the user's privacy can be protected in a more natural and unobtrusive manner.
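A minimal sketch of this learning-based selection is shown below, assuming the avatar action history DB 182 can be reduced to (hour, action) records; the record format and the fallback action are assumptions for illustration.

```python
from collections import Counter
import random

def select_generic_action_from_history(history_db, current_hour: int) -> str:
    """Learning-based generic action: choose an action with probability proportional to
    how often all avatars performed it at the same time of day.
    history_db is assumed to be a list of (hour, action) records."""
    counts = Counter(action for hour, action in history_db if hour == current_hour)
    if not counts:
        return "stay at home"  # fallback when no history exists for this hour
    actions, weights = zip(*counts.items())
    return random.choices(actions, weights=weights, k=1)[0]

history = [(22, "sleep"), (22, "sleep"), (22, "eat"), (12, "eat")]
print(select_generic_action_from_history(history, current_hour=22))  # most likely "sleep"
```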
 また、汎用行動の他の選択方法として、図5を参照して説明したユーザ行動のデータベースとのマッチングを用いることも可能である。上述した例では、実空間でのユーザのセンシングデータ(状態、位置)とデータベースにおける各アバター行動の候補との適合率を算出して確率的に自律行動を決定していたが、自律行動を決定する際の適合率の閾値を調整したり、適合率を算出する際にノイズを付加することで、プライバシーを保護した上で、適切な自律行動を選択することが可能となる。なお、閾値の調整やノイズの強度調整により、プライバシー保護の強弱を調整することもできる。 As another method of selecting general-purpose actions, it is also possible to use matching with the database of user actions described with reference to FIG. In the above example, autonomous behavior was determined stochastically by calculating the matching rate between the user's sensing data (state, position) in the real space and each avatar behavior candidate in the database. By adjusting the threshold of the relevance rate when calculating the relevance rate or adding noise when calculating the relevance rate, it is possible to select an appropriate autonomous action while protecting privacy. It is also possible to adjust the strength of privacy protection by adjusting the threshold value and adjusting the intensity of noise.
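As one possible sketch of this noise-and-threshold adjustment (the noise model and parameter values are assumptions), the matching rate could be perturbed as follows before a candidate is accepted.

```python
import random

def privacy_aware_match(rate: float, noise_scale: float = 0.2, threshold: float = 0.5) -> bool:
    """Accept a candidate only if its noised matching rate exceeds a threshold.
    Larger noise_scale / threshold values strengthen privacy protection."""
    noisy_rate = rate + random.uniform(-noise_scale, noise_scale)
    return noisy_rate >= threshold
```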
 アバターに実空間のユーザとほぼ同等の自律行動を行わせるのか、完全にプライバシーを考慮して汎用的な行動を行わせるのか、また、その両方の特性を生かした行動を行わせるのか、各ユーザが任意に選択してもよい。 It is up to each user to decide whether to make the avatar perform autonomous actions similar to those of users in real space, whether to make the avatars perform general-purpose actions with complete consideration of privacy, or whether to make the avatars perform actions that take advantage of the characteristics of both. may be arbitrarily selected.
 以上説明したように、本変形例では、世界中の不特定多数のユーザとコミュニケーションが行える可能空間において、プライバシーの保護が可能となる。また、ユーザ毎に、許可するプライバシーレベルを設定することが可能となる。 As explained above, in this modified example, it is possible to protect privacy in a space where communication with an unspecified number of users around the world is possible. Also, it is possible to set a permitted privacy level for each user.
 <4-2.自律行動を利用するユーザへの仮想空間内での報酬について>
 仮想空間サーバ20は、アバターの自律行動に報酬を設定してもよい。例えば、買い物行動の場合は、仮想空間内で使うことが可能なアイテムの獲得、仕事や敵を倒す行動の場合は、仮想空間内での経験値や通貨等の獲得、自宅での行動の場合は、仮想空間内で利用する体力の回復、といった報酬が挙げられる。また、自律行動により仮想空間内をアバターが移動した場合、その移動情報やアバター視点の映像を記録し、ユーザによる操作が再開された際に、その映像等を確認することができるといった報酬であってもよい。
<4-2. About Rewards in Virtual Space for Users Using Autonomous Behavior>
The virtual space server 20 may set a reward for the avatar's autonomous action. For example, in the case of a shopping action, the reward may be acquisition of an item that can be used in the virtual space; in the case of work or an action of defeating an enemy, it may be acquisition of experience points, currency, or the like in the virtual space; and in the case of an action at home, it may be recovery of physical strength used in the virtual space. Alternatively, when the avatar moves within the virtual space by an autonomous action, the movement information and video from the avatar's viewpoint may be recorded as a reward so that the user can check the video and the like when the user resumes operation.
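A minimal, non-limiting sketch of such a reward table might look as follows; the action names, reward types, and amounts are assumptions for illustration.

```python
# Hypothetical mapping from autonomous actions to in-world rewards.
ACTION_REWARDS = {
    "shopping":         {"items": ["free sample item"]},
    "work":             {"experience": 10, "currency": 5},
    "defeat the enemy": {"experience": 20},
    "stay at home":     {"stamina": 30},
}

def grant_reward(user_account: dict, action: str) -> None:
    """Apply the reward associated with an autonomous action to the user's account."""
    for key, value in ACTION_REWARDS.get(action, {}).items():
        if isinstance(value, list):
            user_account.setdefault(key, []).extend(value)
        else:
            user_account[key] = user_account.get(key, 0) + value
```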
 このような報酬により、自律行動制御を利用するユーザ数の増加を促進することができる。 Such rewards can promote an increase in the number of users who use autonomous behavior control.
 <4-3.タイムゾーンの情報を考慮した汎用行動の生成について>
 仮想空間は実空間と異なり、ユーザ間の距離の影響が少なく、世界中のユーザと容易にコミュニケーションを取ることが可能である。しかし、実空間のユーザの行動をアバターの自律行動に反映させた場合、異なるタイムゾーンのユーザによる統一性のないアバター行動が共存する場合がある。そこで、仮想空間サーバ20のアバター行動生成部221は、視聴ユーザのタイムゾーンに合わせて、アバターの自律行動の表現を制御し、タイムゾーンが違うことによる不自然さを低減することができる。
<4-3. Generation of general-purpose actions considering time zone information>
Unlike real space, virtual space is less affected by the distance between users, and it is possible to easily communicate with users all over the world. However, when the behavior of the user in the real space is reflected in the autonomous behavior of the avatar, the avatar behavior of users in different time zones may coexist without consistency. Therefore, the avatar action generation unit 221 of the virtual space server 20 can control the expression of the avatar's autonomous action according to the time zone of the viewing user, and reduce the unnaturalness caused by the different time zones.
 より具体的には、まず、仮想空間サーバ20は、タイムゾーン毎のアバター行動履歴DB(時間軸情報を含む)を用意する。次いで、例えば夜の時間帯となるタイムゾーンに居るユーザA(視聴ユーザ)に対しては、ユーザAに見える(非操作の)他ユーザアバターの汎用行動を生成する際、ユーザAのタイムゾーンに合った行動(例えば「就寝」)をアバター行動履歴DBから抽出する。 More specifically, first, the virtual space server 20 prepares an avatar action history DB (including time axis information) for each time zone. Next, for user A (viewing user) who is in a time zone that is, for example, the night time zone, when generating general-purpose actions of other user avatars (non-operating) visible to user A, A matching action (for example, "sleep") is extracted from the avatar action history DB.
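Building on the history-based sketch above, one possible way to select a generic action matched to the viewing user's time zone is shown below; the time-zone keys and records are assumptions for illustration.

```python
# Hypothetical per-time-zone action history databases; reuses
# select_generic_action_from_history from the earlier sketch.
HISTORY_BY_TIMEZONE = {
    "Asia/Tokyo":    [(23, "sleep"), (23, "sleep"), (23, "eat")],
    "Europe/London": [(15, "work"), (15, "shopping")],
}

def generic_action_for_viewer(viewer_timezone: str, viewer_local_hour: int) -> str:
    """Choose a generic action that looks natural in the viewing user's time zone."""
    history = HISTORY_BY_TIMEZONE.get(viewer_timezone, [])
    return select_generic_action_from_history(history, current_hour=viewer_local_hour)
```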
 <4-4.実空間の情報をアバター容姿に反映させる方法について>
 仮想空間サーバ20は、実空間におけるユーザの情報を、アバターの容姿に反映させてもよい。例えば、図5を参照して説明したようなアバター行動の各候補に、それぞれアバターの容姿を対応付けてもよい。例えば「就寝」であればパジャマ、「仕事」であればスーツ等である。仮想空間サーバ20のアバター制御部222は、アバターの自律行動を制御する際、その容姿も適宜変更することができる。各行動と容姿が対応しているため、上述したプライバシー保護の場合における行動生成の際にも(汎用行動の生成)、同様に容姿変更が行われ得る。
<4-4. About how to reflect the information of the real space to the appearance of the avatar >
The virtual space server 20 may reflect the user's information in the real space on the appearance of the avatar. For example, each candidate for the avatar action as described with reference to FIG. 5 may be associated with the appearance of the avatar. For example, pajamas for "sleep" and suits for "work". The avatar control unit 222 of the virtual space server 20 can appropriately change the appearance of the avatar when controlling the autonomous action of the avatar. Since each action corresponds to the appearance, the appearance can be changed in the same way when the action is generated in the case of privacy protection as described above (generation of general-purpose action).
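As a minimal sketch of associating an appearance with each action candidate (the outfit names and the avatar interface are assumptions), the appearance change could accompany the action as follows.

```python
# Hypothetical mapping from each action candidate to an avatar appearance (outfit).
ACTION_APPEARANCE = {"sleep": "pajamas", "work": "suit", "shopping": "casual wear"}

def apply_action_with_appearance(avatar, action: str) -> None:
    """When reflecting an autonomous action, switch the avatar's appearance accordingly."""
    avatar.set_outfit(ACTION_APPEARANCE.get(action, "default outfit"))
    avatar.perform(action)
```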
 <<5.補足>>
 以上、添付図面を参照しながら本開示の好適な実施形態について詳細に説明したが、本技術はかかる例に限定されない。本開示の技術分野における通常の知識を有する者であれば、請求の範囲に記載された技術的思想の範疇内において、各種の変更例または修正例に想到し得ることは明らかであり、これらについても、当然に本開示の技術的範囲に属するものと了解される。
<<5. Supplement >>
Although the preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, the present technology is not limited to such examples. It is obvious that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various modifications or modifications within the scope of the technical idea described in the claims. are naturally within the technical scope of the present disclosure.
 なお、本実施形態では一例として仮想空間サーバ20からユーザ視点の映像をユーザ端末10に配信(より具体的には、例えばストリーミング配信)する場合を想定して説明するが、本実施形態による仮想空間を介したコミュニケーションを実現するシステムはこれに限定されない。例えば、各ユーザ端末10が仮想空間を生成し、仮想空間におけるユーザ視点の映像を生成、表示してもよい。仮想空間を生成するための情報は、仮想空間サーバ20から予め取得される。この場合、各ユーザ端末10は、ユーザによる操作入力の情報やセンシングデータ等を、リアルタイムで仮想空間サーバ20に送信する。次いで、仮想空間サーバ20は、ユーザ端末10から受信した、ユーザアバターの動き等に関する情報を、他のユーザ端末10に送信する制御を行う。仮想空間サーバ20は、必要に応じて、アバターの自律制御の情報も送信する。 Note that in the present embodiment, as an example, a case will be described where video from the user's viewpoint is distributed from the virtual space server 20 to the user terminal 10 (more specifically, for example, streaming distribution). The system that realizes communication via is not limited to this. For example, each user terminal 10 may generate a virtual space and generate and display an image of the user's viewpoint in the virtual space. Information for generating the virtual space is obtained in advance from the virtual space server 20 . In this case, each user terminal 10 transmits the information of the operation input by the user, sensing data, etc. to the virtual space server 20 in real time. Next, the virtual space server 20 controls the transmission of the information regarding the movement of the user avatar received from the user terminal 10 to other user terminals 10 . The virtual space server 20 also transmits avatar autonomous control information as needed.
 また、上述した仮想空間サーバ20に内蔵されるCPU、ROM、およびRAM等のハードウェアに、仮想空間サーバ20の機能を発揮させるための1以上のコンピュータプログラムも作成可能である。また、当該1以上のコンピュータプログラムを記憶させたコンピュータ読み取り可能な記憶媒体も提供される。 It is also possible to create one or more computer programs for causing hardware such as the CPU, ROM, and RAM built into the virtual space server 20 described above to exhibit the functions of the virtual space server 20. Also provided is a computer-readable storage medium storing the one or more computer programs.
 また、本明細書に記載された効果は、あくまで説明的または例示的なものであって限定的ではない。つまり、本開示に係る技術は、上記の効果とともに、または上記の効果に代えて、本明細書の記載から当業者には明らかな他の効果を奏しうる。 Also, the effects described in this specification are merely descriptive or exemplary, and are not limiting. In other words, the technology according to the present disclosure can produce other effects that are obvious to those skilled in the art from the description of this specification, in addition to or instead of the above effects.
 なお、本技術は以下のような構成も取ることができる。
(1)
 ユーザの操作に応じて仮想空間における前記ユーザに対応付けられる仮想オブジェクトの行動を制御する制御部を備え、
 前記制御部は、前記ユーザが非操作の期間中に、前記ユーザの実空間でのセンシングデータに基づいて前記仮想オブジェクトの行動を生成し、前記仮想オブジェクトに反映させる制御を行う、
情報処理装置。
(2)
 前記センシングデータは、ユーザの状態および位置の少なくともいずれかに関する情報を含む、前記(1)に記載の情報処理装置。
(3)
 前記制御部は、前記仮想オブジェクトの行動の候補と、1以上の状態または位置の少なくともいずれかを対応付けたデータベースを参照して、前記仮想オブジェクトの行動を生成する、前記(1)または(2)に記載の情報処理装置。
(4)
 前記制御部は、前記参照において、前記データベースで定義される各行動の候補と、前記センシングデータとの適合率を算出し、当該適合率に基づいて前記各行動の候補から一の行動を選択することで、前記仮想オブジェクトの行動を生成する、前記(3)に記載の情報処理装置。
(5)
 前記制御部は、前記各行動の候補に設定されたプライバシーレベルに応じて、前記仮想オブジェクトの行動を生成する、前記(3)または(4)に記載の情報処理装置。
(6)
 前記制御部は、前記各行動の候補から選択された一の行動のプライバシーレベルが、前記仮想オブジェクトであるアバターを視聴する他のユーザに許可されていないレベルの場合、前記アバターの行動として、汎用行動を生成する制御を行う、前記(5)に記載の情報処理装置。
(7)
 前記制御部は、前記汎用行動として、前記各行動の候補からランダムに選択する、前記(6)に記載の情報処理装置。
(8)
 前記制御部は、仮想空間における各アバターの行動履歴に基づいて、前記汎用行動を生成する、前記(6)に記載の情報処理装置。
(9)
 前記制御部は、前記仮想オブジェクトへの反映の際に、前記生成する行動に対応付けられる容姿に、前記仮想オブジェクトの容姿を変更する制御を行う、前記(1)~(8)のいずれか1項に記載の情報処理装置。
(10)
 前記制御部は、前記仮想空間におけるユーザ視点の画像を生成し、ユーザ端末に送信する制御を行う、前記(1)~(9)のいずれか1項に記載の情報処理装置。
(11)
 前記情報処理装置は、さらに通信部を備え、
 前記通信部は、前記センシングデータをユーザ端末から受信する、前記(1)~(10)のいずれか1項に記載の情報処理装置。
(12)
 プロセッサが、
 ユーザの操作に応じて仮想空間における前記ユーザに対応付けられる仮想オブジェクトの行動を制御することを含み、
 さらに、前記ユーザが非操作の期間中に、前記ユーザの実空間でのセンシングデータに基づいて前記仮想オブジェクトの行動を生成し、前記仮想オブジェクトに反映させる制御を行うことを含む、
情報処理方法。
(13)
 コンピュータを、
 ユーザの操作に応じて仮想空間における前記ユーザに対応付けられる仮想オブジェクトの行動を制御する制御部として機能させ、
 前記制御部は、前記ユーザが非操作の期間中に、前記ユーザの実空間でのセンシングデータに基づいて前記仮想オブジェクトの行動を生成し、前記仮想オブジェクトに反映させる制御を行う、
プログラム。
Note that the present technology can also take the following configuration.
(1)
A control unit that controls the behavior of the virtual object associated with the user in the virtual space according to the user's operation,
The control unit generates an action of the virtual object based on sensing data in a real space of the user during a period in which the user does not operate, and performs control to reflect the action on the virtual object.
Information processing equipment.
(2)
The information processing apparatus according to (1), wherein the sensing data includes information on at least one of a user's state and position.
(3)
The control unit generates the behavior of the virtual object by referring to a database that associates candidates for the behavior of the virtual object with at least one of one or more states or positions, and (1) or (2) above. ).
(4)
In the reference, the control unit calculates a matching rate between each action candidate defined in the database and the sensing data, and selects one action from the action candidates based on the matching rate. The information processing apparatus according to (3), wherein the action of the virtual object is generated by doing so.
(5)
The information processing apparatus according to (3) or (4), wherein the control unit generates an action of the virtual object according to a privacy level set for each action candidate.
(6)
When the privacy level of one action selected from the candidates for each action is a level that other users viewing the avatar, which is the virtual object, are not permitted to view, the control unit selects a general-purpose action as the action of the avatar. The information processing device according to (5), which performs control to generate an action.
(7)
The information processing device according to (6), wherein the control unit randomly selects the general-purpose action from the candidates for each action.
(8)
The information processing device according to (6), wherein the control unit generates the general-purpose action based on the action history of each avatar in the virtual space.
(9)
Any one of (1) to (8) above, wherein the control unit performs control to change the appearance of the virtual object to the appearance associated with the action to be generated when reflecting on the virtual object. The information processing device according to the item.
(10)
The information processing apparatus according to any one of (1) to (9), wherein the control unit generates an image of the user's viewpoint in the virtual space and controls transmission to the user terminal.
(11)
The information processing device further comprises a communication unit,
The information processing device according to any one of (1) to (10), wherein the communication unit receives the sensing data from a user terminal.
(12)
the processor
Controlling the behavior of a virtual object associated with the user in the virtual space according to the user's operation;
Furthermore, during a period in which the user is not operating, the action of the virtual object is generated based on sensing data in the user's real space, and controlled to be reflected in the virtual object.
Information processing methods.
(13)
the computer,
Functioning as a control unit that controls the behavior of the virtual object associated with the user in the virtual space according to the user's operation,
The control unit generates an action of the virtual object based on sensing data in a real space of the user during a period in which the user does not operate, and performs control to reflect the action on the virtual object.
program.
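Configurations (3) through (8) can be read as a single selection pipeline: match the user's real-space sensing data against a table of candidate behaviors, pick the best-scoring candidate, and substitute a general-purpose behavior when the chosen candidate's privacy level is not permitted to the users watching the avatar. The following Python sketch is illustrative only; the class names, fields, and scoring rule are assumptions made for this example and are not part of the disclosure.

    import random
    from dataclasses import dataclass

    @dataclass
    class BehaviorCandidate:
        name: str              # e.g. "eating", "jogging", "working"
        states: set            # user states that support this behavior
        places: set            # coarse position categories that support it
        privacy_level: int     # higher = more sensitive

    @dataclass
    class SensingData:
        states: set            # recognized user states, e.g. {"sitting", "chewing"}
        place: str             # coarse position category, e.g. "restaurant"

    def matching_rate(candidate, sensing):
        """Fraction of the candidate's conditions satisfied by the sensing data (illustrative scoring)."""
        conditions, hits = 0, 0
        for state in candidate.states:
            conditions += 1
            if state in sensing.states:
                hits += 1
        if candidate.places:
            conditions += 1
            if sensing.place in candidate.places:
                hits += 1
        return hits / conditions if conditions else 0.0

    def generate_avatar_behavior(candidates, sensing, viewer_allowed_level, behavior_history):
        """Select one behavior as in (4); fall back to a general-purpose behavior as in (6)-(8)."""
        best = max(candidates, key=lambda c: matching_rate(c, sensing))
        if best.privacy_level <= viewer_allowed_level:
            return best.name
        # Privacy level not permitted to the viewing users: generate a general-purpose
        # behavior instead, drawn from behavior history (8) or chosen at random (7).
        permitted = [c.name for c in candidates if c.privacy_level <= viewer_allowed_level]
        history_hits = [b for b in behavior_history if b in permitted]
        if history_hits:
            return max(set(history_hits), key=history_hits.count)
        return random.choice(permitted) if permitted else "idle"

For example, a candidate BehaviorCandidate("eating", {"sitting", "chewing"}, {"restaurant"}, privacy_level=2) would be reflected on the avatar only for viewers allowed level 2 or higher; for anyone else the avatar would show a permitted general-purpose behavior instead.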
10 User terminal
  110 Communication unit
  120 Control unit
    121 State recognition unit
  130 Operation input unit
  140 Motion sensor
  150 Positioning unit
  160 Display unit
  170 Audio output unit
  180 Storage unit
20 Virtual space server
  210 Communication unit
  220 Control unit
    221 Avatar behavior generation unit
    222 Avatar control unit
  230 Storage unit
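The reference numerals above suggest a two-node split: the user terminal 10 senses and recognizes the user's state and position and uploads the result, while the virtual space server 20 generates and applies the avatar's behavior. A minimal sketch of that division of responsibilities follows; the interfaces and method names are assumptions for illustration and are not taken from the specification.

    from dataclasses import dataclass

    @dataclass
    class SensingPayload:
        """What the user terminal 10 might send while the user is not operating the avatar."""
        user_id: str
        states: list        # output of the state recognition unit 121 (motion sensor 140, etc.)
        position: str       # coarse category from the positioning unit 150

    class VirtualSpaceServer:
        """Roughly units 210-230: receives sensing data, generates and applies avatar behavior."""
        def __init__(self):
            self.avatars = {}                              # user_id -> current behavior

        def receive_sensing(self, payload):
            # Communication unit 210 receives the sensing data (configuration (11)).
            behavior = self.generate_behavior(payload)     # avatar behavior generation unit 221
            self.avatars[payload.user_id] = behavior       # avatar control unit 222 reflects it

        def generate_behavior(self, payload):
            # Placeholder for the candidate-matching pipeline sketched after the
            # numbered configurations above.
            return payload.states[0] if payload.states else "idle"

    class UserTerminal:
        """Roughly units 110-180: senses the user and uploads sensing data."""
        def __init__(self, user_id, server):
            self.user_id = user_id
            self.server = server

        def on_sensing_tick(self, states, position):
            # Communication unit 110 sends the sensing data to the server.
            self.server.receive_sensing(SensingPayload(self.user_id, states, position))

    # Example: one sensing update from a terminal to the server.
    server = VirtualSpaceServer()
    UserTerminal("user-1", server).on_sensing_tick(["eating"], "restaurant")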

Claims (13)

  1. An information processing device comprising a control unit that controls a behavior of a virtual object associated with a user in a virtual space in accordance with an operation by the user,
     wherein, during a period in which the user is not operating, the control unit generates a behavior of the virtual object based on sensing data of the user in real space and performs control to reflect the generated behavior on the virtual object.
  2. The information processing device according to claim 1, wherein the sensing data includes information on at least one of a state and a position of the user.
  3. The information processing device according to claim 1, wherein the control unit generates the behavior of the virtual object by referring to a database that associates candidate behaviors of the virtual object with at least one of one or more states and positions.
  4. The information processing device according to claim 3, wherein, in the referring, the control unit calculates a matching rate between each candidate behavior defined in the database and the sensing data, and generates the behavior of the virtual object by selecting one behavior from the candidate behaviors based on the matching rate.
  5. The information processing device according to claim 3, wherein the control unit generates the behavior of the virtual object in accordance with a privacy level set for each candidate behavior.
  6. The information processing device according to claim 5, wherein, when the privacy level of the behavior selected from the candidate behaviors is a level not permitted to other users viewing an avatar that is the virtual object, the control unit performs control to generate a general-purpose behavior as the behavior of the avatar.
  7. The information processing device according to claim 6, wherein the control unit selects the general-purpose behavior at random from the candidate behaviors.
  8. The information processing device according to claim 6, wherein the control unit generates the general-purpose behavior based on a behavior history of each avatar in the virtual space.
  9. The information processing device according to claim 1, wherein, when reflecting the generated behavior on the virtual object, the control unit performs control to change an appearance of the virtual object to an appearance associated with the generated behavior.
  10. The information processing device according to claim 1, wherein the control unit generates an image of the user's viewpoint in the virtual space and performs control to transmit the image to a user terminal.
  11. The information processing device according to claim 1, further comprising a communication unit, wherein the communication unit receives the sensing data from a user terminal.
  12. An information processing method comprising: controlling, by a processor, a behavior of a virtual object associated with a user in a virtual space in accordance with an operation by the user; and, during a period in which the user is not operating, generating a behavior of the virtual object based on sensing data of the user in real space and performing control to reflect the generated behavior on the virtual object.
  13. A program causing a computer to function as a control unit that controls a behavior of a virtual object associated with a user in a virtual space in accordance with an operation by the user, wherein, during a period in which the user is not operating, the control unit generates a behavior of the virtual object based on sensing data of the user in real space and performs control to reflect the generated behavior on the virtual object.
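Claims 1, 9, and 12 together amount to a simple decision at each update: while the user is operating, the avatar follows the operation; during a non-operation period, a behavior generated from real-space sensing data is reflected, along with an appearance associated with that behavior. The sketch below illustrates only that decision; the appearance mapping and function signature are invented for the example and are not part of the claims.

    from typing import Optional

    # Illustrative mapping from a generated behavior to an avatar appearance (claim 9).
    APPEARANCE_FOR_BEHAVIOR = {
        "eating": "casual outfit with a cutlery prop",
        "jogging": "sportswear",
    }

    def update_avatar(avatar: dict, user_operation: Optional[str], sensed_behavior: str) -> dict:
        """Operation-driven control when the user is operating; sensing-driven otherwise."""
        if user_operation is not None:
            avatar["behavior"] = user_operation          # follow the user's operation directly
            return avatar
        # Non-operation period: reflect the behavior generated from sensing data and
        # change the avatar's appearance to the one associated with that behavior.
        avatar["behavior"] = sensed_behavior
        avatar["appearance"] = APPEARANCE_FOR_BEHAVIOR.get(sensed_behavior, avatar["appearance"])
        return avatar

    # Example: no controller input this tick, and sensing suggests the user is eating.
    avatar = update_avatar({"behavior": "idle", "appearance": "default"}, None, "eating")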
PCT/JP2022/006581 2021-09-03 2022-02-18 Information processing device, information processing method, and program WO2023032264A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202280057465.3A CN117859154A (en) 2021-09-03 2022-02-18 Information processing device, information processing method, and program
JP2023545023A JPWO2023032264A1 (en) 2021-09-03 2022-02-18
EP22863854.0A EP4386687A1 (en) 2021-09-03 2022-02-18 Information processing device, information processing method, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-143758 2021-09-03
JP2021143758 2021-09-03

Publications (1)

Publication Number Publication Date
WO2023032264A1

Family

ID=85411759

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/006581 WO2023032264A1 (en) 2021-09-03 2022-02-18 Information processing device, information processing method, and program

Country Status (4)

Country Link
EP (1) EP4386687A1 (en)
JP (1) JPWO2023032264A1 (en)
CN (1) CN117859154A (en)
WO (1) WO2023032264A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005216218A (en) * 2004-02-02 2005-08-11 Core Colors:Kk Virtual community system
JP2009140492A (en) 2007-12-06 2009-06-25 Internatl Business Mach Corp <Ibm> Method, system, and computer program for rendering real-world object and interaction into virtual world
JP2012511187A (en) * 2008-12-08 2012-05-17 ソニー オンライン エンタテインメント エルエルシー Online simulation and network applications
JP2014036874A (en) * 2007-10-22 2014-02-27 Avaya Inc Presentation of communication session in a virtual environment
JP2015505249A (en) * 2011-05-27 2015-02-19 マイクロソフト コーポレーション Avatar of a friend who plays a non-player character
JP2019061434A (en) * 2017-09-26 2019-04-18 株式会社コロプラ Program, information processing apparatus, information processing system, and information processing method

Also Published As

Publication number Publication date
EP4386687A1 (en) 2024-06-19
JPWO2023032264A1 (en) 2023-03-09
CN117859154A (en) 2024-04-09

Similar Documents

Publication Publication Date Title
JP6646620B2 (en) Wide-ranging simultaneous remote digital presentation world
JP7002684B2 (en) Systems and methods for augmented reality and virtual reality
US11080310B2 (en) Information processing device, system, information processing method, and program
JP6345282B2 (en) Systems and methods for augmented and virtual reality
CN109643161A (en) Dynamic enters and leaves the reality environment browsed by different HMD users
WO2014119098A1 (en) Information processing device, terminal device, information processing method, and programme
WO2014119097A1 (en) Information processing device, terminal device, information processing method, and programme
WO2023032264A1 (en) Information processing device, information processing method, and program
JP2023095862A (en) Program and information processing method
JP7375143B1 (en) Programs and information processing systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 22863854
    Country of ref document: EP
    Kind code of ref document: A1
WWE Wipo information: entry into national phase
    Ref document number: 2023545023
    Country of ref document: JP
WWE Wipo information: entry into national phase
    Ref document number: 202280057465.3
    Country of ref document: CN
WWE Wipo information: entry into national phase
    Ref document number: 2022863854
    Country of ref document: EP
ENP Entry into the national phase
    Ref document number: 2022863854
    Country of ref document: EP
    Effective date: 20240313
NENP Non-entry into the national phase
    Ref country code: DE