CN112115398A - Virtual space providing system, virtual space providing method, and program - Google Patents

Virtual space providing system, virtual space providing method, and program

Info

Publication number
CN112115398A
CN112115398A (application CN201910904463.3A)
Authority
CN
China
Prior art keywords
virtual space
user
hand
fan
artist
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910904463.3A
Other languages
Chinese (zh)
Inventor
钱锟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Boyue Technology Co ltd
PULSE CO Ltd
Original Assignee
Beijing Boyue Technology Co ltd
PULSE CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Boyue Technology Co ltd, PULSE CO Ltd filed Critical Beijing Boyue Technology Co ltd
Publication of CN112115398A publication Critical patent/CN112115398A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/958Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

Provided is a virtual space providing technique that can strengthen the relationship between users communicating via a virtual space and can improve the quality of the users' cognitive experience. A virtual space providing system (100) for providing a virtual space includes: an object association unit (113) that associates at least one object with each of a plurality of users; an object determination unit (114) that determines a 2nd object, associated with a 2nd user of the plurality of users, that can communicate with a 1st object associated with a 1st user of the plurality of users; a virtual space definition unit (115) that defines a virtual space including the 1st object and the virtual viewpoint of the determined 2nd object; an acquisition unit (116) that acquires expression information relating to the expressions of each of the 1st and 2nd users; and an object control unit (117) that controls the 1st object and the 2nd object according to the expressions indicated by the acquired expression information.

Description

Virtual space providing system, virtual space providing method, and program
Technical Field
The present invention relates to a virtual space providing system, a virtual space providing method, and a program for providing a virtual space.
Background
Non-patent document 1 discloses a technique that enables communication, in a virtual space, between an artist object and fan objects associated with a plurality of unspecified fans (enthusiasts of the artist).
Documents of the prior art
Non-patent document
[Non-patent document 1] Internet URL: https://twitter.com/wahekui_1/status/1119961062613237766
Disclosure of the Invention
Technical problem to be solved by the invention
In the conventional virtual space providing technique described in non-patent document 1, an artist object communicates with a plurality of unspecified fan objects in the virtual space, so even while the artist object is communicating with a particular fan object, the communication may be disturbed by other fan objects. In that case, sufficient communication cannot take place between the artist object and the specific fan object, and the relationship between the two may not be sufficiently established. Furthermore, a fan wants to be remembered by the artist he or she supports through communication in the virtual space; if the communication with the artist object is disturbed by other fan objects, this desire cannot be satisfied.
Accordingly, an object of the present disclosure is to provide a virtual space providing technique that can strengthen the relationship between users communicating via a virtual space and can improve the quality of the users' cognitive experience.
Technical scheme for solving technical problem
A virtual space providing system according to an aspect of the present invention is a virtual space providing system for providing a virtual space, including: an object association unit that associates at least one object with each of a plurality of users; an object determination unit that determines a 2nd object, associated with a 2nd user of the plurality of users, that can communicate with a 1st object associated with a 1st user of the plurality of users; a virtual space definition unit that defines a virtual space including the 1st object and the virtual viewpoint of the determined 2nd object; an acquisition unit that acquires expression information relating to the expressions of each of the 1st user and the 2nd user; and an object control unit that controls the 1st object and the 2nd object according to the expressions indicated by the acquired expression information.
A virtual space providing method according to an aspect of the present invention is a virtual space providing method executed by a computer that provides a virtual space, including: a step of associating at least one object with each of a plurality of users; a step of determining a 2nd object, associated with a 2nd user of the plurality of users, that can communicate with a 1st object associated with a 1st user of the plurality of users; a step of defining a virtual space including the 1st object and the virtual viewpoint of the determined 2nd object; a step of acquiring expression information relating to the expressions of each of the 1st user and the 2nd user; and a step of controlling the 1st object and the 2nd object according to the expressions indicated by the acquired expression information.
A program according to an embodiment of the present invention causes a computer that provides a virtual space to function as: an object association unit that associates at least one object with each of a plurality of users; an object determination unit that determines a 2nd object, associated with a 2nd user of the plurality of users, that can communicate with a 1st object associated with a 1st user of the plurality of users; a virtual space definition unit that defines a virtual space including the 1st object and the virtual viewpoint of the determined 2nd object; an acquisition unit that acquires expression information relating to the expressions of each of the 1st user and the 2nd user; and an object control unit that controls the 1st object and the 2nd object according to the expressions indicated by the acquired expression information.
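The five units named in the claims above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; all class, method, and field names (`VirtualSpaceProvider`, `ExpressionInfo`, and so on) are assumptions introduced only for this example.

```python
from dataclasses import dataclass, field

@dataclass
class ExpressionInfo:
    """Expression information: an action and/or voice of a user (assumed structure)."""
    action: str = ""
    voice: str = ""

@dataclass
class VirtualObject:
    user_id: str
    expression: ExpressionInfo = field(default_factory=ExpressionInfo)

class VirtualSpaceProvider:
    def __init__(self):
        self.objects = {}   # object association unit: user -> object
        self.pairs = {}     # object determination unit: 1st user -> 2nd user

    def associate(self, user_id):
        """Associate at least one object with a user."""
        self.objects[user_id] = VirtualObject(user_id)
        return self.objects[user_id]

    def determine(self, user1_id, user2_id):
        """Determine the 2nd object that may communicate with the 1st object."""
        self.pairs[user1_id] = user2_id

    def define_space(self, user1_id):
        """Define a virtual space holding the 1st object and the
        virtual viewpoint of the determined 2nd object."""
        user2_id = self.pairs[user1_id]
        return (self.objects[user1_id], self.objects[user2_id])

    def control(self, user_id, expression):
        """Control an object according to acquired expression information."""
        self.objects[user_id].expression = expression

provider = VirtualSpaceProvider()
provider.associate("artist")
provider.associate("fan_a")
provider.determine("artist", "fan_a")
artist_obj, fan_obj = provider.define_space("artist")
provider.control("fan_a", ExpressionInfo(action="handshake"))
```

The point of the sketch is the pipeline order: association of objects with users, determination of the communicable pair, definition of the shared space, then per-expression control of the two objects.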
In the present invention, "section" and "device" do not refer only to physical means; they also include cases where the functions of the section or device are realized by software. The functions of one "section" or "device" may be realized by two or more physical units or devices, and the functions of two or more "sections" or "devices" may be realized by one physical unit or device.
Effects of the invention
According to the aspects of the invention, the relationship between users communicating via the virtual space becomes firmer, and the quality of the users' cognitive experience can be improved.
Drawings
Fig. 1 is a schematic configuration diagram of a virtual space providing system according to an embodiment of the present invention.
Fig. 2 is a diagram showing an example of an image displayed on the operator terminal according to the embodiment of the present invention.
Fig. 3(a) is a diagram showing an example of an image displayed by the artist terminal according to the embodiment of the present invention, in a case where an artist object and one fan object communicate with each other in a virtual space. Fig. 3(b) is a diagram showing an example of an image displayed by the artist terminal according to the embodiment of the present invention, in a case where an artist object and a plurality of fan objects can communicate with each other in a virtual space.
Fig. 4 is a schematic configuration diagram of a VR (Virtual Reality) computer according to an embodiment of the present invention.
Fig. 5 is a schematic configuration diagram showing an example of a functional configuration of a VR computer according to the embodiment of the present invention.
Fig. 6 is a diagram conceptually showing an example of the virtual space according to the embodiment of the present invention.
Fig. 7 is a flowchart showing an example of the virtual space providing process according to the embodiment of the present invention.
Fig. 8 is a diagram for explaining an example of a handshake session in a virtual space according to the embodiment of the present invention. Fig. 8(a) is a diagram showing an example of the handshake action of an artist and a fan in real space. Fig. 8(b) is a diagram showing an example of an image displayed by an HMD (Head Mounted Display) according to the embodiment of the present invention.
Fig. 9 is a diagram for explaining an example of a clapping session in a virtual space according to the embodiment of the present invention. Fig. 9(a) is a diagram showing an example of the clapping action of an artist and a fan in real space. Fig. 9(b) is a diagram showing an example of an image displayed by the HMD according to the embodiment of the present invention.
Fig. 10 is a diagram for explaining another example of a handshake session in the virtual space according to the embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described below with reference to the drawings. However, the embodiments described below are merely examples, and are not intended to exclude the application of various modifications and techniques not shown below. That is, the present invention can be implemented in various modifications (such as a combination of the embodiments) within a scope not departing from the gist of the present invention. In the following description of the drawings, the same or similar components are denoted by the same or similar reference numerals. The drawings are schematic and do not necessarily correspond to actual dimensions, proportions, etc. The drawings also include portions having different dimensional relationships and ratios from each other.
Fig. 1 is a block diagram showing the overall configuration of a virtual space providing system 100 according to an embodiment of the present invention. First, an outline of the virtual space providing system 100 in the present embodiment will be described. The HMD (Head Mounted Display) 5a shown in fig. 1 displays a virtual space image VG based on a virtual space that includes an artist object TLO associated with an artist TL, who is in a studio, and the virtual viewpoint (corresponding, for example, to a camera included in the HMD 5a) of a fan object FO1 associated with a fan Fa, an enthusiast of the artist TL who is present on site. In the example of fig. 1, the display unit 30 provided in the HMD 5a displays a virtual space image VG showing, as seen from the virtual viewpoint of the fan object FO1, the state in which the artist object TLO and the fan object FO1 are shaking hands. The virtual space image VG is an image generated by a VR (Virtual Reality) computer 1.
In the example of fig. 1, there are two fans, Fa and Fb, but the number is not limited to two and may be one, or three or more. As a more specific example of the processing, the VR computer 1 determines the fan object FO1 (2nd object) corresponding to fan Fa, among the fan objects associated with fans Fa and Fb, as the target that is enabled to communicate with the artist object TLO (1st object) in the virtual space through conversation or actions. Furthermore, the VR computer 1 defines a virtual space including the artist object TLO and the virtual viewpoint of the determined fan object FO1. "Communication" may include the fan F and the artist TL communicating with each other using speech or through actions, via the objects associated with each of them. "Communication" may also include one of the fan F and the artist TL simply observing the actions of the other, or simply listening to the other.
Here, specific examples of "communication" between a fan and the artist are given. "Communication" includes, for example, a handshake session in which the artist object and a specific fan object shake hands in the virtual space, as shown in the virtual space image VG, while enjoying a conversation. "Communication" also includes a clapping session in which the artist object and a specific fan object clap hands in the virtual space, a cheering session in which fan objects cheer the artist object after a live show in the virtual space, a photo session in which the artist object and a specific fan object take a photo together in the virtual space, and the like. Handshake sessions come in various forms, such as a conveyor-style handshake session in which a plurality of fan objects move automatically through the virtual space, as if riding on a conveyor belt, and shake hands with the artist object in turn.
A fan object may hold an item (prop) during a live show, handshake session, or the like of an artist in the virtual space. Such props include gifts for the artist, such as glow sticks of a specific color or bouquets. A prop may also be something worn by the fan object, for example a T-shirt related to the artist.
An "object" includes an avatar based on the artist TL or a fan F. For example, an avatar includes an image that mimics the artist TL or fan F, or an image representing a humanoid object corresponding to the artist TL or fan F. The object may be a simple character icon representing the artist TL or fan F, or an elaborate one. The object may include an object corresponding to at least a portion of the body of the artist TL or fan F (e.g., the whole body, head, hands, feet, etc.). Furthermore, an avatar may also be an image that mimics an animal or the like associated with the artist TL or fan F.
The VR computer 1 acquires the expression information of the artist TL and the expression information of the fan Fa who is the communication target of the artist TL. "Expression information" includes information expressing the emotions or thoughts of the artist TL and the fan F through posture, language, or the like. The expression information includes, for example, at least one of the actions and voices of the artist TL and the fan F. The expression information may also include information related to facial expressions and mouth movements representing smiling, laughing, sadness, or the like.
The VR computer 1 generates a virtual space image reflecting the expressions of the artist TL and the fan Fa based on the acquired expression information. That is, the VR computer 1 generates a new virtual space image each time the expression of the artist TL or the fan Fa changes.
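The regenerate-on-change behavior described above can be sketched as a simple change tracker: a frame is rendered only when a user's expression information actually differs from the last acquired value. This is an illustrative sketch only; the class names are assumptions, not the patent's implementation.

```python
class ImageGenerator:
    """Stands in for the image generation that produces virtual space images."""
    def __init__(self):
        self.frames = []

    def render(self, expressions):
        # Record a snapshot of all current expressions as one "frame".
        self.frames.append(dict(expressions))

class ExpressionTracker:
    """Re-renders the virtual space image each time an expression changes."""
    def __init__(self, renderer):
        self.renderer = renderer
        self.expressions = {}

    def update(self, user_id, expression):
        # Only generate a new image when the expression actually changed.
        if self.expressions.get(user_id) != expression:
            self.expressions[user_id] = expression
            self.renderer.render(self.expressions)

gen = ImageGenerator()
tracker = ExpressionTracker(gen)
tracker.update("artist", "smile")
tracker.update("fan_a", "wave")
tracker.update("fan_a", "wave")   # unchanged expression: no new frame
```

After these three updates only two frames exist, since the repeated "wave" produces no new image.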
Therefore, the fan Fa can communicate with the artist TL while checking the movement of the objects in the virtual space via the virtual space image VG displayed on the display unit 30 of the HMD 5a. In addition to the movement of the objects in the virtual space, the fan Fa can communicate with the artist TL by voice. For example, the voice of the artist TL acquired by the microphone 3 is transmitted as voice information to the operator terminal 4. The voice information of the artist TL is then sent to the HMD 5a via the VR computer 1, and a speaker (not shown) provided in the HMD 5a outputs the voice of the artist TL. Conversely, the voice of the fan Fa acquired by a microphone (not shown) provided in the HMD 5a is transmitted to the VR computer 1 as voice information. The voice information of the fan Fa is transmitted to the headphone monitor 10 used by the artist TL via the operator terminal 4, and the headphone monitor 10 outputs the voice of the fan Fa.
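The two voice routes described above (artist to fan, and fan to artist) can be summarized in a small routing table. The device names mirror the reference numerals in the text; the table itself is a simplification introduced only for illustration.

```python
# Ordered device chains each voice passes through, per the description above.
ROUTES = {
    # artist voice: microphone 3 -> operator terminal 4 -> VR computer 1 -> HMD speaker
    "artist": ["microphone_3", "operator_terminal_4", "vr_computer_1", "hmd_speaker"],
    # fan voice: HMD microphone -> VR computer 1 -> operator terminal 4 -> headphone monitor 10
    "fan": ["hmd_microphone", "vr_computer_1", "operator_terminal_4", "headphone_monitor_10"],
}

def route(speaker):
    """Return the ordered list of devices a given speaker's voice traverses."""
    return ROUTES[speaker]
```

Note that both routes pass through the VR computer 1 and the operator terminal 4, but in opposite orders, which matches the asymmetric description above.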
In this way, the virtual space providing system 100 provides a virtual space image that reflects the movements of the fan Fa and the artist TL in real space, from which the movements of the fan object FO1 and the artist object TLO can be grasped. The virtual space providing system 100 also outputs the voices of both in synchronization with the movements of the fan object FO1 and the artist object TLO.
As described above, in the virtual space providing system 100, a fan object that can communicate with the artist object is determined, and a virtual space including the artist object and the virtual viewpoint of the determined fan object is defined. In the defined virtual space, at least part of each of the artist object and the fan object (a whole-body object, hand object, head object, foot object, or the like) is controlled based on the expression information. Therefore, when the two communicate, interference by other fans (fan objects) can be prevented. Accordingly, the relationship between users communicating via the virtual space can be made firmer, and the quality of the users' cognitive experience can be improved.
Hereinafter, a specific configuration of the virtual space providing system 100 will be described.
As shown in fig. 1, the virtual space providing system 100 illustratively includes: a camera 2c that acquires 3D image information (image information) generated by shooting a predetermined range including the artist TL; a microphone 3 that acquires the voice information of the artist TL; and a headphone monitor 10 that outputs the voice of the fan F. The virtual space providing system 100 also illustratively includes an operator terminal 4 that receives the voice information of the artist TL from the microphone 3 and the image information of the artist TL from the camera 2c, and transmits them to the VR computer 1 via a communication network N. The communication network N is realized by, for example, the Internet, a network such as a mobile telephone network, a LAN (Local Area Network), or a combination of these.
Fig. 2 is a diagram showing an example of an image displayed on the operator terminal 4 according to the embodiment of the present invention. The operator O shown in fig. 1 is a person who manages and operates the virtual space providing system 100. For example, by operating the operator terminal 4, the operator O can select a fan who can communicate with the artist TL, or control the content of their communication.
As shown in fig. 2, the display unit 400 of the operator terminal 4 operated by the operator O displays an expression selection button B1 for selecting the content of the expression of the artist object TLO corresponding to the expression information of the artist TL, and a scroll bar SB for controlling the volume at which the voice of the fan F is output from the headphone monitor 10 used by the artist TL.
The display unit 400 also displays a start button B3 for starting communication between a specific fan object and the artist object TLO, a forced end button B5 for forcibly ending that communication, and the period (for example, the remaining time) TM during which communication between the specific fan object and the artist object TLO is possible. Instructions from the operator O input via the operator terminal 4 (for example, pressing of buttons B1, B3, or B5, or operation of the scroll bar SB) are transmitted as instruction information to the VR computer 1 via the communication network N shown in fig. 1.
Further, the operator O can change the communication period of the artist object TLO and the fan object in the virtual space by operating the operator terminal 4. For example, a predetermined communication period may be changed. The remaining time after the start of communication may be increased or may be decreased. In this case, instruction information based on the operation of the operator O is transmitted from the operator terminal 4 to the VR computer 1, and the VR computer 1 executes the change processing of the communication period.
When there are a plurality of fan objects that may communicate with the artist object TLO, the operator O can change the order of communication by operating the operator terminal 4. In this case, instruction information based on the operation of the operator O is transmitted from the operator terminal 4 to the VR computer 1, and the VR computer 1 executes a process of changing the order of communication.
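The two operator controls just described, changing the remaining communication time and changing the order in which waiting fan objects communicate, can be sketched as follows. The class and method names are assumptions for illustration, not taken from the patent.

```python
class SessionController:
    """Sketch of the VR computer's handling of operator instructions."""
    def __init__(self, remaining_seconds, waiting_fans):
        self.remaining = remaining_seconds
        self.queue = list(waiting_fans)   # communication order of fan objects

    def change_period(self, delta_seconds):
        """Increase or decrease the remaining communication time.
        Clamping at zero is an assumption of this sketch."""
        self.remaining = max(0, self.remaining + delta_seconds)

    def reorder(self, new_order):
        """Change the order of communication; must be a permutation
        of the currently waiting fan objects."""
        assert sorted(new_order) == sorted(self.queue)
        self.queue = list(new_order)

ctrl = SessionController(60, ["FO1", "FO2", "FO3"])
ctrl.change_period(+30)                 # operator extends the session
ctrl.change_period(-120)                # large decrease is clamped at zero
ctrl.reorder(["FO3", "FO1", "FO2"])     # operator changes who goes next
```

In the system described above, these methods would be invoked when the corresponding instruction information arrives from the operator terminal 4.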
Returning to fig. 1, the virtual space providing system 100 illustratively includes: the VR computer 1, which provides a virtual space including an artist object TLO associated with the artist TL and the virtual viewpoint of a fan object associated with a fan F; and an HMD 5, which displays a virtual space image based on the virtual space provided by the VR computer 1. When the fans Fa and Fb are not distinguished from each other, part of the reference numeral is omitted and they are simply referred to as "fan F". Likewise, when the HMDs 5a and 5b are not distinguished from each other, they are simply referred to as "HMD 5". In the example shown in fig. 1, the VR computer 1 is disposed outside the HMD 5, but the configuration is not limited to this, and the VR computer 1 may be built into the HMD 5. In other words, the HMD 5 may have each function of the VR computer 1 described later.
The HMD 5 is worn on the head of a fan F, for example, and displays a virtual space image (e.g., a three-dimensional image) based on the virtual space, corresponding to the movement of the head of the fan F. The display unit 30 of the HMD 5 can display virtual space image information, text information, and the like transmitted from the VR computer 1. The display unit 30 may be disposed in front of the eyes of the fan F. The display unit 30 includes, for example, a liquid crystal display or an organic EL (Electro Luminescence) display. In addition, the display unit 30 may include a transmissive or semi-transmissive display device. The display unit 30 can simultaneously display at least part of the virtual space image and a predetermined range of the real space captured by a camera (not shown) provided in the HMD 5. Although not shown, the function of the display unit 30 of the HMD 5 may instead be provided by a display unit of a mobile terminal separate from the HMD 5, such as a smartphone or laptop computer, or by a fixed display device such as a display monitor separate from the HMD 5.
The virtual space providing system 100 illustratively further includes: a monitoring terminal 6 used by a staff member Sa who is present on site; a live video distribution terminal 7 that distributes live video information, obtained by shooting the state of the live event using the camera 2b, to a live video display terminal 8 via the communication network N; the live video display terminal 8, which receives and displays the live video information distributed from the live video distribution terminal 7; and an artist terminal 9 that displays a virtual space image so that the artist TL can confirm how the artist object associated with the artist TL communicates with fan objects in the virtual space.
When a plurality of fans F wearing HMDs 5 are present, the monitoring terminal 6 can display, for example, a virtual space image based on the virtual viewpoint of the fan object associated with fan Fa and a virtual space image based on the virtual viewpoint of the fan object associated with fan Fb. Since the staff member Sa can check the virtual space image seen by each fan, it is possible to confirm whether there is a problem with the wearing state of each fan F's HMD 5, for example its orientation or inclination.
Fig. 3(a) is a diagram showing an example of the virtual space image VG1 displayed on the display unit 900 of the artist terminal 9 shown in fig. 1 when the artist object TLO and one fan object FO1 communicate in the virtual space. Fig. 3(b) is a diagram showing an example of the virtual space image VG2 displayed on the display unit 900 of the artist terminal 9 when the artist object TLO and a plurality of fan objects FO1 and FO2 can communicate in turn in the virtual space. As shown in figs. 3(a) and 3(b), the virtual space image displayed on the display unit 900 of the artist terminal 9 may include a plurality of virtual space images corresponding to the fields of view seen from respective positions (virtual viewpoints) in the virtual space corresponding to a given venue. The virtual space image displayed on the display unit 900 of the artist terminal 9 may be an image generated by and transmitted from the VR computer 1, or an image generated in the artist terminal 9 itself.
The display unit 900 of the artist terminal 9 displays information on the fan F currently communicating with the artist TL in the virtual space, for example, the fan's ID, name, face icon, and emotion, the color of the glow stick held by the fan, the number of live shows the fan has attended, whether the fan has sent a gift to the artist TL, the fan's payment amount, and the like. The fan information may be displayed together with the virtual space image, or may be displayed instead of the virtual space image.
Fig. 4 is a block diagram showing an example of a hardware configuration of a VR computer according to the embodiment of the present invention. As shown in fig. 4, the VR computer 1 illustratively includes a CPU (Central Processing Unit) 11, a Memory 12 including a ROM (Read Only Memory), a RAM (Random Access Memory), and the like, an input Unit 13, an output Unit 14, a recording Unit 15, a communication Unit 16, and a bus 17.
The CPU11 executes various processes in accordance with a program recorded in the memory 12 or a program loaded from the recording section 15 into the memory 12.
The memory 12 also stores data and the like necessary for the CPU11 to execute various processes as appropriate. The CPU11 and the memory 12 are connected to each other by a bus 17. The input unit 13, the output unit 14, the recording unit 15, and the communication unit 16 are connected to the bus 17.
The input unit 13 inputs various information in response to instruction operations by a person who operates the VR computer 1 (for example, the staff member Sa or other on-site staff). The input unit 13 is configured by various buttons, a touch panel, a microphone, or the like. The input unit 13 may be realized by an input device such as a keyboard or a mouse that is independent of the main body housing the other parts of the VR computer 1.
The output unit 14 outputs image data and audio data. The output unit 14 is constituted by, for example, a display, a speaker, and the like. The image data and the sound data may be output as images and music from a display, a speaker, or the like in a manner recognizable by a user.
The recording unit 15 stores various data. The recording unit 15 is formed of a semiconductor Memory such as a DRAM (Dynamic Random Access Memory).
The communication unit 16 realizes communication with another device. For example, the communication unit 16 communicates with other terminals via the network N.
In the VR computer 1, a drive (not shown) is provided as necessary. The drive may be equipped with removable media such as magnetic disks, optical disks, and magneto-optical disks. Various data, such as programs and image data read from the removable medium by the drive, are installed in the recording section 15 as necessary.
The operator terminal 4, the monitor terminal 6, the live video distribution terminal 7, the live video display terminal 8, or the artist terminal 9 may include at least a part of the above-described respective configurations included in the VR computer 1.
Fig. 5 is a block diagram showing an example of the functional configuration of the VR computer according to the embodiment of the present invention. The VR computer 1 illustratively includes: an information processing section 111 that executes processing for providing the virtual space; and a database (DB) 112 that records various information required to perform that processing and information on its results. The information processing section 111 can be realized, for example, by causing the CPU 11 to execute a program stored in the memory 12 or the recording unit 15 shown in fig. 4. The DB 112 corresponds to, for example, the recording unit 15 shown in fig. 4.
The information processing section 111 functionally includes: an object association unit 113 that associates at least one object with each of a plurality of users including an artist and one or more fans; and an object determination unit 114 that determines a fan object (2nd object), associated with a fan (2nd user) among the plurality of users, that can communicate with an artist object (1st object) associated with the artist (1st user) among the plurality of users. The information processing section 111 also functionally includes: a virtual space definition unit 115 that defines a virtual space including the artist object and the virtual viewpoint of the determined fan object; an acquisition unit 116 that acquires at least expression information related to the expressions of the artist and the fan; and an object control unit 117 that controls the artist object and the fan object according to their expressions, based on the acquired expression information. Details of these functions will be described later.
The information processing section 111 further includes an image generating section 118. The image generation unit 118 generates a virtual space image based on the virtual space data.
Fig. 6 is a diagram conceptually showing an example of the virtual space according to the embodiment of the present invention. As shown in fig. 6, the virtual space 50 defined by the virtual space definition unit 115 includes, for example, a hemisphere centered on a center point CP. In the virtual space 50, an XYZ coordinate system with the center point CP as the origin is defined. For example, when the HMD 5 shown in fig. 1 is activated, a virtual camera 60 (virtual viewpoint; corresponding, for example, to a camera included in the HMD 5) is disposed at the center point CP of the virtual space 50. An image is displayed on the display unit 30 of the HMD 5 based on the image information acquired by the virtual camera 60. As shown in fig. 1, the virtual camera 60 can move in accordance with the movement (change in position and inclination) of the HMD 5 worn on the head of the fan F in real space. That is, the virtual camera 60 in the virtual space 50 moves in accordance with the movement of the HMD 5 in real space.
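The camera behavior described above, starting at the center point CP (the origin of the XYZ coordinate system) and then tracking the HMD's position and inclination, can be sketched as follows. The `Pose` fields and class names are illustrative assumptions; a real implementation would track full 3D orientation rather than a single yaw angle.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Position in the XYZ coordinate system plus an inclination angle."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    yaw: float = 0.0   # inclination about the vertical axis, in degrees

class VirtualCamera:
    def __init__(self):
        # Placed at the center point CP (the origin) when the HMD starts.
        self.pose = Pose()

    def follow_hmd(self, hmd_pose):
        """Mirror the HMD's movement (change of position and inclination)."""
        self.pose = Pose(hmd_pose.x, hmd_pose.y, hmd_pose.z, hmd_pose.yaw)

cam = VirtualCamera()
cam.follow_hmd(Pose(0.5, 1.6, -0.2, yaw=30.0))
```

Each time a new HMD pose is reported, `follow_hmd` moves the virtual viewpoint, which is what ties the real-space head movement to the image shown on the display unit 30.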
Here, the position of the virtual camera 60 corresponds to the viewpoint of the fan F (user) in the virtual space 50. The image generation unit 118 defines a visual field region 80 in the virtual space 50 based on the reference line of sight SG from the virtual camera 60. For example, the visual field region 80 corresponds to the region seen by the fan F wearing the HMD5 in the virtual space 50.
In this way, the image generation unit 118 arranges the virtual camera 60 at, for example, the center point CP of the virtual space 50, and sets the line of sight (reference line of sight SG) of the virtual camera 60 in the direction in which the fan F faces. The image generation unit 118 thereby generates field-of-view image data for generating a field-of-view image (virtual space image VG3). Then, when the movement of the HMD5 changes, the image generation unit 118 generates a new virtual space image corresponding to the change.
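As a rough illustration of how the reference line of sight SG and the visual field region 80 might be derived from the HMD's orientation, the sketch below models the field region as a cone around the line of sight. The yaw/pitch convention and the 55-degree half-angle are assumptions for illustration, not values taken from the patent:

```python
import math

def reference_sight(yaw, pitch):
    """Unit vector of the reference line of sight SG from the HMD's yaw and
    pitch angles (radians).  Assumed convention: yaw rotates about the
    vertical Y axis, pitch tilts toward +Y; (0, 0) looks along +Z."""
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

def in_visual_field(camera_pos, sight, point, half_angle_deg=55.0):
    """True if `point` lies inside the visual field region 80, modeled here
    as a cone of `half_angle_deg` degrees around the reference line of sight."""
    v = tuple(p - c for p, c in zip(point, camera_pos))
    n = math.sqrt(sum(x * x for x in v))
    if n == 0.0:
        return True  # the point coincides with the viewpoint
    cos_angle = sum(s * x for s, x in zip(sight, v)) / n
    return cos_angle >= math.cos(math.radians(half_angle_deg))
```

A renderer would then draw only the objects for which `in_visual_field` returns True, regenerating the image whenever the HMD's yaw or pitch changes.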
The virtual camera 60 is not limited to the camera of the HMD5 worn on the head of the fan F in fig. 1. For example, the virtual camera 60 may correspond to the camera of the HMD5 when the HMD5 is worn on the artist's head. As shown in figs. 3(a) and 3(b), virtual space images corresponding to the fields of view from various viewpoints (virtual cameras) in a given virtual space may be generated.
Returning to fig. 5, the DB112 illustratively records spatial information SI including one or more virtual space data templates for defining the virtual space 50 shown in fig. 6, object information OI relating to objects, and performance information EI. In the DB112, the above-described various information can be recorded in association with each user based on information relating to that user.
< virtual space provision processing >
An example of the virtual space providing process according to the embodiment of the present invention will be described with reference to fig. 7 to 10. Fig. 7 is a flowchart showing an example of the virtual space providing process according to the embodiment of the present invention.
(step S1)
The object associating unit 113 of the VR computer 1 shown in fig. 5 associates objects with a plurality of users, respectively. Referring to the examples of fig. 1 and 3, the object associating unit 113 associates an artist object TLO with an artist TL, a fan object FO1 with a fan Fa, and a fan object FO2 with a fan Fb, for example, based on the object information OI shown in fig. 5.
(step S3)
The object specifying unit 114 of the VR computer 1 shown in fig. 5 specifies a fan object FO associated with a fan F among a plurality of users, which can communicate with the artist object TLO associated with the artist TL among the plurality of users. Referring to the example of fig. 1 and 3, the object specifying unit 114 specifies the fan object FO1 associated with the fan Fa as the communication target of the artist object TLO.
(step S5)
As shown in the example of the virtual space shown in fig. 6, the virtual space definition unit 115 of the VR computer 1 shown in fig. 5 defines a virtual space including the artist object TLO and the virtual viewpoint of the identified fan object FO1 (for example, corresponding to a camera provided in HMD5a used by fan Fa).
(step S7)
The acquisition unit 116 of the VR computer 1 shown in fig. 5 acquires at least performance information on the performances of the artist TL and the fan Fa. First, the camera 2c shown in fig. 1 captures a predetermined range including the artist TL, generates 3D image information including the movement of the artist TL and the like, and outputs the 3D image information to the operator terminal 4 as performance information. The operator terminal 4 transmits the performance information to the VR computer 1 via the communication network N. Similarly, the camera 2a captures a predetermined range including the fan Fa, generates 3D image information including the movement of the fan Fa, and outputs the 3D image information to the VR computer 1 as performance information. The microphone 3 acquires the voice of the artist TL and outputs the voice information to the operator terminal 4 as performance information. The operator terminal 4 transmits this performance information to the VR computer 1 via the communication network N. Further, for example, a microphone (not shown) provided in the camera 2a acquires the voice of the fan Fa and outputs the voice information to the VR computer 1 as performance information. A microphone for acquiring the fan's voice may also be provided as a structure separate from the camera 2a.
In this way, the acquisition unit 116 acquires the 3D image information and the voice information of the artist TL as the performance information of the artist TL, and acquires the 3D image information and the voice information of the fan Fa as the performance information of the fan Fa. The acquisition unit 116 may acquire the performance information of the artist TL and the fan Fa periodically or at arbitrary timings.
(step S9)
The object control unit 117 of the VR computer 1 shown in fig. 5 controls the artist object and the fan object in accordance with the performances, based on the acquired performance information.
Fig. 8 is a diagram for explaining an example of handshake in a virtual space according to the embodiment of the present invention. Fig. 8(a) is a diagram showing an example of the handshake operation between the artist and the fan in real space. Fig. 8(b) is a diagram showing an example of an image displayed by the HMD according to the embodiment of the present invention. As shown in fig. 8(a), when the artist TL moves his hand H1 downward and the fan Fa moves his hand H2 downward in the real space and both perform a handshake operation, the handshake operation of the artist TL and the fan Fa is reflected in the artist object TLO and the fan object FO1 in the virtual space as shown in fig. 8 (b).
Specifically, the artist object TLO includes an object (3 rd object) corresponding to at least a part of the body of the artist TL. The object corresponding to at least a part of the body of the artist TL includes a hand object HO1 (1 st hand object) corresponding to the hand of the artist TL. The fan object FO1 includes an object (4 th object) corresponding to at least a part of the body of the fan Fa. The object corresponding to at least a part of the body of the fan Fa includes a hand object HO2 (2 nd hand object) corresponding to the hand of the fan Fa.
The object control unit 117 controls the contact state of the hand object HO1 and the hand object HO2 based on the orientation or inclination of the two objects when they come into contact with each other in the virtual space. More specifically, when the hand object HO1 and the hand object HO2 satisfy a predetermined condition at the time of contact, the object control unit 117 performs control so that the contact state between the hand object HO1 and the hand object HO2 is maintained for a predetermined period.
For example, when the hand object HO1 and the hand object HO2 are positioned below predetermined reference positions at the time of contact, the object control unit 117 performs control so that the artist object and the specific fan object shake hands. The predetermined reference positions for the handshake between the artist object TLO and the fan object FO may be positions at which the handshake action performed by the two objects appears natural. The predetermined reference position may be, for example, a position corresponding to the shoulder (shoulder object) of the artist object TLO for the hand object HO1, and a position corresponding to the shoulder (shoulder object) of the fan object FO for the hand object HO2. The predetermined reference positions are not limited to the above and may be, for example, positions corresponding to the chests of the artist object TLO and the fan object FO.
For example, the predetermined reference position may be set to be different positions for the artist object TLO and the fan object FO. For example, the reference position of the artist object TLO may be a position corresponding to the shoulder, and the reference position of the fan object FO may be a position corresponding to the chest.
The object control section 117 may consider other conditions as conditions for performing the handshake action. For example, the object control unit 117 may perform control such that the artist object and the specific fan object handshake when the height of the 1 st joint (the first joint from the fingertip) of the middle finger object is lower than the height of the 2 nd joint (the second joint from the fingertip) of the middle finger object among the plurality of finger objects provided in the hand objects HO1 and HO 2. In addition, fingers other than the middle finger among the plurality of finger objects may be employed to determine whether a condition for performing a handshake action is satisfied.
According to this configuration, it is possible to realize a handshake session in which an artist object and a specific fan object shake hands in the virtual space while enjoying a conversation or the like. Accordingly, the relationship between users communicating via the virtual space can be made firmer, and the quality of the users' experience can be further improved. Further, by referring to the positions of the finger objects in addition to the positions of the hand objects, whether to perform the handshake action can be determined with higher accuracy.
"when the hand object HO1 and the hand object HO2 make contact" includes before (e.g., just before), when, or after (e.g., just after) the two objects make contact. The "predetermined period" is not particularly limited, and may be, for example, several seconds or several tens of seconds. The predetermined period may be changed, and for example, the predetermined period may be changed by operating the operator terminal 4 with the operator O shown in fig. 1.
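The handshake condition described above (both hand objects below their reference positions at the moment of contact, and on each hand the 1 st middle-finger joint lower than the 2 nd) can be sketched roughly as follows. This is an illustrative reading of the text, not the patent's actual implementation; the dictionary field names (`y`, `mid_joint1_y`, `mid_joint2_y`) and the shoulder-height reference values are hypothetical:

```python
def may_handshake(hand1, hand2, ref1_y, ref2_y, touching):
    """Return True if the two hand objects should enter (and hold) a
    handshake: both hands are below their reference positions (e.g. the
    shoulder objects) at the moment of contact, and on each hand the
    1st middle-finger joint is lower than the 2nd (fingers pointing down).
    Field names are illustrative assumptions."""
    if not touching:
        return False
    below_reference = hand1['y'] < ref1_y and hand2['y'] < ref2_y
    fingers_pointing_down = (hand1['mid_joint1_y'] < hand1['mid_joint2_y']
                             and hand2['mid_joint1_y'] < hand2['mid_joint2_y'])
    return below_reference and fingers_pointing_down
```

When the condition holds, the object control unit would then maintain the contact state for the predetermined period (several seconds to several tens of seconds, per the text above).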
Fig. 9 is a diagram for explaining an example of a clap (high-five) in a virtual space according to the embodiment of the present invention. Fig. 9(a) is a diagram showing an example of the clapping action of an artist and a fan in real space. Fig. 9(b) is a diagram showing an example of an image displayed by the HMD according to the embodiment of the present invention. As shown in fig. 9(a), when the artist TL raises his hand H1 upward and the fan Fa raises his hand H2 upward in real space and both perform a clapping action, the clapping action of the artist TL and the fan Fa is reflected in the artist object TLO and the fan object FO1 in the virtual space as shown in fig. 9(b).
When the hand object HO1 and the hand object HO2 satisfy a predetermined condition at the time of contact, the object control unit 117 performs control so that the hand object HO1 and the hand object HO2 are separated immediately after coming into contact with each other.
For example, when the hand object HO1 and the hand object HO2 are positioned above predetermined reference positions at the time of contact, the object control unit 117 performs control so that the artist object and the specific fan object clap hands. The predetermined reference positions for the clap between the artist object TLO and the fan object FO may be positions at which the clapping action performed by the two objects appears natural. The predetermined reference position may be, for example, a position corresponding to the shoulder (shoulder object) of the artist object TLO for the hand object HO1, and a position corresponding to the shoulder (shoulder object) of the fan object FO for the hand object HO2. The predetermined reference positions are not limited to the above and may be, for example, positions corresponding to the chests of the artist object TLO and the fan object FO. The predetermined reference positions may also be set to different positions for the artist object TLO and the fan object FO; for example, the reference position of the artist object TLO may be a position corresponding to the shoulder, and the reference position of the fan object FO may be a position corresponding to the chest.
The object control section 117 may consider other conditions as conditions for performing the clapping action. For example, the object control unit 117 may perform control so that the artist object and the specific fan object clap hands when, among the plurality of finger objects provided in the hand objects HO1 and HO2, the height of the 1 st joint of the middle finger object is higher than the height of the 2 nd joint of the middle finger object. In addition, fingers other than the middle finger among the plurality of finger objects may be employed to determine whether the condition for performing the clapping action is satisfied.
According to this configuration, it is possible to realize a clapping session in which an artist object and a specific fan object clap hands in the virtual space while enjoying a conversation or the like. Accordingly, the relationship between users communicating via the virtual space can be made firmer, and the quality of the users' experience can be further improved. Further, by referring to the positions of the finger objects in addition to the positions of the hand objects, whether to perform the clapping action can be determined with higher accuracy.
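The clap condition mirrors the handshake condition with the inequalities reversed: both hands above their reference positions at the moment of contact, with the 1 st middle-finger joint higher than the 2 nd. A hypothetical sketch (field names and reference values are illustrative assumptions, not the patent's implementation):

```python
def may_clap(hand1, hand2, ref1_y, ref2_y, touching):
    """Return True if the two hand objects should clap (touch, then separate
    immediately): both hands are above their reference positions at the
    moment of contact, and on each hand the 1st middle-finger joint is
    higher than the 2nd (fingers pointing up).  Field names are
    illustrative assumptions."""
    if not touching:
        return False
    above_reference = hand1['y'] > ref1_y and hand2['y'] > ref2_y
    fingers_pointing_up = (hand1['mid_joint1_y'] > hand1['mid_joint2_y']
                           and hand2['mid_joint1_y'] > hand2['mid_joint2_y'])
    return above_reference and fingers_pointing_up
```

When this returns True, the object control unit would separate the two hand objects immediately after contact, rather than holding the contact state as in the handshake case.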
Fig. 10 is a diagram for explaining an example of a conveyor-type handshake in a virtual space according to the embodiment of the present invention. As shown in fig. 10, in the conveyor-type handshake session, a plurality of fan objects FO0 to FO7 can perform a handshake or conversation with the artist object TLO in sequence while automatically moving in the direction indicated by the broken-line arrow A, as if riding on a conveyor belt. Each of the fan objects FO0 to FO7 can communicate when its distance from the artist object TLO is a fixed distance L. For example, in the example shown in fig. 10, the object specifying unit 114 shown in fig. 5 calculates the distance between the artist object TLO and each of the plurality of fan objects FO0 to FO7, and specifies the fan object FO1 located at the fixed distance L from the artist object TLO. The virtual space definition unit 115 then defines a virtual space in which the artist object TLO and the fan object FO1 can communicate with each other. The distance L may be changed as appropriate.
According to this configuration, the communication between the artist object and the fan object can be automated, the load on the operator can be reduced, and the number of fans that can communicate with the artist can be increased.
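The conveyor-type determination of fig. 10 (compute the distance from the artist object to each fan object and pick the one at the fixed distance L) can be sketched as follows. The 2-D position tuples, the function name, and the tolerance parameter are illustrative assumptions:

```python
import math

def specify_communicable_fan(artist_pos, fan_positions, L, tol=0.05):
    """Sketch of the object specifying unit's conveyor-type determination:
    among the fan objects FO0..FOn, return the id of the first one whose
    distance to the artist object is (approximately) the fixed distance L.
    Positions are (x, z) tuples; `tol` is a hypothetical tolerance."""
    for fan_id, pos in fan_positions.items():
        if abs(math.dist(artist_pos, pos) - L) <= tol:
            return fan_id
    return None  # no fan object is currently at the communication distance
```

Run once per frame as the fan objects advance along the conveyor path, this yields each fan object in turn as it reaches the distance L, which is what allows the session to proceed automatically without operator intervention.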
According to the embodiments described above, a fan object that can communicate with an artist object is specified, and a virtual space including the artist object and the virtual viewpoint of the specified fan object is defined. In the defined virtual space, at least the artist object and the specified fan object are controlled based on the performance information. Therefore, while the two communicate, interference by other fans (fan objects) can be prevented. Accordingly, the relationship between users communicating via the virtual space can be made firmer, and the quality of the users' experience can be improved.
(other embodiments)
The above embodiments are intended to facilitate understanding of the present invention and are not to be construed as limiting the present invention. The present invention can be modified and improved (for example, the respective embodiments are combined and a part of the structure of the respective embodiments is omitted) without departing from the scope of the invention, and the invention also includes equivalents thereof.
As shown in figs. 3(b) and 8(b), identification information of a fan (for example, text information T1, T2, T3 of fan names) may be displayed near the fan object associated with that fan in the virtual space. Specifically, the acquisition unit 116 shown in fig. 5 acquires the identification information of the fan, and the object control unit 117 arranges the identification information corresponding to the fan object in the vicinity of the fan object in the virtual space based on the acquired identification information. The identification information of the fan may be stored in the DB112 in advance, or may be input by the operator O or the like.
In the example of fig. 1, in the virtual space providing system 100, the VR computer 1 is shared by a plurality of HMDs 5, and a separate virtual space is typically defined for each fan. Alternatively, different VR computers may be used for the plurality of HMDs 5. Further, the virtual space providing system 100 provides a virtual space image based on a virtual space containing the artist object TLO associated with the artist TL and the virtual viewpoint of the specified fan object FO1 associated with the fan Fa. However, the virtual space providing system 100 may, in response to a request from the operator O, the staff member Sa, the artist TL, or a fan F, provide a virtual space image that also includes the fan object of a fan different from the fan Fa (for example, by switching the displayed virtual space image).
The virtual space providing system 100 includes a VR computer 1, an operator terminal 4, a monitor terminal 6, a live video distribution terminal 7, a live video display terminal 8, and an artist terminal 9. However, the virtual space providing system 100 may include a device that combines two or more of the above devices. For example, the live video display terminal 8 and the artist terminal 9 used by the artist TL may be one terminal device.
In the above embodiments, the communication between one artist TL and one or more fans F has been described, but the present invention is not limited to these embodiments. For example, the present invention may also be applied to communication between a plurality of artists TL and a plurality of fans F. That is, when a plurality of artist objects TLO and a plurality of fan objects FO are arranged in a certain virtual venue, the venue can provide, for example, a virtual space X in which an artist X object and a fan X object communicate, and a virtual space Y, different from the virtual space X, in which an artist Y object different from the artist X object and a fan Y object different from the fan X object communicate.
As described above, the actions (performance information) of the artist TL and the fan F are acquired by cameras provided around the artist TL and the fan F, respectively, but the present invention is not limited to this embodiment. The actions of the artist TL and the fan F may also be acquired by controllers (not shown) held by the artist TL and the fan F, respectively. The controller may be configured to be worn on a part of the body or clothing of the artist TL or the fan F. The controller may be a glove-type controller corresponding to the hand of the artist TL or the fan F, or a stick-type controller. For example, a controller connected by wire or wirelessly to the VR computer 1 shown in fig. 1 includes a motion sensor. The controller may receive an operation from the artist TL or the fan F for controlling the position or action of at least a part of the object disposed in the virtual space. More specifically, the controller is mounted, for example, on the hand of the artist TL or the fan F and detects the motion of that hand. For example, the motion sensor provided in the controller detects the rotation direction or rotation speed of the hand. The detection result is transmitted to the VR computer 1 as a detection signal.
Description of the reference symbols
1 … VR computer, 2a, 2b, 2c … camera, 3 … microphone, 4 … operator terminal, 5a, 5b … HMD, 6 … monitoring terminal, 7 … live video distribution terminal, 8 … live video display terminal, 9 … artist terminal, 10 … headphone monitor, 11 … CPU, 12 … memory, 13 … input unit, 14 … output unit, 15 … recording unit, 16 … communication unit, 17 … bus, 30, 400, 900 … display unit, 100 … virtual space providing system, 111 … information processing unit, 112 … Database (DB), 113 … object association unit, 114 … object determination unit, 115 … virtual space definition unit, 116 … acquisition unit, 117 … object control unit, 118 … image generation unit.

Claims (11)

1. A virtual space providing system for providing a virtual space, comprising:
an object association unit that associates at least one object with each of a plurality of users;
an object determination unit that determines a 2 nd object associated with a 2 nd user of the plurality of users, which is communicable with a 1 st object associated with a 1 st user of the plurality of users;
a virtual space defining unit that defines a virtual space including the 1 st object and the determined virtual viewpoint of the 2 nd object;
an acquisition unit that acquires performance information relating to the performance of each of the 1 st user and the 2 nd user; and
and an object control unit that controls the 1 st object and the 2 nd object in accordance with the performances, based on the acquired performance information.
2. The virtual space providing system according to claim 1,
the 1 st object comprises a 3 rd object corresponding to at least a part of the 1 st user's body,
the 2 nd object comprises a 4 th object corresponding to at least a portion of the 2 nd user's body,
the object control unit controls a contact state of the 3 rd object and the 4 th object according to an orientation or an inclination of the 3 rd object and the 4 th object when the 3 rd object and the 4 th object are in contact with each other.
3. The virtual space providing system according to claim 2,
the 3 rd object is a 1 st hand object corresponding to the 1 st user's hand,
the 4 th object is a 2 nd hand object corresponding to the 2 nd user's hand,
the object control unit performs control so that the contact state of the 1 st hand object and the 2 nd hand object is maintained for a predetermined period when the 1 st hand object and the 2 nd hand object are positioned below a predetermined reference position at the time of contact between the 1 st hand object and the 2 nd hand object.
4. The virtual space providing system according to claim 2,
the 3 rd object is a 1 st hand object corresponding to the 1 st user's hand,
the 4 th object is a 2 nd hand object corresponding to the 2 nd user's hand,
the object control unit performs control so that the 1 st hand object and the 2 nd hand object are separated immediately after coming into contact with each other when the 1 st hand object and the 2 nd hand object are positioned above a predetermined reference position at the time of contact between the 1 st hand object and the 2 nd hand object.
5. The virtual space providing system according to any one of claims 1 to 4,
the acquisition section further acquires identification information of the 2 nd user,
the object control unit arranges identification information of the 2 nd user in the vicinity of the 2 nd object in the virtual space based on the acquired identification information.
6. The virtual space providing system according to any one of claims 1 to 4,
the object specifying unit changes the communication period of the 1 st object and the 2 nd object in the virtual space based on an instruction from an operator.
7. The virtual space providing system according to any one of claims 1 to 4,
the object determination section determines the 2 nd object based on a distance between the 1 st object and each of a plurality of objects.
8. The virtual space providing system according to any one of claims 1 to 4,
the object specifying unit changes an order in which a plurality of objects communicate with the 1 st object based on an instruction from an operator.
9. The virtual space providing system according to any one of claims 1 to 4,
the plurality of objects includes avatars respectively associated with the plurality of users.
10. A virtual space providing method executed by a computer for providing a virtual space, comprising:
associating at least one object with each of a plurality of users;
a step of determining a 2 nd object associated with a 2 nd user of the plurality of users that can communicate with a 1 st object associated with a 1 st user of the plurality of users;
a step of defining a virtual space including the 1 st object and the determined 2 nd object's virtual viewpoint;
acquiring performance information related to the performance of each of the 1 st user and the 2 nd user; and
a step of controlling the 1 st object and the 2 nd object in accordance with the performances, based on the acquired performance information.
11. A program characterized by causing a computer that provides a virtual space to function as:
an object association unit that associates at least one object with each of a plurality of users;
an object determination unit that determines a 2 nd object associated with a 2 nd user of the plurality of users, which is communicable with a 1 st object associated with a 1 st user of the plurality of users;
a virtual space defining unit that defines a virtual space including the 1 st object and the determined virtual viewpoint of the 2 nd object;
an acquisition unit that acquires performance information relating to the performance of each of the 1 st user and the 2 nd user; and
and an object control unit that controls the 1 st object and the 2 nd object in accordance with the performances, based on the acquired performance information.
CN201910904463.3A 2019-06-20 2019-09-24 Virtual space providing system, virtual space providing method, and program Pending CN112115398A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019114718A JP6675136B1 (en) 2019-06-20 2019-06-20 Virtual space providing system, virtual space providing method and program
JP2019-114718 2019-06-20

Publications (1)

Publication Number Publication Date
CN112115398A true CN112115398A (en) 2020-12-22

Family

ID=70001042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910904463.3A Pending CN112115398A (en) 2019-06-20 2019-09-24 Virtual space providing system, virtual space providing method, and program

Country Status (2)

Country Link
JP (1) JP6675136B1 (en)
CN (1) CN112115398A (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023162499A1 (en) * 2022-02-24 2023-08-31 株式会社Nttドコモ Display control device

Citations (8)

Publication number Priority date Publication date Assignee Title
JPH1055257A (en) * 1996-08-09 1998-02-24 Nippon Telegr & Teleph Corp <Ntt> Three-dimensional virtual space display method
US20120077582A1 (en) * 2010-09-24 2012-03-29 Hal Laboratory Inc. Computer-Readable Storage Medium Having Program Stored Therein, Apparatus, System, and Method, for Performing Game Processing
CN103635869A (en) * 2011-06-21 2014-03-12 英派尔科技开发有限公司 Gesture based user interface for augmented reality
JP2018075259A (en) * 2016-11-10 2018-05-17 株式会社バンダイナムコエンターテインメント Game system and program
JP6342024B1 (en) * 2017-02-10 2018-06-13 株式会社コロプラ Method for providing virtual space, program for causing computer to execute the method, and information processing apparatus for executing the program
CN109478092A (en) * 2016-07-12 2019-03-15 富士胶片株式会社 The control device and its working method and working procedure of image display system and head-mounted display
CN109690447A (en) * 2016-08-09 2019-04-26 克罗普股份有限公司 Information processing method, for making computer execute the program and computer of the information processing method
US20190130644A1 (en) * 2017-10-31 2019-05-02 Nokia Technologies Oy Provision of Virtual Reality Content

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JP2015192436A (en) * 2014-03-28 2015-11-02 キヤノン株式会社 Transmission terminal, reception terminal, transmission/reception system and program therefor
JP6869699B2 (en) * 2016-11-10 2021-05-12 株式会社バンダイナムコエンターテインメント Game system and programs
JP2018075260A (en) * 2016-11-10 2018-05-17 株式会社バンダイナムコエンターテインメント Game system and program
JP7355483B2 (en) * 2016-12-15 2023-10-03 株式会社バンダイナムコエンターテインメント Game systems and programs
JP2020027373A (en) * 2018-08-09 2020-02-20 パナソニックIpマネジメント株式会社 Communication method, program, and communication system


Non-Patent Citations (2)

Title
ZHANG Maojun et al., "Research and Implementation of the Virtual Real-Scene Space System HVS", Computer Engineering, no. 07, 20 July 1999 (1999-07-20) *
QI Yue et al., "Construction of Free Virtual Real-Scene Spaces", Journal of Beijing University of Aeronautics and Astronautics, no. 05, 30 June 2003 (2003-06-30) *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116091136A (en) * 2023-01-28 2023-05-09 深圳市人马互动科技有限公司 Telephone marketing method and device based on speaker
CN116091136B (en) * 2023-01-28 2023-06-23 深圳市人马互动科技有限公司 Telephone marketing method and device based on speaker

Also Published As

Publication number Publication date
JP2021002145A (en) 2021-01-07
JP6675136B1 (en) 2020-04-01

Similar Documents

Publication Publication Date Title
US10262461B2 (en) Information processing method and apparatus, and program for executing the information processing method on computer
US11154771B2 (en) Apparatus and method for managing operations of accessories in multi-dimensions
US8788951B2 (en) Avatar customization
US10438394B2 (en) Information processing method, virtual space delivering system and apparatus therefor
US10545339B2 (en) Information processing method and information processing system
WO2020138107A1 (en) Video streaming system, video streaming method, and video streaming program for live streaming of video including animation of character object generated on basis of motion of streaming user
CN109885367B (en) Interactive chat implementation method, device, terminal and storage medium
EP2132650A2 (en) System and method for communicating with a virtual world
US20220233956A1 (en) Program, method, and information terminal device
US10896322B2 (en) Information processing device, information processing system, facial image output method, and program
WO2008109299A2 (en) System and method for communicating with a virtual world
US20180321817A1 (en) Information processing method, computer and program
US20220297006A1 (en) Program, method, and terminal device
JP2023103317A (en) Live communication system using character
CN112115398A (en) Virtual space providing system, virtual space providing method, and program
JP2019106220A (en) Program executed by computer to provide virtual space via head mount device, method, and information processing device
JP2018124981A (en) Information processing method, information processing device and program causing computer to execute information processing method
US20220323862A1 (en) Program, method, and information processing terminal
US20220241692A1 (en) Program, method, and terminal device
JP2019046250A (en) Program executed by computer to provide virtual space, method thereof, information processing apparatus for executing said program
JP6776425B1 (en) Programs, methods, and delivery terminals
JP6818106B1 (en) Programs, methods, and viewing terminals
JP2022000218A (en) Program, method, information processing device, and system
JP2021002318A (en) Virtual space providing system, virtual space providing method, and program
JP7333529B1 (en) Terminal device control program, terminal device, terminal device control method, server device control program, server device, and server device control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination