WO2023286727A1 - Virtual space provision device, virtual space provision method, and program - Google Patents

Virtual space provision device, virtual space provision method, and program Download PDF

Info

Publication number
WO2023286727A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual space
user
space
small space
user terminal
Prior art date
Application number
PCT/JP2022/027216
Other languages
French (fr)
Japanese (ja)
Inventor
徹 津田
Original Assignee
AxrossInvestors合同会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AxrossInvestors合同会社 filed Critical AxrossInvestors合同会社
Publication of WO2023286727A1 publication Critical patent/WO2023286727A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications

Definitions

  • the present invention relates to a virtual space providing device, a virtual space providing method, and a program.
  • In a conventional virtual space, one large space is open to the user; the user operates an avatar that acts as a substitute for the user to move the avatar in the space, and users corresponding to the avatars in the virtual space use SNS services such as chat with one another. The structure of the virtual space itself was thus simple.
  • the purpose of the present invention is to provide a virtual space providing device, a virtual space providing method, and a program that use a new concept of virtual space.
  • The present invention is a virtual space providing device connected to a plurality of user terminals via a communication network, comprising: a small space information storage unit that stores small space information including information on the position and output mode of a small space arranged in a three-dimensional virtual space; small space mode determination means for determining the output mode of the small space by referring to the small space information storage unit; virtual space generation means for generating the three-dimensional virtual space including the small space in the output mode determined by the small space mode determination means; object generation means for generating an object associated with a user using each user terminal; and generated image output means for outputting, to each user terminal, a generated image in which the object generated by the object generation means is arranged in the three-dimensional virtual space generated by the virtual space generation means.
  • the three-dimensional virtual space has a nested structure in which a further small space can be arranged within the small space, and the small space information storage unit further stores information about the nested structure.
  • the small space mode determining means determines at least one of display color and transparency of the small space to be different according to the number of objects belonging to the position of the small space.
  • The virtual space providing device may further include operation reception means for receiving, from the user terminal, an operation signal for movement of the object corresponding to the user using the user terminal, and object movement means for moving the object in accordance with the operation signal received by the operation reception means.
  • The virtual space providing device may further include a user information storage unit that stores user information including authority information of the user; the small space information storage unit may further store entry permission information for the small space corresponding to the authority information; the device may include internal movement determination means for determining, by referring to the user information storage unit and the small space information storage unit when the object is moved to the position of the small space by the object movement means, whether or not to permit movement of the object into the small space; and the object movement means may move the object into the small space only when the internal movement determination means permits movement into the small space.
  • In the virtual space providing device, the generated image output means may output, to the user terminal, the generated image related to the three-dimensional virtual space within a predetermined range from the position of the object corresponding to the user using the user terminal, and, in accordance with movement of the object by the object movement means, may output to the user terminal, in units of the small space, the generated image related to the not-yet-output portion of the three-dimensional virtual space within the predetermined range from the position of the object after the movement.
  • The virtual space providing device may further include sound control means for controlling sound output for each space including the three-dimensional virtual space and the small space, and sound output means for outputting, to the user terminal, the sound of the space that corresponds to the position of the object corresponding to the user using the user terminal and that is controlled by the sound control means.
  • The virtual space providing device may further include voice reception means for receiving voice from the user terminal, and the sound output means may output the voice received by the voice reception means to the user terminals of the users associated with the other objects in the space corresponding to the position of the object.
  • The present invention is also a virtual space providing method performed by a computer connected to a plurality of user terminals via a communication network, the method including: a small space mode determination step of determining the output mode of a small space by referring to a small space information storage unit that stores small space information including information on the position and output mode of the small space arranged in a three-dimensional virtual space; a virtual space generation step of generating the three-dimensional virtual space including the small space in the output mode determined by the small space mode determination step; an object generation step of generating an object associated with a user using each user terminal; and a generated image output step of outputting, to each user terminal, a generated image in which the object generated by the object generation step is arranged in the three-dimensional virtual space generated by the virtual space generation step.
  • The present invention is also a program for causing a computer connected to a plurality of user terminals via a communication network to function as: small space mode determination means for determining the output mode of a small space by referring to a small space information storage unit that stores small space information including information on the position and output mode of the small space arranged in a three-dimensional virtual space; virtual space generation means for generating the three-dimensional virtual space including the small space in the output mode determined by the small space mode determination means; object generation means for generating an object associated with a user using each user terminal; and generated image output means for outputting, to each user terminal, a generated image in which the object generated by the object generation means is arranged in the three-dimensional virtual space generated by the virtual space generation means.
  • FIG. 1 is an overall schematic diagram of a virtual space providing system and a functional block diagram of a virtual space providing server according to this embodiment;
  • FIG. 2A is a diagram showing an example of a user information storage unit of the virtual space providing server according to this embodiment.
  • FIG. 2B is a diagram showing an example of a room display condition storage unit of the virtual space providing server according to this embodiment.
  • FIG. 2C is a diagram showing an example of a room information storage unit of the virtual space providing server according to this embodiment.
  • FIG. 3A is a flowchart showing main processing of the virtual space providing server according to this embodiment.
  • FIG. 3B is a continuation of FIG. 3A.
  • FIG. 4 is a schematic diagram of an example of a virtual space generated by the virtual space providing server according to this embodiment.
  • FIG. 5 is a flowchart showing use start processing of the virtual space providing server according to this embodiment.
  • FIG. 6 is a diagram showing an example of an avatar generated by the virtual space providing server according to this embodiment.
  • FIG. 7 is a diagram showing an example of a screen displayed on the user terminal according to this embodiment.
  • FIG. 8 is a flowchart showing operation signal reception processing of the virtual space providing server according to this embodiment.
  • FIG. 9 is a flowchart showing room display change processing of the virtual space providing server according to this embodiment.
  • FIG. 10 is a diagram showing an example of a screen displayed on the user terminal according to this embodiment.
  • FIG. 11 is a diagram showing an example of an avatar generated by a virtual space providing server according to a modification.
  • FIG. 1 is an overall schematic diagram of a virtual space providing system 100 and a functional block diagram of a virtual space providing server 1 according to this embodiment.
  • FIGS. 2A to 2C are diagrams showing examples of the storage unit 30 of the virtual space providing server 1 according to this embodiment.
  • a virtual space providing system 100 shown in FIG. 1 is a system comprising a virtual space providing server 1 (virtual space providing device) and user terminals 5 .
  • The virtual space providing system 100 is a system that provides the user with a communication space on the server when the user accesses the virtual space (three-dimensional virtual space) generated by the virtual space providing server 1 using the user terminal 5. In the virtual space provided by the virtual space providing system 100, an avatar (object) that is operated by the user and acts on behalf of the user moves around and communicates with the avatars of other users.
  • the virtual space provided by the virtual space providing system 100 provides, for example, a place for communication by a plurality of users, and may be used for any purpose.
  • The virtual space may also be used, for example, for viewing decorations in the space or paintings installed in the space, or for listening to sounds such as performances and speeches in the space.
  • the virtual space provided by the virtual space providing system 100 is used for communication between users as an example.
  • the virtual space providing server 1 and the user terminal 5 are connected via a communication network N so as to be communicable.
  • the communication network N is, for example, an Internet line or the like, and may be wired or wireless.
  • the virtual space providing server 1 generates a virtual space, and transmits the viewpoint screen of the avatar to the user terminal 5 according to the operation of the user terminal 5 .
  • The viewpoint of the avatar may be a first-person viewpoint, as seen by the avatar itself, or a third-person viewpoint that includes an image of the avatar itself, and the user may be able to switch between viewpoints.
  • the virtual space providing server 1 also transmits the voice received from the user terminal 5 to another user terminal 5 .
  • the virtual space providing server 1 is, for example, a web server. There is no limit to the number of pieces of hardware that make up the virtual space providing server 1 . One or more may be configured as necessary. Also, the virtual space providing server 1 may be, for example, a cloud.
  • the virtual space providing server 1 includes a control unit 10 , a storage unit 30 and a communication IF (interface) 39 .
  • the control unit 10 is a central processing unit (CPU) that controls the entire virtual space providing server 1 .
  • the control unit 10 reads and executes an operating system (OS) and application programs stored in the storage unit 30 as appropriate, thereby cooperating with the above-described hardware and executing various functions.
  • The control unit 10 includes a virtual space generation unit 11 (small space mode determination means, virtual space generation means), an avatar generation unit 12 (object generation means), an avatar processing unit 13 (operation reception means, object movement means), an avatar position determination unit 14 (internal movement determination means), a generated image output unit 15 (generated image output means), a sound control unit 21 (sound control means), a voice reception unit 22 (voice reception means), and a sound output unit 23 (sound output means).
  • the virtual space generation unit 11 generates a three-dimensional virtual space including small spaces called rooms.
  • the virtual space is image data generated by 3DCG.
  • The virtual space may contain a plurality of rooms, and a room may contain further rooms.
  • a room is a space within a virtual space in which control contents can be set.
  • the virtual space providing server 1 can vary the control content for each room.
  • the virtual space generation unit 11 refers to the room information storage unit 33 (small space information storage unit), which will be described later, to determine the position of the room and the output mode.
  • The output mode relates to how the room is displayed: whether the room is displayed or hidden and, when it is displayed, whether or not the inside of the room can be seen from outside the room. When the room is displayed, the output mode also includes the display color of the image indicating the boundary of the room.
  • the virtual space generation unit 11 may also generate a virtual space in which the display color of the image indicating the boundary of the room is changed to a different color based on the number of avatars in each room.
  • the virtual space generation unit 11 may refer to the room display condition storage unit 34, which will be described later, to determine the display color of the image indicating the boundary of the room.
  • the avatar generation unit 12 generates an avatar associated with the user using the user terminal 5 .
  • Avatars are image data generated by 3DCG.
  • In this embodiment, a person-shaped avatar is described as an example. Note that the avatar may be one whose appearance, color, and the like can be changed by the user's settings.
  • The avatar processing unit 13 receives an operation signal from the user terminal 5 and moves the avatar according to the operation signal.
  • the avatar position determining unit 14 determines the position of the avatar within the virtual space.
  • The avatar position determination unit 14 refers to the user information storage unit 32 and the room information storage unit 33, which will be described later, to determine whether or not movement of the avatar into a room is permitted.
  • the generated image output unit 15 outputs to the user terminal 5 a generated image in which the avatar generated by the avatar generation unit 12 is arranged in the virtual space generated by the virtual space generation unit 11 . More specifically, the generated image output unit 15 outputs a predetermined range of virtual space to the user terminal 5 .
  • The virtual space within the predetermined range is, for example, the virtual space that includes the rooms within a predetermined distance from the position of the user's own avatar. When the avatar is moved by the avatar processing unit 13, the generated image output unit 15 outputs to the user terminal 5, in units of rooms, the portion of the virtual space within the predetermined range from the position of the avatar after the movement that has not yet been output in previous processing, as sketched below.
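  • The following is a minimal sketch, not the patented implementation, of this room-unit output: only rooms within an assumed radius of the avatar are selected, and rooms whose image data was already sent to the terminal are skipped. The Room class, the radius value, and the tracking set are illustrative assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Room:
    room_id: str
    x: float
    y: float

def rooms_to_output(avatar_pos, rooms, already_sent, radius=50.0):
    """Return the rooms that newly fall within the predetermined range of the avatar."""
    ax, ay = avatar_pos
    new_rooms = []
    for room in rooms:
        if room.room_id in already_sent:
            continue  # this room was output to the user terminal in a previous step
        if math.hypot(room.x - ax, room.y - ay) <= radius:
            new_rooms.append(room)
            already_sent.add(room.room_id)
    return new_rooms

# usage: after each avatar move, send only the newly covered rooms to the terminal
sent = set()
print(rooms_to_output((0.0, 0.0), [Room("R001", 10, 0), Room("R002", 100, 0)], sent))
```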
  • The sound control unit 21 controls sound output for each space including the virtual space and the rooms. That is, the sound control unit 21 controls the output of sound in units of, for example, the smallest space. The sound control unit 21 therefore controls the sound so that the sound of the space next to the space where an avatar is located cannot be heard in the space where that avatar is located.
  • the voice reception unit 22 receives voice from the user terminal 5 . For example, when the user speaks into a microphone (not shown) of the user terminal 5 , the voice reception unit 22 receives voice data transmitted by the user terminal 5 .
  • the sound output unit 23 outputs the sound of the space corresponding to the position of the avatar of the user of the user terminal 5 and controlled by the sound control unit 21 to the user terminal 5 of the user. Also, the sound output unit 23 outputs the user's voice received by the voice receiving unit 22 to the user terminal 5 of the user corresponding to the other avatar in the space corresponding to the position of the avatar of the user.
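  • As a rough illustration of this per-space sound routing (not the actual implementation), the sketch below forwards voice received from one user terminal only to the terminals of users whose avatars are in the same space; avatar_space, terminals, and send_audio are hypothetical names.

```python
def forward_voice(sender_id, audio_data, avatar_space, terminals, send_audio):
    """Send audio_data to every other user whose avatar shares the sender's space."""
    space_id = avatar_space[sender_id]            # space where the speaking avatar is located
    for user_id, terminal in terminals.items():
        if user_id == sender_id:
            continue                              # do not echo the voice back to the speaker
        if avatar_space.get(user_id) == space_id:
            send_audio(terminal, audio_data)      # audible only within the same space

# usage: U001 and U002 share room R001, so only U002's terminal receives the audio
forward_voice("U001", b"...", {"U001": "R001", "U002": "R001", "U003": "R002"},
              {"U002": "terminal-2", "U003": "terminal-3"},
              send_audio=lambda t, d: print("to", t))
```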
  • the storage unit 30 is a storage area such as a hard disk or a semiconductor memory device for storing programs, data, etc. necessary for the control unit 10 to execute various processes.
  • Storage unit 30 includes program storage unit 31 , user information storage unit 32 , room information storage unit 33 , and room display condition storage unit 34 .
  • the program storage unit 31 is a storage area that stores various programs.
  • the program storage unit 31 stores a virtual space providing program 31 a (program) for performing various functions executed by the control unit 10 of the virtual space providing server 1 .
  • the user information storage unit 32 is a storage area that stores information about users who use the virtual space.
  • FIG. 2A shows an example of the user information storage unit 32.
  • the user information storage unit 32 shown in FIG. 2A has items such as user ID (IDentification), user name, authority, room ID, and the like.
  • a user ID is identification information that identifies a user who uses this virtual space. After being authenticated by the user ID, the user can use the virtual space.
  • the user name is the name of the user, and may be a nickname or the like.
  • Authority indicates the user's rank, and includes general, premium, owner, etc. in this example.
  • the numbers (0 to 2) next to each authority are flags indicating ranks, with 0 representing general, 1 representing premium, and 2 representing owner. Premium indicates a special user such as a paying user.
  • the owner indicates the owner of a certain room.
  • the room ID is identification information for identifying a room owned by the owner when the authority is the owner.
  • the user whose user ID is U001 illustrated in FIG. 2A has general authority, and the user whose user ID is U002 has premium authority.
  • the premium authority may have, for example, a period during which the premium authority is continued.
  • the user with the user ID U003 has the authority of the owner in the room with the room ID R023, and has general authority in the other rooms.
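  • A minimal sketch of the kind of record FIG. 2A describes follows: a user ID, a user name, an authority flag (0 = general, 1 = premium, 2 = owner) and, for an owner, the room ID of the owned room. The dataclass, field names, and the user names are illustrative placeholders.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Optional

class Authority(IntEnum):
    GENERAL = 0
    PREMIUM = 1
    OWNER = 2

@dataclass
class UserRecord:
    user_id: str
    user_name: str
    authority: Authority
    owned_room_id: Optional[str] = None  # set only when the authority is OWNER

user_info = {
    "U001": UserRecord("U001", "Alice", Authority.GENERAL),
    "U002": UserRecord("U002", "Bob", Authority.PREMIUM),
    "U003": UserRecord("U003", "Carol", Authority.OWNER, owned_room_id="R023"),
}
```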
  • the room information storage unit 33 is a storage area that stores information (small space information) about each room in the virtual space.
  • An example of the room information storage unit 33 is shown in FIG. 2C.
  • the room information storage unit 33 shown in FIG. 2C has items such as a room ID, a position, an output mode, an upper layer, permission authority (room entry permission/prohibition information), and the like.
  • Room ID is identification information for identifying a room.
  • the position indicates the position of the room by coordinates. In this example, the position of the room represents the position on the plane, but may also include the position in the height direction.
  • the output mode indicates the display mode of the room. If the output mode is transparent, it indicates that the image showing the boundary of the room is transparent.
  • If the output mode is a wall, it indicates that an image with the boundary of the room rendered as a wall is to be output.
  • the upper layer indicates the upper layer of the room.
  • a room with a room ID of R002 has an upper layer of R001, so the room with a room ID of R002 is formed inside the room with a room ID of R001.
  • the permission authority indicates authority to permit entry to the room. For example, anyone can enter a room with a room ID of R001, but a normal user cannot enter a room with a room ID of R002.
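  • Below is a minimal sketch of the room information described for FIG. 2C: room ID, position, output mode, upper layer (the enclosing room, which gives the nested structure), and the authority flags permitted to enter. Field names and the containment helper are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional, Set, Tuple

@dataclass
class RoomRecord:
    room_id: str
    position: Tuple[float, float]       # planar coordinates; a height coordinate could be added
    output_mode: str                    # e.g. "transparent" or "wall"
    upper_layer: Optional[str]          # room ID of the enclosing room, None at the top level
    permitted: Set[int] = field(default_factory=set)  # authority flags allowed to enter

room_info = {
    "R001": RoomRecord("R001", (0.0, 0.0), "wall", None, {0, 1, 2}),
    "R002": RoomRecord("R002", (3.0, 4.0), "transparent", "R001", {1, 2}),
}

def enclosing_rooms(room_id, rooms):
    """Follow upper_layer links to list the rooms that enclose a room (innermost first)."""
    path, current = [], rooms[room_id].upper_layer
    while current is not None:
        path.append(current)
        current = rooms[current].upper_layer
    return path

print(enclosing_rooms("R002", room_info))  # ['R001'] -- R002 is nested inside R001
```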
  • the room display condition storage unit 34 is a storage area that stores conditions for changing the display mode of the room based on the number of avatars in the room.
  • FIG. 2B shows an example of the room display condition storage unit 34.
  • the room display condition storage unit 34 shown in FIG. 2B stores conditions and output modes in association with each other.
  • the condition relates to the number of avatars in the room.
  • the output mode indicates the mode of the display color of the image showing the boundary of the room. In the example of FIG. 2B, colors indicate the number of avatars.
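  • A minimal sketch of such a condition table follows: ranges of avatar counts are mapped to a display color for the image showing the room boundary. Only the change to green at ten or more avatars is taken from the room display change processing described later; the other thresholds and colors are assumptions.

```python
# (lower bound, upper bound or None for open-ended, boundary color)
ROOM_DISPLAY_CONDITIONS = [
    (0, 9, "default"),    # few avatars: default boundary color
    (10, 29, "green"),    # ten or more avatars: green boundary (per the example described later)
    (30, None, "red"),    # assumed threshold for a crowded room
]

def boundary_color(avatar_count):
    """Look up the display color of the room boundary for a given number of avatars."""
    for low, high, color in ROOM_DISPLAY_CONDITIONS:
        if avatar_count >= low and (high is None or avatar_count <= high):
            return color
    return "default"

assert boundary_color(9) == "default"
assert boundary_color(10) == "green"
```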
  • The storage units of the storage unit 30 described above are merely examples; there may be other storage units, or the configuration may be different. Likewise, the above items are only examples of the items in each storage unit; there may be other items, or some of these items may be absent.
  • The communication IF 39 in FIG. 1 is an interface for communicating with the user terminal 5 or the like via the communication network N.
  • The user terminal 5 is, for example, a personal computer such as a notebook PC, as shown in FIG. 1.
  • the user terminal 5 may also be a mobile terminal such as a tablet or a smart phone, which also has the functions of a computer.
  • the user terminal 5 includes at least a control unit, a storage unit, an input unit and an output unit (or a touch panel display), and a communication IF.
  • An output device, a camera, or the like may be provided.
  • Here, the computer means an information processing device having a control unit, a storage device, and the like; the virtual space providing server 1 and the user terminal 5 are each information processing devices having a control unit, a storage device, and the like, and are included in the concept of a computer.
  • FIG. 4 is a schematic diagram of an example of a virtual space 40 generated by the virtual space providing server 1 according to this embodiment.
  • FIG. 5 is a flow chart showing use start processing of the virtual space providing server 1 according to this embodiment.
  • FIG. 6 is a diagram showing an example of an avatar 60 generated by the virtual space providing server 1 according to this embodiment.
  • FIG. 7 is a diagram showing an example of a screen 70 displayed on the user terminal 5 according to this embodiment.
  • FIG. 8 is a flowchart showing operation signal reception processing of the virtual space providing server 1 according to this embodiment.
  • FIG. 9 is a flowchart showing room display change processing of the virtual space providing server 1 according to this embodiment.
  • When the virtual space providing program 31a of the virtual space providing server 1 is executed, the main processing shown in FIG. 3A is started. In step S (hereinafter, "step S" is simply referred to as "S") 11 of FIG. 3A, the control unit 10 (virtual space generation unit 11) generates a virtual space in which each room is arranged.
  • the room 41 further has rooms 41a, 41b, . . .
  • the room 42 further has rooms 42a, 42b, . . .
  • the virtual space 40 can have a plurality of rooms 41 and 42, and furthermore, the room 41a and the like can be provided inside the room 41, so that the rooms can have a nested structure.
  • the virtual space 40 also has an entrance area 43 outside the rooms 41 and 42 .
  • The entrance area 43 is, for example, an area where an avatar corresponding to a user is output when the user starts using the virtual space. Note that the virtual space 40 may treat the entrance area 43 as one room.
  • In S12, the control unit 10 determines whether or not a usage request has been received from the user terminal 5.
  • For example, a user who uses the virtual space providing service provided by the virtual space providing server 1 accesses the virtual space providing server 1 using the user terminal 5, and the control unit of the user terminal 5 performs authentication using the user ID.
  • the control unit 10 may determine that the usage request has been received from the user terminal 5 when the authentication is successful. If a usage request has been received from the user terminal 5 (S12: YES), the control unit 10 shifts the process to S13. On the other hand, if no usage request has been received from the user terminal 5 (S12: NO), the control unit 10 shifts the process to S14. In S13, the control unit 10 performs a usage start process.
  • the control unit 10 acquires the authenticated user ID.
  • the control unit 10 (avatar generation unit 12) refers to the user information storage unit 32 and acquires the authority of the user.
  • the control unit 10 receives designation of the display color of the avatar selected by the user, for example, from an avatar display color designation screen (not shown).
  • the control unit 10 generates an avatar of the specified color.
  • FIG. 6 shows an example of a generated avatar 60.
  • the avatar 60 is a humanoid image consisting of a face 61 and a body 62 .
  • the avatar 60 is designed with a reduced number of polygons because it is displayed using a web browser.
  • Avatar 60 is a 3D image.
  • the control unit 10 places the generated avatar at a predetermined position in the virtual space already generated by the process of S11 of FIG. 3A.
  • the control unit 10 places the avatar in the entrance area.
  • FIG. 7 shows an example of a screen 70 displayed on the user terminal 5.
  • the screen 70 has rooms 72 and 73 arranged on a floor 71 .
  • the user's own avatar 75 is displayed in the front center.
  • Avatars 76 are avatars of other users.
  • The floor 71 corresponds, for example, to the virtual space 40 in FIG. 4, and the rooms 72 and 73 correspond to the rooms 42a and 42b in FIG. 4. Since the room 42 (see FIG. 4) is displayed in a transparent output mode on the screen 70, the image indicating the boundary corresponding to the room 42 is not displayed.
  • the control unit 10 (sound control unit 21 , sound output unit 23 ) transmits sound corresponding to the position of the avatar to the user terminal 5 .
  • The user of the user terminal 5 hears the sound of the space corresponding to the position where the avatar is arranged from the speaker of the user terminal 5, and can therefore feel as if he or she were in that space.
  • the control unit 10 shifts the processing to S14 in FIG. 3A.
  • In S14, the control unit 10 determines whether or not an operation signal has been received from the user terminal 5. If an operation signal has been received from the user terminal 5 (S14: YES), the control unit 10 shifts the process to S15. On the other hand, if the operation signal has not been received (S14: NO), the control unit 10 shifts the process to S17 in FIG. 3B. In S15, the control unit 10 performs operation signal reception processing.
  • the control unit 10 moves the avatar in response to the operation signal.
  • the control unit 10 (generated image output unit 15 ) transmits to the user terminal 5 generated images in which the avatar is arranged in the virtual space during and after the movement of the avatar.
  • the control unit 10 (sound control unit 21, sound output unit 23) transmits the sound corresponding to the moved position to the user terminal 5.
  • In S43, the control unit 10 (avatar position determination unit 14) determines whether or not the movement is movement into a room. Based on the positional relationship between the avatar and the room, the control unit 10 can determine whether the movement is for entering the room. If the movement is to enter a room (S43: YES), the control unit 10 shifts the process to S44. On the other hand, if the movement is not to enter a room (S43: NO), the control unit 10 shifts the process to S46.
  • In S44, the control unit 10 refers to the user information storage unit 32 and the room information storage unit 33 to confirm whether movement into the room is possible. For example, if the authority of the user stored in the user information storage unit 32 is included in the permission authority of the room in the room information storage unit 33, the control unit 10 determines that the user is permitted to move into the room. On the other hand, if the authority of the user stored in the user information storage unit 32 is not included in the permission authority of the room in the room information storage unit 33, the control unit 10 determines that the user cannot move into the room.
  • In S45, the control unit 10 determines whether or not the avatar may enter the room based on the confirmation result. If the avatar may enter the room (S45: YES), the control unit 10 shifts the process to S46. On the other hand, if the avatar may not enter the room (S45: NO), the control unit 10 shifts the process to S49.
  • In S46, the control unit 10 (avatar processing unit 13) moves the avatar in response to the operation signal.
  • The control unit 10 (generated image output unit 15) transmits to the user terminal 5 generated images in which the avatar is arranged in the virtual space during and after the movement of the avatar.
  • the control unit 10 (sound control unit 21, sound output unit 23) transmits the sound corresponding to the moved position to the user terminal 5.
  • the control unit 10 shifts the process to S16 in FIG. 3A.
  • In S49, the control unit 10 (avatar processing unit 13) stops the movement of the avatar. In other words, the avatar cannot enter the destination room indicated by the operation signal, so it stays at the boundary of the room. After that, the control unit 10 shifts the process to S16 in FIG. 3A. A simplified sketch of this entry check follows.
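  • The sketch below illustrates the entry check of S43 to S49 in simplified form: the user's authority flag is compared with the room's permission authority, and the avatar either moves as operated or is held at its current position at the room boundary. The function names and tuple-based positions are assumptions.

```python
def can_enter(user_authority, permitted_authorities):
    """True if the user's authority flag is included in the room's permission authority."""
    return user_authority in permitted_authorities

def resolve_move(user_authority, current_pos, target_pos, target_room_permitted):
    """Return the avatar's new position, stopping at the boundary when entry is denied."""
    if target_room_permitted is not None and not can_enter(user_authority, target_room_permitted):
        return current_pos   # movement stopped: the avatar stays at the room boundary (S49)
    return target_pos        # movement permitted: the avatar moves per the operation signal (S46)

# a general user (flag 0) trying to enter a room that permits only premium (1) and owner (2)
assert resolve_move(0, (1.0, 2.0), (3.0, 4.0), {1, 2}) == (1.0, 2.0)
assert resolve_move(1, (1.0, 2.0), (3.0, 4.0), {1, 2}) == (3.0, 4.0)
```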
  • In S16, the control unit 10 performs room display change processing.
  • room display change processing will be described with reference to FIG.
  • the control unit 10 counts the number of avatars placed in each room.
  • the control unit 10 refers to the room display condition storage unit 34 and determines the display mode of each room based on the total number of avatars.
  • the control unit 10 changes the display mode of each room to the determined display mode. For example, when the number of avatars belonging to a certain room is changed from 9 to 10, the control unit 10 changes the color of the image indicating the boundary of the room to green. If the image indicating the boundary of the room is transparent, a greenish transparent wall is displayed. After that, the control unit 10 shifts the processing to S17 in FIG. 3B.
  • the control unit 10 determines whether voice data has been received from the user terminal 5 . If voice data has been received (S17: YES), the control unit 10 shifts the process to S18. On the other hand, if no audio data has been received (S17: NO), the control unit 10 shifts the process to S19. In S18, the control unit 10 (sound control unit 21, sound output unit 23) transmits the received audio data to the user terminals 5 of users corresponding to other avatars in the same space as the avatar.
  • the control unit 10 determines whether or not a termination request has been received from the user terminal 5 .
  • the control unit 10 determines that the termination request has been accepted by receiving, for example, an instruction to leave the avatar from the virtual space (that is, to log out) from the user terminal 5 . If the end request has been accepted (S19: YES), the control unit 10 shifts the process to S20. On the other hand, if the end request has not been received (S19: NO), the control unit 10 shifts the process to S12 in FIG. 3A.
  • In S20, the control unit 10 (virtual space generation unit 11) deletes from the virtual space the avatar of the user for whom the termination request has been received. Through such processing, the user of each user terminal 5 can enter and exit the generated virtual space using an avatar. After that, the control unit 10 shifts the process to S12 in FIG. 3A. The flow of S11 through S20 is summarized in the sketch below.
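  • The overall flow of S11 to S20 described above can be summarized by the rough sketch below. Every method name is a placeholder; since the server is described as a web server, a real implementation would be request- or event-driven rather than a polling loop.

```python
def main_processing(server):
    server.generate_virtual_space()               # S11: generate the space with its rooms
    while True:
        if server.has_usage_request():            # S12
            server.start_use()                    # S13: authenticate, generate and place the avatar
        if server.has_operation_signal():         # S14
            server.handle_operation_signal()      # S15: move the avatar, with room-entry checks
            server.change_room_display()          # S16: recolor room boundaries by avatar count
        if server.has_voice_data():               # S17
            server.forward_voice_to_same_space()  # S18
        if server.has_termination_request():      # S19
            server.remove_avatar()                # S20: delete the avatar from the virtual space
```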
  • the virtual space can be used for selling various goods.
  • a room can then be, for example, a space used by a single store.
  • a store owner is associated with a room ID of a certain room. Images of articles to be sold are arranged in the room.
  • FIG. 10 is a diagram showing an example of a screen 80 displayed on the user terminal 5 according to this embodiment.
  • Screen 80 shows, for example, a virtual store that conducts a comic market.
  • Screen 80 includes room 81 , wall section 82 , store clerk avatar 83 , product image 84 and customer avatar 85 .
  • the room 81 has a wall portion 82 at a partial boundary.
  • Various animation images are arranged on the wall portion 82 .
  • the store clerk avatar 83 is a substitute for the user who sells the product image 84 .
  • the customer avatar 85 can purchase a desired product image 84 while communicating with the store clerk avatar 83 by voice.
  • the item to be sold may be image data such as the item image 84 for sale.
  • the item for sale may be an actual item instead of an image.
  • When the item for sale is image data, the image data is transmitted to the user terminal 5 of the purchasing user upon payment for the item.
  • When the item for sale is an actual article, delivery of the article is arranged upon payment of the price of the article.
  • Various online payment methods such as credit cards, electronic money, and virtual currency can be used for payment.
  • the control unit 10 may request a settlement server (not shown) to perform settlement processing.
  • the virtual space can be used to enjoy conversations with idols.
  • the room is a space used by multiple users who want to talk with the idol and the idol.
  • a special room is further provided within the room to create a space shared by only one idol and one user.
  • a user's avatar waiting to talk with an idol waits in the room.
  • Permission to enter the special room is granted to each user for a limited time, and when that time has elapsed, the user's avatar is forced to leave the special room. By doing so, each user can enjoy one-on-one conversation with a favorite idol. In this case as well, the conversation with the idol can be billed.
  • The virtual space can also be used for a conference with specific users. In that case, a room is assigned to the user hosting the conference, and the hosting user sets the authority so that the conference participants can enter the room. In this way, conversations and chats can be held in the room, and the room can be used like an ordinary conference room.
  • As described above, according to this embodiment, a generated image in which rooms are arranged in the virtual space and avatars are placed in it is output to the user terminal 5.
  • the room is arranged in the specified position in the virtual space in the specified output mode by referring to the room information storage unit 33 that contains the room position and the output mode. Therefore, it is possible to provide the user with a new-concept virtual space having a room in the virtual space.
  • the room can be used for various purposes because the room can be displayed so as to be seen by the user or hidden so as not to be seen by the user.
  • The room display condition storage unit 34 stores conditions related to the number of avatars together with output modes, so the display of a room can be changed according to the number of avatars in it. The degree of congestion in a room can therefore be indicated visually by color.
  • Generated images related to the virtual space within a predetermined range are output to the user terminal 5, and, in accordance with the movement of the avatar, the difference within the generated images related to the virtual space within the predetermined range from the position of the avatar is output to the user terminal 5 in units of rooms. The transmission of generated image data can therefore be limited to the necessary range, reducing the processing load. In particular, when a generated image is displayed using a web browser, suppressing the amount of data keeps the processing light.
  • Sound output is controlled for each space including the virtual space and the rooms, and the sound of the space at the position of the avatar corresponding to the user is output to the user terminal 5, so the user can hear the sound as if he or she were in that virtual space or room.
  • Since voice is received from the user terminal 5 and transmitted to the user terminals 5 of the users associated with the other avatars in the space corresponding to the avatar's position, voice conversation can be carried out within that space.
  • FIG. 11 is a diagram showing an example of an avatar 60-2 generated by the virtual space providing server according to the modification.
  • the control unit of the user terminal acquires the user's facial image via the camera and transmits the facial image data to the virtual space providing server.
  • the control unit of the virtual space providing server receives the face image data of the user and generates an avatar 60-2 by synthesizing the face image 65 with the face part 61-2.
  • the control unit of the user terminal continuously transmits the user's face image data while using the service, so that the control unit of the virtual space providing server generates an avatar based on the received face image data. Therefore, an avatar whose expression changes in real time can be output in the virtual space.
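  • As one possible way to realize this modification (not described in the source), the sketch below uses Pillow to paste a received face image onto the face part of a 2D avatar image; the file names, sizes, and face region are assumptions, and a real implementation would map the image onto the 3D avatar model.

```python
from PIL import Image

def synthesize_avatar_face(avatar_png, face_png, face_box=(40, 10, 104, 74)):
    """Paste the user's face image into the assumed face region of the avatar image."""
    avatar = Image.open(avatar_png).convert("RGBA")
    face = Image.open(face_png).convert("RGBA")
    face = face.resize((face_box[2] - face_box[0], face_box[3] - face_box[1]))
    avatar.paste(face, face_box[:2], mask=face)   # alpha channel of the face used as a mask
    return avatar

# called each time new face image data arrives, so the avatar's expression updates in real time
```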
  • A chat, which is a text-based conversation, may also be provided for each room. The content of the chat is output to the user terminal only while the user is in the room; therefore, while in the room, the user can participate in the written conversation that takes place in that room.
  • the virtual space providing server may have a translation function, or may cooperate with an external device having a translation function. Conversation by voice or text in the language of each country can be converted into the language of one's own country by using the translation function, which is convenient.

Abstract

The present invention provides a virtual space provision device, virtual space provision method, and program using a virtual space having a novel concept. A virtual space provision server 1 which is connected to a plurality of user terminals 5 over a communication network N comprises: a virtual space generation unit 11 which determines an output aspect of a room by referencing a room information storage unit 33 which stores room information including information relating to locations and output aspects of rooms disposed in a three-dimensional computer-graphic virtual space and generates a virtual space including a room having the determined output aspect; an avatar generation unit 12 which generates avatars associated with users using the user terminals 5; and a generated image output unit 15 which outputs to the user terminals 5 a generated image in which avatars generated by the avatar generation unit 12 are disposed in the virtual space generated by the virtual space generation unit 11.

Description

Virtual space providing device, virtual space providing method, and program
The present invention relates to a virtual space providing device, a virtual space providing method, and a program.
Conventionally, there is a mechanism for sharing a virtual space among a plurality of user terminals and displaying avatar images representing the users. As an example, in a system in which a plurality of user terminals share a virtual space, a technique has been disclosed in which the plurality of user terminals generate and display, based on received virtual space information, a virtual space image and avatar images as user information of the user terminals sharing the virtual space (for example, Patent Document 1).
Japanese Patent No. 3859018
In a conventional virtual space, one large space is open to the user; the user operates an avatar that acts as a substitute for the user to move the avatar in the space, and users corresponding to the avatars in the virtual space use SNS services such as chat with one another. The structure of the virtual space itself was thus simple.
The purpose of the present invention is to provide a virtual space providing device, a virtual space providing method, and a program that use a new concept of virtual space.
The present invention is a virtual space providing device connected to a plurality of user terminals via a communication network, comprising: a small space information storage unit that stores small space information including information on the position and output mode of a small space arranged in a three-dimensional virtual space; small space mode determination means for determining the output mode of the small space by referring to the small space information storage unit; virtual space generation means for generating the three-dimensional virtual space including the small space in the output mode determined by the small space mode determination means; object generation means for generating an object associated with a user using each user terminal; and generated image output means for outputting, to each user terminal, a generated image in which the object generated by the object generation means is arranged in the three-dimensional virtual space generated by the virtual space generation means.
In the virtual space providing device, the three-dimensional virtual space may have a nested structure in which a further small space can be arranged within the small space, and the small space information storage unit may further store information about the nested structure.
In the virtual space providing device, the small space mode determination means may determine at least one of the display color and the transparency of the small space so as to differ according to the number of objects belonging to the position of the small space.
The virtual space providing device may further include operation reception means for receiving, from the user terminal, an operation signal for movement of the object corresponding to the user using the user terminal, and object movement means for moving the object in accordance with the operation signal received by the operation reception means.
The virtual space providing device may further include a user information storage unit that stores user information including authority information of the user; the small space information storage unit may further store entry permission information for the small space corresponding to the authority information; the device may include internal movement determination means for determining, by referring to the user information storage unit and the small space information storage unit when the object is moved to the position of the small space by the object movement means, whether or not to permit movement of the object into the small space; and the object movement means may move the object into the small space only when the internal movement determination means permits movement into the small space.
In the virtual space providing device, the generated image output means may output, to the user terminal, the generated image related to the three-dimensional virtual space within a predetermined range from the position of the object corresponding to the user using the user terminal, and, in accordance with movement of the object by the object movement means, may output to the user terminal, in units of the small space, the generated image related to the not-yet-output portion of the three-dimensional virtual space within the predetermined range from the position of the object after the movement.
The virtual space providing device may further include sound control means for controlling sound output for each space including the three-dimensional virtual space and the small space, and sound output means for outputting, to the user terminal, the sound of the space that corresponds to the position of the object corresponding to the user using the user terminal and that is controlled by the sound control means.
The virtual space providing device may further include voice reception means for receiving voice from the user terminal, and the sound output means may output the voice received by the voice reception means to the user terminals of the users associated with the other objects in the space corresponding to the position of the object.
The present invention is also a virtual space providing method performed by a computer connected to a plurality of user terminals via a communication network, the method including: a small space mode determination step of determining the output mode of a small space by referring to a small space information storage unit that stores small space information including information on the position and output mode of the small space arranged in a three-dimensional virtual space; a virtual space generation step of generating the three-dimensional virtual space including the small space in the output mode determined by the small space mode determination step; an object generation step of generating an object associated with a user using each user terminal; and a generated image output step of outputting, to each user terminal, a generated image in which the object generated by the object generation step is arranged in the three-dimensional virtual space generated by the virtual space generation step.
The present invention is also a program for causing a computer connected to a plurality of user terminals via a communication network to function as: small space mode determination means for determining the output mode of a small space by referring to a small space information storage unit that stores small space information including information on the position and output mode of the small space arranged in a three-dimensional virtual space; virtual space generation means for generating the three-dimensional virtual space including the small space in the output mode determined by the small space mode determination means; object generation means for generating an object associated with a user using each user terminal; and generated image output means for outputting, to each user terminal, a generated image in which the object generated by the object generation means is arranged in the three-dimensional virtual space generated by the virtual space generation means.
According to the present invention, it is possible to provide a virtual space providing device, a virtual space providing method, and a program that use a new concept of virtual space.
FIG. 1 is an overall schematic diagram of a virtual space providing system and a functional block diagram of a virtual space providing server according to this embodiment.
FIG. 2A is a diagram showing an example of a user information storage unit of the virtual space providing server according to this embodiment.
FIG. 2B is a diagram showing an example of a room display condition storage unit of the virtual space providing server according to this embodiment.
FIG. 2C is a diagram showing an example of a room information storage unit of the virtual space providing server according to this embodiment.
FIG. 3A is a flowchart showing main processing of the virtual space providing server according to this embodiment.
FIG. 3B is a continuation of FIG. 3A.
FIG. 4 is a schematic diagram of an example of a virtual space generated by the virtual space providing server according to this embodiment.
FIG. 5 is a flowchart showing use start processing of the virtual space providing server according to this embodiment.
FIG. 6 is a diagram showing an example of an avatar generated by the virtual space providing server according to this embodiment.
FIG. 7 is a diagram showing an example of a screen displayed on the user terminal according to this embodiment.
FIG. 8 is a flowchart showing operation signal reception processing of the virtual space providing server according to this embodiment.
FIG. 9 is a flowchart showing room display change processing of the virtual space providing server according to this embodiment.
FIG. 10 is a diagram showing an example of a screen displayed on the user terminal according to this embodiment.
FIG. 11 is a diagram showing an example of an avatar generated by a virtual space providing server according to a modification.
Hereinafter, an embodiment of the present invention will be described with reference to the drawings. Note that this is merely an example, and the technical scope of the present invention is not limited to it.
(Embodiment)
<Virtual space providing system 100>
FIG. 1 is an overall schematic diagram of a virtual space providing system 100 and a functional block diagram of a virtual space providing server 1 according to this embodiment.
FIGS. 2A to 2C are diagrams showing examples of the storage unit 30 of the virtual space providing server 1 according to this embodiment.
A virtual space providing system 100 shown in FIG. 1 is a system comprising a virtual space providing server 1 (virtual space providing device) and user terminals 5.
The virtual space providing system 100 is a system that provides a user with a communication space on the server when the user accesses the virtual space (three-dimensional virtual space) generated by the virtual space providing server 1 using a user terminal 5. In the virtual space provided by the virtual space providing system 100, an avatar (object) that is operated by the user and acts on behalf of the user moves around and communicates with the avatars of other users.
Here, the virtual space provided by the virtual space providing system 100 provides, for example, a place for communication among a plurality of users, and may be used for any purpose. The virtual space may also be used, for example, for viewing decorations in the space or paintings installed in the space, or for listening to sounds such as performances and speeches in the space.
In the following description, the virtual space provided by the virtual space providing system 100 is assumed to be used for communication between users.
The virtual space providing server 1 and the user terminal 5 are connected via a communication network N so as to be communicable with each other. The communication network N is, for example, an Internet line or the like, and may be wired or wireless.
<仮想空間提供サーバ1>
 仮想空間提供サーバ1は、仮想空間を生成し、ユーザ端末5による操作に応じて、アバターによる視点の画面を、ユーザ端末5に送信する。ここで、アバターによる視点は、アバター自身が見ているような視点である一人称視点であってもよいし、自身のアバターの画像を含んだ三人称視点であってもよい。そして、視点は、ユーザが切り替えることができるようにしてもよい。また、仮想空間提供サーバ1は、ユーザ端末5から受信した音声を、他のユーザ端末5に送信する。
<Virtual space providing server 1>
The virtual space providing server 1 generates a virtual space, and transmits the viewpoint screen of the avatar to the user terminal 5 according to the operation of the user terminal 5 . Here, the viewpoint by the avatar may be a first-person viewpoint, which is a viewpoint as seen by the avatar itself, or may be a third-person viewpoint including the image of the avatar itself. Then, the viewpoint may be changed by the user. The virtual space providing server 1 also transmits the voice received from the user terminal 5 to another user terminal 5 .
The virtual space providing server 1 is, for example, a web server. There is no limit to the number of hardware units constituting the virtual space providing server 1; it may consist of one or more units as necessary. The virtual space providing server 1 may also be, for example, a cloud service.
The virtual space providing server 1 includes a control unit 10, a storage unit 30, and a communication IF (interface) 39.
The control unit 10 is a central processing unit (CPU) that controls the entire virtual space providing server 1. By reading and executing, as appropriate, the operating system (OS) and application programs stored in the storage unit 30, the control unit 10 cooperates with the hardware described above to perform various functions.
The control unit 10 includes a virtual space generation unit 11 (small space mode determination means, virtual space generation means), an avatar generation unit 12 (object generation means), an avatar processing unit 13 (operation reception means, object movement means), an avatar position determination unit 14 (internal movement determination means), a generated image output unit 15 (generated image output means), a sound control unit 21 (sound control means), a voice reception unit 22 (voice reception means), and a sound output unit 23 (sound output means).
The virtual space generation unit 11 generates a three-dimensional virtual space that includes small spaces called rooms. Here, the virtual space is image data generated with 3DCG. The virtual space may contain a plurality of rooms, and a room may contain further rooms. A room is a space within the virtual space for which control settings can be configured; the virtual space providing server 1 can apply different control settings on a room-by-room basis.
The virtual space generation unit 11 refers to the room information storage unit 33 (small space information storage unit), described later, to determine the position and output mode of each room. Here, the output mode concerns how a room is displayed: whether the room is shown or hidden and, when it is shown, whether the inside of the room is visible from outside. When a room is displayed, the output mode also includes the display color of the image indicating the room's boundary.
The virtual space generation unit 11 may also generate a virtual space in which the display color of the image indicating a room's boundary is changed according to, for example, the number of avatars in each room. In that case, the virtual space generation unit 11 may refer to the room display condition storage unit 34, described later, to determine the display color of the boundary image.
The avatar generation unit 12 generates an avatar associated with the user of a user terminal 5. An avatar is image data generated with 3DCG. Here, the avatar is described as being human-shaped. The user may be allowed to change the avatar's appearance, such as its color, through settings.
When the avatar processing unit 13 receives, from a user terminal 5, an operation signal for moving an avatar generated by the avatar generation unit 12, it moves the avatar according to that operation signal.
The avatar position determination unit 14 determines the position of an avatar within the virtual space. In particular, when the avatar processing unit 13 moves an avatar to the position of a room, the avatar position determination unit 14 refers to the user information storage unit 32 and the room information storage unit 33, described later, to determine whether to permit the avatar to move into the room.
The generated image output unit 15 outputs, to the user terminal 5, a generated image in which the avatar generated by the avatar generation unit 12 is placed in the virtual space generated by the virtual space generation unit 11.
More specifically, the generated image output unit 15 outputs a predetermined range of the virtual space to the user terminal 5. The predetermined range of the virtual space is, for example, the portion of the virtual space containing the rooms within a predetermined distance of the user's own avatar. When the avatar processing unit 13 moves the avatar, the generated image output unit 15 outputs to the user terminal 5, in units of rooms, the portion of the virtual space that contains the rooms within the predetermined range of the avatar's new position and that was not output in the preceding processing.
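As a rough illustration of this room-unit differential output, the following TypeScript sketch tracks which rooms have already been delivered to a terminal and, after a move, returns only the rooms that are newly within range. All names and the distance-based range test are assumptions for illustration, not identifiers from the specification.

```typescript
// Hypothetical sketch of room-unit differential output (names are assumptions).
interface Vec2 { x: number; y: number; }
interface Room { id: string; center: Vec2; }

// Rooms already delivered to this terminal, keyed by room ID.
const deliveredRooms = new Set<string>();

function distance(a: Vec2, b: Vec2): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Returns the rooms within `range` of the avatar that were not sent before,
// and records them as delivered so later moves only send the difference.
function roomsToSend(avatarPos: Vec2, rooms: Room[], range: number): Room[] {
  const fresh = rooms.filter(
    (r) => distance(r.center, avatarPos) <= range && !deliveredRooms.has(r.id),
  );
  fresh.forEach((r) => deliveredRooms.add(r.id));
  return fresh;
}

// Example: after a move, only the newly in-range room is serialized and sent.
const pending = roomsToSend({ x: 10, y: 4 }, [
  { id: "R001", center: { x: 12, y: 5 } },
  { id: "R002", center: { x: 90, y: 40 } },
], 20);
console.log(pending.map((r) => r.id)); // ["R001"]
```

Because already delivered rooms are skipped, repeated small moves only ever transmit the rooms that newly entered the range, which is the point of the room-unit difference described above.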
The sound control unit 21 controls sound output for each space, including the virtual space itself and each room. That is, the sound control unit 21 controls sound output, for example, at the level of the smallest enclosing space. The sound control unit 21 therefore prevents sound from the space adjacent to the one in which an avatar is located from being heard in the avatar's space.
The voice reception unit 22 receives audio from a user terminal 5. For example, when the user speaks into a microphone (not shown) of the user terminal 5, the voice reception unit 22 receives the audio data transmitted by the user terminal 5.
The sound output unit 23 outputs, to the user terminal 5 of a user, the sound of the space corresponding to the position of that user's avatar, as controlled by the sound control unit 21. The sound output unit 23 also outputs the voice received by the voice reception unit 22 to the user terminals 5 of the users whose avatars are in the space corresponding to the position of the speaking user's avatar.
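The per-space sound control can be pictured with the sketch below, which forwards a speaker's audio only to users whose avatars occupy the same smallest space; the identifiers and the send callback are illustrative assumptions rather than the actual implementation.

```typescript
// Hypothetical sketch of per-space audio routing (names are assumptions).
interface ConnectedUser {
  userId: string;
  spaceId: string;                     // ID of the smallest space the avatar is in
  send: (audio: ArrayBuffer) => void;  // e.g. a wrapper around a WebSocket send
}

function routeAudio(speaker: ConnectedUser, users: ConnectedUser[], audio: ArrayBuffer): void {
  for (const u of users) {
    // Only listeners whose avatar shares the speaker's smallest space receive the audio;
    // sound from neighbouring rooms is never forwarded.
    if (u.userId !== speaker.userId && u.spaceId === speaker.spaceId) {
      u.send(audio);
    }
  }
}
```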
The storage unit 30 is a storage area, such as a hard disk or a semiconductor memory element, that stores the programs, data, and so on that the control unit 10 needs in order to execute its various processes.
The storage unit 30 includes a program storage unit 31, a user information storage unit 32, a room information storage unit 33, and a room display condition storage unit 34.
The program storage unit 31 is a storage area that stores various programs. The program storage unit 31 stores a virtual space providing program 31a (program) for performing the various functions executed by the control unit 10 of the virtual space providing server 1.
The user information storage unit 32 is a storage area that stores information about the users who use the virtual space. FIG. 2A shows an example of the user information storage unit 32.
The user information storage unit 32 shown in FIG. 2A has items such as user ID (IDentification), user name, authority, and room ID.
The user ID is identification information that identifies a user of this virtual space. After authentication with the user ID, the user can use the virtual space.
The user name is the user's name and may be a nickname or the like.
The authority indicates the user's rank and, in this example, includes general, premium, and owner. The number (0 to 2) next to each authority is a flag indicating the rank: 0 represents general, 1 represents premium, and 2 represents owner. Premium indicates a special user, such as a paying user, and owner indicates the owner of a particular room.
The room ID is identification information that, when the authority is owner, identifies the room owned by that owner.
In the example of FIG. 2A, the user whose user ID is U001 has general authority, and the user whose user ID is U002 has premium authority. The premium authority may, for example, be granted only for a certain period.
The user whose user ID is U003 has owner authority for the room whose room ID is R023 and general authority for all other rooms.
The room information storage unit 33 is a storage area that stores information (small space information) about each room in the virtual space. FIG. 2C shows an example of the room information storage unit 33.
The room information storage unit 33 shown in FIG. 2C has items such as room ID, position, output mode, upper layer, and permitted authority (room entry permission information).
The room ID is identification information that identifies a room.
The position indicates the position of the room as coordinates. In this example the position is given in the plane, but it may also include a position in the height direction.
The output mode indicates the display mode of the room. If the output mode is transparent, the image indicating the room's boundary is transparent; if the output mode is wall, an image showing the room's boundary as a wall is output.
The upper layer indicates the room's parent room. For example, the room with room ID R002 has R001 as its upper layer, meaning that room R002 is formed inside room R001.
The permitted authority indicates which authorities are permitted to enter the room. For example, anyone can enter the room with room ID R001, but users with general authority cannot enter the room with room ID R002.
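The records held in the user information storage unit 32 and the room information storage unit 33 could be modelled as in the TypeScript sketch below; the field names mirror FIG. 2A and FIG. 2C but are assumptions, not identifiers used in the specification, and the example values are invented for illustration.

```typescript
// Hypothetical data model mirroring FIG. 2A / FIG. 2C (names and values are assumptions).
type Authority = 0 | 1 | 2;            // 0: general, 1: premium, 2: owner

interface UserRecord {
  userId: string;                      // e.g. "U003"
  userName: string;
  authority: Authority;
  ownedRoomId?: string;                // set only when authority === 2 (owner)
}

type OutputMode = "transparent" | "wall";

interface RoomRecord {
  roomId: string;                      // e.g. "R002"
  position: { x: number; y: number };  // a height axis could be added
  outputMode: OutputMode;
  upperLayer?: string;                 // parent room ID; undefined for top-level rooms
  permittedAuthorities: Authority[];   // authorities allowed to enter the room
}

// Example corresponding to the nested room R002 inside R001:
const r002: RoomRecord = {
  roomId: "R002",
  position: { x: 30, y: 12 },          // illustrative coordinates
  outputMode: "wall",
  upperLayer: "R001",
  permittedAuthorities: [1, 2],        // general (0) users cannot enter
};
console.log(r002.roomId);
```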
The room display condition storage unit 34 is a storage area that stores the conditions for changing a room's display mode based on the number of avatars in the room. FIG. 2B shows an example of the room display condition storage unit 34.
The room display condition storage unit 34 shown in FIG. 2B stores conditions and output modes in association with each other.
Each condition relates to the number of avatars in a room.
The output mode specifies the display color of the image indicating the room's boundary.
In the example of FIG. 2B, the colors indicate how many avatars are in each room.
The configurations of the storage units in the storage unit 30 described above are merely examples; there may be other storage units or other configurations. Likewise, the items listed for each storage unit are examples; other items may be added, and some items may be omitted.
The communication IF 39 in FIG. 1 is an interface for communicating with the user terminals 5 and other devices via the communication network N.
<User terminal 5>
The user terminal 5 is, for example, a personal computer such as the notebook PC shown in FIG. 1. The user terminal 5 may also be a portable terminal with computer functionality, such as a tablet or a smartphone.
Although not shown, the user terminal 5 includes at least a control unit, a storage unit, an input unit and an output unit (or a touch-panel display), and a communication IF, and may additionally include audio input/output devices such as a microphone and a speaker, a camera, and the like as necessary.
Here, a computer means an information processing device provided with a control unit, a storage device, and the like; the virtual space providing server 1 and the user terminal 5 are each information processing devices provided with a control unit, a storage unit, and the like, and are included in the concept of a computer.
<Processing of the virtual space providing system 100>
Next, the processing performed between the virtual space providing server 1 and the user terminals 5 will be described with reference to flowcharts.
FIGS. 3A and 3B are flowcharts showing the main processing of the virtual space providing server 1 according to this embodiment.
FIG. 4 is a schematic diagram of an example of a virtual space 40 generated by the virtual space providing server 1 according to this embodiment.
FIG. 5 is a flowchart showing the usage start processing of the virtual space providing server 1 according to this embodiment.
FIG. 6 is a diagram showing an example of an avatar 60 generated by the virtual space providing server 1 according to this embodiment.
FIG. 7 is a diagram showing an example of a screen 70 displayed on the user terminal 5 according to this embodiment.
FIG. 8 is a flowchart showing the operation signal reception processing of the virtual space providing server 1 according to this embodiment.
FIG. 9 is a flowchart showing the room display change processing of the virtual space providing server 1 according to this embodiment.
When the virtual space providing program 31a of the virtual space providing server 1 is executed, the main processing shown in FIG. 3A starts.
In step S11 of FIG. 3A (hereinafter, "step S" is simply written "S"), the control unit 10 (virtual space generation unit 11) of the virtual space providing server 1 refers to the room information storage unit 33 and generates a virtual space in which the rooms are arranged.
For example, the schematic diagram of the virtual space 40 shown in FIG. 4 contains two rooms 41 and 42. The room 41 further contains rooms 41a, 41b, and so on inside it, and the room 42 further contains rooms 42a, 42b, and so on inside it.
In this way, the virtual space 40 can contain a plurality of rooms 41 and 42 and, furthermore, rooms such as the room 41a can be provided inside the room 41, so the rooms can form a nested structure.
The virtual space 40 also has an entrance area 43 outside the rooms 41 and 42. The entrance area 43 is, for example, the area in which the avatar corresponding to a user is placed when that user starts using the service. The virtual space 40 may also treat the entrance area 43 as a single room.
In S12 of FIG. 3A, the control unit 10 determines whether a usage request has been received from a user terminal 5. For example, a user of the virtual space providing service offered by the virtual space providing server 1 accesses the virtual space providing server 1 from a user terminal 5, and the control unit of the user terminal 5 performs authentication using the user ID. The control unit 10 may determine that a usage request has been received from the user terminal 5 when the authentication succeeds. If a usage request has been received from the user terminal 5 (S12: YES), the control unit 10 advances the processing to S13; otherwise (S12: NO), it advances the processing to S14.
In S13, the control unit 10 performs the usage start processing.
Here, the usage start processing will be described with reference to FIG. 5.
In S31 of FIG. 5, the control unit 10 acquires the authenticated user ID.
In S32, the control unit 10 (avatar generation unit 12) refers to the user information storage unit 32 and acquires the authority of the user.
In S33, the control unit 10 receives the display color of the avatar selected by the user, for example, from an avatar display color selection screen (not shown).
In S34, the control unit 10 generates an avatar of the specified color.
FIG. 6 shows an example of a generated avatar 60.
The avatar 60 is a humanoid image consisting of a face part 61 and a body part 62. In this embodiment, because the avatar is displayed through a web browser, the avatar 60 is designed with a reduced polygon count. The avatar 60 is a 3D image.
In S35 of FIG. 5, the control unit 10 places the generated avatar at a predetermined position in the virtual space already generated by the processing of S11 in FIG. 3A. If, for example, the entrance area of the virtual space (see FIG. 4) is designated as the predetermined position, the control unit 10 places the avatar in the entrance area.
In S36, the control unit 10 (generated image output unit 15) transmits to the user terminal 5 the generated image in which the avatar is placed in the virtual space.
FIG. 7 shows an example of a screen 70 displayed on the user terminal 5. On the screen 70, rooms 72 and 73 and so on are arranged on a floor 71. The user's own avatar 75 is displayed at the front center, and the avatars 76 are the avatars of other users.
The floor 71 corresponds, for example, to the virtual space 40 in FIG. 4, and the rooms 72 and 73 correspond to the rooms 42a and 42b in FIG. 4. On the screen 70, the room 42 (see FIG. 4) has a transparent output mode, so no image indicating the boundary of the room 42 is displayed.
In S37 of FIG. 5, the control unit 10 (sound control unit 21, sound output unit 23) transmits the sound corresponding to the avatar's position to the user terminal 5.
Through this processing, the user terminal 5 plays, through its speaker, the sound of the space corresponding to the position where the avatar is placed. The user can therefore feel as if they were actually in that space.
After that, the control unit 10 advances the processing to S14 in FIG. 3A.
In S14 of FIG. 3A, the control unit 10 (avatar processing unit 13) determines whether an operation signal has been received from a user terminal 5. If an operation signal has been received (S14: YES), the control unit 10 advances the processing to S15; otherwise (S14: NO), it advances the processing to S17 in FIG. 3B.
In S15, the control unit 10 performs the operation signal reception processing.
Here, the operation signal reception processing will be described with reference to FIG. 8.
In S41 of FIG. 8, the control unit 10 (avatar processing unit 13) moves the avatar according to the operation signal.
In S42, the control unit 10 (generated image output unit 15) transmits to the user terminal 5 the generated images in which the avatar is placed in the virtual space during and after the movement. The control unit 10 (sound control unit 21, sound output unit 23) also transmits the sound corresponding to the new position to the user terminal 5.
In S43, the control unit 10 (avatar position determination unit 14) determines whether the movement is a movement into a room. The control unit 10 can determine, from the positional relationship between the avatar and the room, whether the movement enters the room. If the movement enters a room (S43: YES), the control unit 10 advances the processing to S44; otherwise (S43: NO), it advances the processing to S46.
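The check in S43 can be sketched geometrically. The sketch below assumes axis-aligned rectangular rooms (an assumption; the specification only records a position per room): a move enters a room when the destination lies inside a room that the origin was not inside, and among nested rooms the innermost match is used.

```typescript
// Hypothetical geometric sketch of the S43 check (rectangular rooms are an assumption).
interface Vec2 { x: number; y: number; }
interface RoomBounds {
  roomId: string;
  upperLayer?: string;                  // parent room ID for nested rooms
  min: Vec2;                            // lower-left corner
  max: Vec2;                            // upper-right corner
}

function contains(r: RoomBounds, p: Vec2): boolean {
  return p.x >= r.min.x && p.x <= r.max.x && p.y >= r.min.y && p.y <= r.max.y;
}

// Nesting depth: number of ancestors reached through upperLayer.
function depth(r: RoomBounds, rooms: RoomBounds[]): number {
  let d = 0;
  let parentId = r.upperLayer;
  while (parentId) {
    d++;
    parentId = rooms.find((x) => x.roomId === parentId)?.upperLayer;
  }
  return d;
}

// Innermost room containing the point: the deepest nesting level wins.
function innermostRoom(rooms: RoomBounds[], p: Vec2): RoomBounds | undefined {
  const hits = rooms.filter((r) => contains(r, p));
  return hits.sort((a, b) => depth(b, rooms) - depth(a, rooms))[0];
}

// The move enters a room when the destination's innermost room differs from the origin's.
function enteredRoom(rooms: RoomBounds[], from: Vec2, to: Vec2): RoomBounds | undefined {
  const before = innermostRoom(rooms, from);
  const after = innermostRoom(rooms, to);
  return after && after.roomId !== before?.roomId ? after : undefined;
}
```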
In S44, the control unit 10 (avatar position determination unit 14) refers to the user information storage unit 32 and the room information storage unit 33 to check whether movement into the room is permitted. For example, if the authority of the user stored in the user information storage unit 32 is included in the permitted authority of the room in the room information storage unit 33, the control unit 10 determines that movement into the room is permitted; otherwise, it determines that movement into the room is not permitted.
In S45, the control unit 10 (avatar position determination unit 14) determines, based on the check result, whether the avatar may enter the room. If the avatar may enter the room (S45: YES), the control unit 10 advances the processing to S46; otherwise (S45: NO), it advances the processing to S49.
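A compact sketch of the S44/S45 check might look as follows: it simply asks whether the user's authority appears among the room's permitted authorities. The type names reuse the shape of FIG. 2A and FIG. 2C and are assumptions.

```typescript
// Hypothetical sketch of the room-entry permission check in S44/S45.
type Authority = 0 | 1 | 2;            // 0: general, 1: premium, 2: owner

interface UserRecord { userId: string; authority: Authority; }
interface RoomRecord { roomId: string; permittedAuthorities: Authority[]; }

function mayEnter(user: UserRecord, room: RoomRecord): boolean {
  // Entry is permitted when the user's authority is listed in the room's permitted authorities.
  return room.permittedAuthorities.includes(user.authority);
}

// Example: a general user (authority 0) is rejected by a room limited to premium and owner.
const ok = mayEnter(
  { userId: "U001", authority: 0 },
  { roomId: "R002", permittedAuthorities: [1, 2] },
);
console.log(ok); // false
```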
In S46, the control unit 10 (avatar processing unit 13) moves the avatar according to the operation signal.
In S47, the control unit 10 (generated image output unit 15) transmits to the user terminal 5 the generated images in which the avatar is placed in the virtual space during and after the movement. The control unit 10 (sound control unit 21, sound output unit 23) also transmits the sound corresponding to the new position to the user terminal 5. After that, the control unit 10 advances the processing to S16 in FIG. 3A.
On the other hand, in S49, the control unit 10 (avatar processing unit 13) stops the movement of the avatar. That is, because the avatar cannot enter the destination room indicated by the operation signal, it stays at the boundary of the room. After that, the control unit 10 advances the processing to S16 in FIG. 3A.
In S16 of FIG. 3A, the control unit 10 performs the room display change processing.
Here, the room display change processing will be described with reference to FIG. 9.
In S51 of FIG. 9, the control unit 10 counts the number of avatars placed in each room.
In S52, the control unit 10 refers to the room display condition storage unit 34 and determines the display mode of each room based on the counted number of avatars.
In S53, the control unit 10 (virtual space generation unit 11) changes the display mode of each room to the determined display mode.
For example, when the number of avatars belonging to a room changes from 9 to 10, the control unit 10 changes the image indicating the room's boundary to green. If the image indicating the room's boundary was transparent, a greenish translucent wall is displayed.
After that, the control unit 10 advances the processing to S17 in FIG. 3B.
In S17 of FIG. 3B, the control unit 10 (voice reception unit 22) determines whether audio data has been received from a user terminal 5. If audio data has been received (S17: YES), the control unit 10 advances the processing to S18; otherwise (S17: NO), it advances the processing to S19.
In S18, the control unit 10 (sound control unit 21, sound output unit 23) transmits the received audio data to the user terminals 5 of the users whose avatars are in the same space as the speaking user's avatar.
In S19, the control unit 10 determines whether a termination request has been received from a user terminal 5. The control unit 10 determines that a termination request has been received when, for example, it receives from the user terminal 5 an instruction for the avatar to leave the virtual space (that is, to log out). If a termination request has been received (S19: YES), the control unit 10 advances the processing to S20; otherwise (S19: NO), it returns the processing to S12 in FIG. 3A.
In S20, the control unit 10 (virtual space generation unit 11) deletes the avatar of the user who made the termination request from the virtual space.
Through this processing, the user of each user terminal 5 can enter and leave the generated virtual space using an avatar.
After that, the control unit 10 returns the processing to S12 in FIG. 3A.
Next, usage examples of the virtual space providing service will be described.
(Usage example 1) Selling goods
The virtual space can be used for selling various goods. In that case, a room can be, for example, the space used by a single store. The store owner is associated with the room ID of that room, and images of the goods for sale are placed in the room.
FIG. 10 is a diagram showing an example of a screen 80 displayed on the user terminal 5 according to this embodiment.
The screen 80 shows, for example, a virtual store holding a comic market. The screen 80 includes a room 81, a wall section 82, a store clerk avatar 83, product images 84, and a customer avatar 85.
Part of the boundary of the room 81 forms the wall section 82, on which various animation images are arranged. The store clerk avatar 83 acts on behalf of the user selling the product images 84. The customer avatar 85 can purchase a desired product image 84 while communicating by voice with the store clerk avatar 83.
Here, the goods for sale may be image data such as the product images 84, or they may be actual goods rather than images. In the case of image data, once payment for the goods has been settled, the image data is transmitted to the user terminal 5 of the purchasing user. In the case of actual goods, once payment has been settled, delivery of the goods is arranged. Various online payment methods, such as credit cards, electronic money, and virtual currency, can be used for settlement; in that case, the control unit 10 may request a settlement server (not shown) to perform the settlement processing.
(Usage example 2) Communication with the user of a specific avatar
The virtual space can also be used, for example, to enjoy conversations with an idol. In that case, a room is the space used by the idol and the users who want to talk with the idol, and a special room is further provided inside the room as a space shared by the idol and only one user at a time.
The avatars of the users waiting to talk with the idol wait in the room, and each user is allowed to enter the special room containing the idol's avatar in turn, only for a predetermined time (for example, one minute). In that case, permission to enter the special room is granted to each user for their time slot, and when the predetermined time has elapsed, the user is forced to leave the special room. In this way, a user can enjoy a one-on-one conversation with their favorite idol. A fee can also be charged for the conversation with the idol.
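A minimal sketch of this timed special-room access, assuming the server tracks one active visitor and a fixed slot length, could look as follows; the names, the queue, and the timer-based eviction are illustrative assumptions.

```typescript
// Hypothetical sketch of time-limited entry to the special room (usage example 2).
const SLOT_MS = 60_000;                      // one-minute slot, as in the example

interface SpecialRoom {
  waitingQueue: string[];                    // user IDs waiting in the outer room
  currentVisitor?: string;                   // user currently allowed inside
}

function admitNext(room: SpecialRoom, forceLeave: (userId: string) => void): void {
  const next = room.waitingQueue.shift();
  if (!next) return;
  room.currentVisitor = next;                // grant entry permission for this slot only
  setTimeout(() => {
    forceLeave(next);                        // forcibly move the avatar back out of the special room
    room.currentVisitor = undefined;
    admitNext(room, forceLeave);             // let the next waiting user in
  }, SLOT_MS);
}

// Example: two users queue up; each gets the special room for one minute in turn.
const room: SpecialRoom = { waitingQueue: ["U010", "U011"] };
admitNext(room, (userId) => console.log(`${userId} leaves the special room`));
```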
(Usage example 3) Conference space
The virtual space can be used for a conference with specific users. In that case, a room is assigned to the user who hosts the conference, and the hosting user sets the authority so that the conference participants are permitted to enter the room.
The participants can then talk and chat in the room and use it like an ordinary conference.
As described above, the virtual space providing server 1 of this embodiment has the following effects.
(1) A generated image in which rooms are arranged in a virtual space and avatars are further placed is output to the user terminals 5. Each room is placed at a designated position in the virtual space, in a designated output mode, by referring to the room information storage unit 33, which stores the room's position and output mode. A virtual space based on a new concept, having rooms within the virtual space, can therefore be provided to users. Moreover, because a room can be displayed so that users see it or hidden so that they do not, rooms can be used for a variety of purposes.
(2) Because the virtual space can have rooms within rooms, various controls can conveniently be performed on a room-by-room basis.
(3) The room display condition storage unit 34 stores conditions on the number of avatars together with output modes, and by referring to it the display color of a room can be changed according to the number of avatars located in the room. The degree of congestion of a room can therefore be indicated visually by its color.
(4) The position of an avatar is moved in response to receiving, from the user terminal 5, an operation signal for moving the avatar. The user can therefore move their own avatar within the virtual space by means of movement operations.
(5) By referring to the user information storage unit 32, which stores user information including the user's authority information, and to the permitted authority of the room stored in the room information storage unit 33, it is determined whether a moving avatar is permitted to move into a room, and the avatar is moved into (enters) the room only when the movement is permitted.
Whether an avatar may enter a room can therefore be controlled by authority.
(6) The generated image for a predetermined range of the virtual space is output to the user terminal 5, and, as the avatar moves, only the difference in the generated image for the predetermined range around the avatar's new position is transmitted to the user terminal 5 in units of rooms. The transmission of generated image data can therefore be limited to the necessary range, which reduces the processing load. In particular, when the generated image is displayed in a web browser, keeping the amount of data small keeps the processing light.
(7) Sound output is controlled for each space, including the virtual space and the rooms, and the sound of the space where the user's avatar is located is output to the user terminal 5, so the user can hear sound as if they were actually in that virtual space or room.
(8) When audio is received from a user terminal 5, the audio is transmitted to the user terminals 5 of the users whose avatars are in the space corresponding to the position of the speaking user's avatar, so avatars in the same space can converse with one another.
Although embodiments of the present invention have been described above, the present invention is not limited to the embodiments described above. The effects described in the embodiments merely list the most favorable effects arising from the present invention, and the effects of the present invention are not limited to those described in the embodiments. The embodiments described above and the modifications described below may be used in appropriate combination, but detailed description of such combinations is omitted.
(Modifications)
(1) In the embodiment described above, the avatars all have the same shape, as shown by the avatar 60 in FIG. 6, but the invention is not limited to this. For example, the avatars may be differentiated by outputting the actual user's face image on the portion corresponding to the avatar's face.
FIG. 11 is a diagram showing an example of an avatar 60-2 generated by the virtual space providing server according to this modification.
The user terminal is equipped with a camera, and when the user uses the service, the control unit of the user terminal acquires the user's face image via the camera and transmits the face image data to the virtual space providing server. The control unit of the virtual space providing server receives the user's face image data and generates an avatar 60-2 in which the face image 65 is composited onto the face part 61-2. If the control unit of the user terminal keeps transmitting the user's face image data while the service is in use, the control unit of the virtual space providing server generates the avatar from the received face image data, so an avatar whose facial expression changes in real time can be output in the virtual space.
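On the terminal side, the continuous transmission of the user's face image could be sketched with standard browser APIs as follows; the WebSocket endpoint, capture interval, and image size are assumptions, and the server-side compositing onto the avatar's face part is not shown.

```typescript
// Hypothetical browser-side sketch of streaming face images for the avatar (modification 1).
async function streamFaceImages(socketUrl: string, intervalMs = 500): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = 128;                     // a small texture is enough for a low-polygon avatar
  canvas.height = 128;
  const ctx = canvas.getContext("2d")!;
  const socket = new WebSocket(socketUrl);

  setInterval(() => {
    if (socket.readyState !== WebSocket.OPEN) return;
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    // Send the current frame as a data URL; the server would composite it onto the face part.
    socket.send(canvas.toDataURL("image/jpeg", 0.7));
  }, intervalMs);
}
```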
(2) In the embodiment described above, the color of a room is changed according to the number of avatars, but the invention is not limited to this. The transparency of the room may be changed instead.
The trigger for changing a room's display mode is also not limited to the number of avatars; for example, the color or transparency of a room may be varied according to the sound volume.
(3) In the embodiment described above, the avatar of every user is displayed, but the invention is not limited to this. For example, instead of displaying all avatars, only some of them, such as those located near the user's own avatar, may be displayed depending on the number of users with avatars in the room. This avoids a visually crowded state in which a large number of avatars are displayed.
(4) In the embodiment described above, voice conversation within a room was described, but the invention is not limited to this. Text-based conversation, that is, chat, may also be held in a room. In that case, the chat content is output to a user terminal only while the user is in the room, so a user can take part in the text conversation held in a room only while they are in that room.
The virtual space providing server may also have a translation function or cooperate with an external device that has a translation function. With a translation function, conversations held by voice or text in different languages can conveniently be converted into each user's own language before being output.
1 Virtual space providing server
5 User terminal
10 Control unit
11 Virtual space generation unit
12 Avatar generation unit
13 Avatar processing unit
14 Avatar position determination unit
15 Generated image output unit
21 Sound control unit
22 Voice reception unit
23 Sound output unit
30 Storage unit
31a Virtual space providing program
32 User information storage unit
33 Room information storage unit
34 Room display condition storage unit
60, 60-2 Avatar
100 Virtual space providing system
N Communication network

Claims (10)

  1.  A virtual space providing device connected to a plurality of user terminals via a communication network, comprising:
     a small space information storage unit that stores small space information including information on the position and output mode of a small space arranged in a three-dimensional virtual space;
     small space mode determination means for determining the output mode of the small space by referring to the small space information storage unit;
     virtual space generation means for generating the three-dimensional virtual space including the small space in the output mode determined by the small space mode determination means;
     object generation means for generating an object associated with the user of each user terminal; and
     generated image output means for outputting, to each user terminal, a generated image in which the object generated by the object generation means is arranged in the three-dimensional virtual space generated by the virtual space generation means.
  2.  The virtual space providing device according to claim 1, wherein
     the three-dimensional virtual space has a nested structure in which a further small space can be arranged within the small space, and
     the small space information storage unit further stores information about the nested structure.
  3.  The virtual space providing device according to claim 1 or claim 2, wherein
     the small space mode determination means determines at least one of the display color and the transparency of the small space so as to differ according to the number of objects located in the small space.
  4.  The virtual space providing device according to any one of claims 1 to 3, comprising:
     operation reception means for receiving, from a user terminal, an operation signal for moving the object corresponding to the user of that user terminal; and
     object movement means for moving the object according to the operation signal received by the operation reception means.
  5.  The virtual space providing device according to claim 4, comprising
     a user information storage unit that stores user information including authority information of the user, wherein
     the small space information storage unit further stores entry permission information for the small space corresponding to the authority information,
     the device further comprises internal movement determination means for, when the object is moved to the position of the small space by the object movement means, referring to the user information storage unit and the small space information storage unit to determine whether to permit the object to move into the small space, and
     the object movement means moves the object into the small space only when the internal movement determination means permits the movement into the small space.
  6.  The virtual space providing device according to claim 4 or claim 5, wherein
     the generated image output means
      outputs, to a user terminal, the generated image of the three-dimensional virtual space within a predetermined range from the position of the object corresponding to the user of that user terminal, and,
      in response to movement of the object by the object movement means, outputs to the user terminal, in units of the small spaces, the generated image of the portion of the three-dimensional virtual space that is within the predetermined range from the position of the object after the movement and that has not yet been output.
  7.  The virtual space providing device according to any one of claims 1 to 6, comprising:
     sound control means for controlling sound output for each space including the three-dimensional virtual space and the small space; and
     sound output means for outputting, to a user terminal, the sound of the space, controlled by the sound control means, corresponding to the position of the object corresponding to the user of that user terminal.
  8.  The virtual space providing device according to claim 7, comprising
     voice reception means for receiving voice from a user terminal, wherein
     the sound output means outputs the voice received by the voice reception means to the user terminals of users associated with other objects in the space corresponding to the position of the object.
  9.  A virtual space providing method performed by a computer connected to a plurality of user terminals via a communication network, the method comprising, by the computer:
     a small space mode determination step of determining the output mode of a small space by referring to a small space information storage unit that stores small space information including information on the position and output mode of the small space arranged in a three-dimensional virtual space;
     a virtual space generation step of generating the three-dimensional virtual space including the small space in the output mode determined in the small space mode determination step;
     an object generation step of generating an object associated with the user of each user terminal; and
     a generated image output step of outputting, to each user terminal, a generated image in which the object generated in the object generation step is arranged in the three-dimensional virtual space generated in the virtual space generation step.
  10.  A program for causing a computer connected to a plurality of user terminals via a communication network to function as:
     small space mode determination means for determining the output mode of a small space by referring to a small space information storage unit that stores small space information including information on the position and output mode of the small space arranged in a three-dimensional virtual space;
     virtual space generation means for generating the three-dimensional virtual space including the small space in the output mode determined by the small space mode determination means;
     object generation means for generating an object associated with the user of each user terminal; and
     generated image output means for outputting, to each user terminal, a generated image in which the object generated by the object generation means is arranged in the three-dimensional virtual space generated by the virtual space generation means.
PCT/JP2022/027216 2021-07-14 2022-07-11 Virtual space provision device, virtual space provision method, and program WO2023286727A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021116335A JP2023012716A (en) 2021-07-14 2021-07-14 Virtual space provision device, virtual space provision method, and program
JP2021-116335 2021-07-14

Publications (1)

Publication Number Publication Date
WO2023286727A1 true WO2023286727A1 (en) 2023-01-19

Family

ID=84919316

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/027216 WO2023286727A1 (en) 2021-07-14 2022-07-11 Virtual space provision device, virtual space provision method, and program

Country Status (2)

Country Link
JP (1) JP2023012716A (en)
WO (1) WO2023286727A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010092304A (en) * 2008-10-08 2010-04-22 Sony Computer Entertainment Inc Information processing apparatus and method

Also Published As

Publication number Publication date
JP2023012716A (en) 2023-01-26

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22842071

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE