WO2022209564A1 - Information processing system, information processing method, and information processing program - Google Patents
Information processing system, information processing method, and information processing program
- Publication number
- WO2022209564A1 (PCT/JP2022/009184)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- image
- distribution
- unit
- terminal device
- Prior art date
Classifications
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- A63F13/53—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4781—Games
- H04N7/157—Conference systems defining a virtual conference space and using avatars or agents
- G06Q10/10—Office automation; Time management
Definitions
- The present disclosure relates to an information processing system, an information processing method, and an information processing program.
- "Mirrativ" is known as a service for distributing music and videos to multiple users (see, for example, Non-Patent Document 1).
- An object of the present disclosure is to provide a suitable content distribution technology in virtual space.
- Disclosed is an information processing system comprising: a first detection unit that detects a state of a first user who distributes content including an image of a first virtual space; a first drawing processing unit that draws the image of the first virtual space visible to the first user, or to a second user via a second terminal device associated with the second user; and a first storage unit that stores a first display medium associated with the first user. The image of the first virtual space has a plurality of layers, and the first drawing processing unit includes a first rendering unit that renders a first image region of a layer related to the first display medium among the plurality of layers, and a second rendering unit that renders a second image region of a layer related to a first user interface among the plurality of layers. The first rendering unit changes the state of the first display medium in the first image region based on the state of the first user detected by the first detection unit.
- According to the present disclosure, a suitable content distribution technology in virtual space is obtained.
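- As an illustration only: the following is a minimal TypeScript sketch of how the disclosed units could relate to one another; all type and member names are hypothetical and are not taken from the disclosure.

```typescript
// Hypothetical sketch of the claimed arrangement (not the actual implementation).
interface UserState {
  orientation: [number, number, number]; // e.g. head yaw/pitch/roll
  position: [number, number, number];
  gaze: [number, number, number];        // line-of-sight direction
}

// A layer of the virtual-space image, ordered by distance from the viewpoint.
interface Layer {
  id: string;       // e.g. "displayMedium", "userInterface", "background"
  distance: number; // distance from the viewer's viewpoint
  render(state: UserState): void;
}

// First detection unit: detects the state of the first (distributing) user.
interface DetectionUnit {
  detect(): UserState;
}

// First drawing processing unit: renders each layer's image region and
// overlays them, changing the display medium based on the detected state.
class DrawingProcessingUnit {
  constructor(private layers: Layer[]) {}

  drawFrame(detector: DetectionUnit): void {
    const state = detector.detect();
    // Render back to front so nearer layers overlay farther ones.
    [...this.layers]
      .sort((a, b) => b.distance - a.distance)
      .forEach((layer) => layer.render(state));
  }
}
```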
- FIG. 1 is a block diagram of a virtual reality generation system according to this embodiment.
- An explanatory diagram of the images visually recognized by the left and right eyes, respectively.
- An explanatory diagram of the structure of an image in the virtual space according to the present embodiment.
- An explanatory diagram of the virtual space corresponding to the home screen (lobby screen).
- An explanatory diagram of the hierarchical structure of the home image.
- An enlarged view of the field-of-view range from the user's viewpoint in FIG. 5 (when viewing the front).
- A diagram showing an example of the home image viewed via the wearable device.
- An explanatory diagram of a mode in which the front and rear of the operation areas are switched.
- A schematic side view of the virtual space corresponding to a distribution image.
- A schematic top view of the virtual space corresponding to the distribution image.
- An explanatory diagram of the hierarchical structure of distribution images.
- An explanatory diagram showing, in top view, the relationship between a mirror and each layer of a distribution image.
- An explanatory diagram of the interface display layer related to a distribution image.
- A diagram showing an example of a distribution image viewed by a distribution user via the wearable device.
- A schematic side view of the virtual space corresponding to a distribution image.
- A schematic top view of the virtual space corresponding to the distribution image.
- An explanatory diagram of the hierarchical structure of distribution images.
- An explanatory diagram showing, in top view, the relationship between a mirror and each layer of a distribution image.
- An explanatory diagram of the interface display layer related to a distribution image.
- A diagram showing an example of a distribution image viewed by a viewing user via the wearable device.
- A diagram showing an example of a distribution image for smartphones.
- A schematic block diagram showing functions of a terminal device on the content distribution side.
- An explanatory diagram of an example of drawing information of an avatar.
- A schematic block diagram showing functions of a terminal device on the content viewing side.
- A schematic block diagram showing functions of a server device.
- A flowchart showing an example of operations up to the start of distribution of specific content.
- A flow diagram showing an example of operations up to the start of viewing of specific content.
- A flowchart showing an example of operations during distribution of specific content by a distribution user (that is, during viewing of the specific content by a viewing user).
- FIG. 1 is a block diagram of a virtual reality generation system 1 according to this embodiment.
- the virtual reality generation system 1 includes a server device 10 (an example of an external processing device) and one or more terminal devices 20. Although three terminal devices 20 are illustrated in FIG. 1 for the sake of simplicity, the number of terminal devices 20 may be two or more.
- the server device 10 is, for example, an information processing system such as a server managed by an operator who provides one or more virtual realities.
- the terminal device 20 is a device used by a user, such as a mobile phone, a smart phone, a tablet terminal, a PC (Personal Computer), a wearable device (a head-mounted display, a glasses-type device, etc.), or a game device.
- a plurality of terminal devices 20 can be connected to the server device 10 via the network 3, typically in a different manner for each user.
- the terminal device 20 can execute the virtual reality application according to this embodiment.
- The virtual reality application may be received by the terminal device 20 from the server device 10 or a predetermined application distribution server via the network 3, or may be stored in advance in a storage medium such as a storage device provided in the terminal device 20 or a memory card readable by the terminal device 20.
- The server device 10 and the terminal device 20 are communicably connected via the network 3.
- the server device 10 and the terminal device 20 cooperate to execute various processes related to virtual reality.
- The terminal device 20 includes a content distribution side terminal device 20A (an example of a first terminal device) and a content viewing side terminal device 20B (an example of a second terminal device).
- In the following, the terminal device 20A on the content distribution side and the terminal device 20B on the content viewing side are described as separate terminal devices, but the terminal device 20A on the content distribution side may also serve as the terminal device 20B on the content viewing side, and vice versa. Hereinafter, when the terminal device 20A and the terminal device 20B are not particularly distinguished, they may simply be referred to as the "terminal device 20".
- Each terminal device 20 is communicably connected to each other via the server device 10.
- “one terminal device 20 transmits information to another terminal device 20” means “one terminal device 20 transmits information to another terminal device 20 via the server device 10".
- “one terminal device 20 receives information from another terminal device 20” means “one terminal device 20 receives information from another terminal device 20 via the server device 10”.
- Note that each terminal device 20 may be communicably connected without going through the server device 10.
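- As a concrete illustration of this relay convention, here is a minimal sketch assuming a hypothetical RelayServer class; the disclosure does not specify any particular API.

```typescript
// Hypothetical relay: "terminal A sends to terminal B" always passes
// through the server, mirroring the convention described above.
type Message = { from: string; to: string; payload: unknown };

class RelayServer {
  private terminals = new Map<string, (m: Message) => void>();

  // Register a terminal device and its receive callback.
  connect(id: string, onReceive: (m: Message) => void): void {
    this.terminals.set(id, onReceive);
  }

  // Forward a message from one terminal device to another.
  send(message: Message): void {
    this.terminals.get(message.to)?.(message);
  }
}

// Usage: terminal 20A sends avatar state to terminal 20B via the server.
const server = new RelayServer();
server.connect("20B", (m) => console.log("20B received", m.payload));
server.send({ from: "20A", to: "20B", payload: { pose: "wave" } });
```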
- the network 3 may include a wireless communication network, the Internet, a VPN (Virtual Private Network), a WAN (Wide Area Network), a wired network, or any combination thereof.
- the virtual reality generation system 1 includes studio units 30A and 30B.
- the studio units 30A and 30B are devices on the content delivery side, like the terminal device 20A on the content delivery side.
- the studio units 30A, 30B may be arranged in studios, rooms, halls, etc. for content production.
- Each studio unit 30 can have the same functions as the terminal device 20A and/or the server device 10 on the content distribution side.
- In the following, a mode in which the terminal device 20A on the content distribution side distributes various contents to the terminal devices 20B on the content viewing side via the server device 10 will be mainly described.
- However, the studio units 30A and 30B facing the distribution users have the same functions as the terminal device 20A on the content distribution side, and may likewise distribute various contents to the terminal devices 20B on the content viewing side.
- the virtual reality generation system 1 may not include the studio units 30A and 30B.
- In the following description, the virtual reality generation system 1 implements an example of an information processing system; however, each element of a specific terminal device 20 (see the terminal communication unit 21 to the terminal control unit 25 in FIG. 1) may implement an example of an information processing system, or a plurality of terminal devices 20 may cooperate to implement an example of an information processing system. Further, the server device 10 may implement an example of an information processing system by itself, or the server device 10 and one or more terminal devices 20 may cooperate to implement an example of an information processing system.
- The virtual reality according to the present embodiment is, for example, virtual reality for education, travel, role-playing, simulation, entertainment such as games and concerts, and the like, and a virtual reality medium such as an avatar is used in implementing the virtual reality.
- the virtual reality according to this embodiment is realized by a three-dimensional virtual space, various virtual reality media appearing in the virtual space, and various contents provided in the virtual space.
- Virtual reality media are electronic data used in virtual reality, and include arbitrary media such as cards, items, points, in-service currency (or currency in virtual reality), tickets, characters, avatars, parameters, and the like.
- the virtual reality medium may be virtual reality related information such as level information, status information, parameter information (physical strength, attack power, etc.) or ability information (skills, abilities, spells, jobs, etc.).
- Virtual reality media are electronic data that can be acquired, owned, used, managed, exchanged, combined, enhanced, sold, discarded, or gifted by users within the virtual reality; however, virtual reality media are not limited to those explicitly described in this specification.
- In this embodiment, the users include a viewing user (an example of a second user) who views various contents in the virtual space, and a distribution user (an example of a first user) who distributes specific content in the virtual space via a distributor avatar M2 (an example of a first display medium), which will be described later.
- a distribution user can also view specific content by another distribution user, and conversely, a viewing user can also distribute specific content as a distribution user.
- Note that the viewing user refers to the user who is viewing at that time, and the distribution user refers to the user who is distributing at that time.
- When there is no particular distinction between a distribution user and a viewing user, they may simply be referred to as "users".
- Similarly, when the viewer avatar M1 (an example of the second display medium) and the distributor avatar M2 are not particularly distinguished, they may simply be referred to as "avatars".
- An avatar is typically in the form of a character with a frontal orientation, and may take the form of a person, an animal, or the like.
- Avatars can have various appearances (appearances when drawn) by being associated with various avatar items.
- a viewing user and a distribution user basically wear a wearable device, for example, on a part of their head or face, and view the virtual space through the wearable device.
- the wearable device may be a head-mounted display or a glasses-type device.
- the glasses-type device may be so-called AR (Augmented Reality) glasses or MR (Mixed Reality) glasses.
- the wearable device may be separate from the terminal device 20, or may implement some or all of the functions of the terminal device 20.
- In this specification, the specific content by the distribution user refers to content in which the distributor avatar M2 related to the distribution user appears in the virtual space, with the distributor avatar M2 changing its orientation, position, movement, and the like according to the orientation, position, movement, and the like of the distribution user.
- Note that the orientation, position, and movement of the distribution user are concepts that include not only the orientation, position, and movement of part or all of the body, such as the face and hands of the distribution user, but also the direction, position, movement, and the like of the distribution user's line of sight.
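- A minimal sketch of this state-to-avatar mapping follows; the type names are hypothetical, and a real system might smooth, retarget, or clamp the values rather than copying them one-to-one.

```typescript
// Hypothetical mapping from the detected distribution-user state to the
// distributor avatar M2, covering body pose and line of sight alike.
interface TrackedState {
  headOrientation: [number, number, number];
  position: [number, number, number];
  gazeDirection: [number, number, number];
}

interface AvatarPose {
  orientation: [number, number, number];
  position: [number, number, number];
  eyeDirection: [number, number, number];
}

function toAvatarPose(s: TrackedState): AvatarPose {
  // The avatar follows the user one-to-one in this sketch.
  return {
    orientation: s.headOrientation,
    position: s.position,
    eyeDirection: s.gazeDirection,
  };
}
```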
- the specified content by the distribution user is typically moving image content.
- Specific content by the distribution user typically provides entertainment in any manner via the distributor avatar M2.
- For example, the specific content by the distribution user may relate to various performances such as dancing, music, and magic, or to chats, meetings, gatherings, conferences, and the like.
- the specific content by the distribution user may be educational.
- the specific content by the distribution user may include guidance, advice, etc. from the distribution user via the distributor avatar M2.
- content provided in virtual reality for dance lessons may include guidance and advice from a dance teacher.
- the dance teacher becomes the distribution user, the students become the viewing users, and the students can receive individual guidance from the teacher in virtual reality.
- Note that the specific content by distribution users may include a form of collaboration by two or more distribution users (hereinafter abbreviated as "collab"). This enables distribution in a variety of modes and promotes exchanges between distribution users.
- the server device 10 can also distribute content other than the specific content by the distribution user.
- The type and number of contents provided by the server device 10 (contents provided in virtual reality) are arbitrary, and may include, for example, digital content such as various videos.
- the video may be real-time video or non-real-time video.
- the video may be a video based on a real image, or may be a video based on CG (Computer Graphics).
- the video may be a video for providing information.
- The video may relate to information provision services of a specific genre (information provision services related to travel, housing, food, fashion, health, beauty, and the like), broadcast services by specific users (for example, YouTube (registered trademark)), and the like.
- The method of providing content in virtual reality is arbitrary, and the content may be provided in ways other than by using the display function of the head-mounted display.
- the content may be provided by rendering the video on the display of a display device (virtual reality medium) in the virtual space.
- The display device in the virtual space may take any form, such as a screen installed in the virtual space, a large-screen display installed in the virtual space, or a display of a mobile terminal in the virtual space.
- content in virtual reality may be viewable by methods other than via a head-mounted display.
- content in virtual reality may be viewed directly (not via a head-mounted display) via a smartphone, tablet, or the like.
- the configuration of the server device 10 will be specifically described.
- the server device 10 is configured by a server computer.
- the server device 10 may be implemented in cooperation with a plurality of server computers.
- the server device 10 may be implemented in cooperation with a server computer that provides various contents, a server computer that implements various authentication servers, or the like.
- the server device 10 may include a web server.
- In this case, some of the functions of the terminal device 20, which will be described later, may be realized by a browser processing an HTML document received from the web server and various programs (JavaScript) attached thereto.
- The server device 10 includes a server communication unit 11, a server storage unit 12, and a server control unit 13, as shown in FIG. 1.
- the server communication unit 11 includes an interface that communicates with an external device wirelessly or by wire to transmit and receive information.
- the server communication unit 11 may include, for example, a wireless LAN (Local Area Network) communication module or a wired LAN communication module.
- The server communication unit 11 can transmit and receive information to and from the terminal device 20 via the network 3.
- the server storage unit 12 is, for example, a storage device, and stores various information and programs necessary for various processes related to virtual reality.
- The server control unit 13 may include a dedicated microprocessor, a CPU (Central Processing Unit) that implements a specific function by reading a specific program, a GPU (Graphics Processing Unit), or the like. For example, the server control unit 13 cooperates with the terminal device 20 to execute the virtual reality application according to the user's operation on the display unit 23 of the terminal device 20.
- The terminal device 20 includes a terminal communication section 21, a terminal storage section 22, a display section 23, an input section 24, and a terminal control section 25.
- the terminal communication unit 21 includes an interface that communicates with an external device wirelessly or by wire and transmits and receives information.
- The terminal communication unit 21 may include, for example, a communication module supporting LTE (Long Term Evolution) (registered trademark), LTE-A (LTE-Advanced), a fifth-generation mobile communication system, UMB (Ultra Mobile Broadband), or the like, a wireless LAN communication module, a wired LAN communication module, or the like.
- The terminal communication unit 21 can transmit and receive information to and from the server device 10 via the network 3.
- the terminal storage unit 22 includes, for example, a primary storage device and a secondary storage device.
- the terminal storage unit 22 may include semiconductor memory, magnetic memory, optical memory, or the like.
- the terminal storage unit 22 stores various information and programs received from the server device 10 and used for virtual reality processing.
- Information and programs used for virtual reality processing may be acquired from an external device via the terminal communication unit 21.
- a virtual reality application program may be obtained from a predetermined application distribution server.
- the application program will also simply be referred to as an application.
- the terminal storage unit 22 also stores data for drawing a virtual space, such as an image of an indoor space such as a building or an outdoor space.
- a plurality of types of data for drawing the virtual space may be prepared for each virtual space and used separately.
- the terminal storage unit 22 also stores various images (texture images) for projection (texture mapping) onto various objects placed in the three-dimensional virtual space.
- the terminal storage unit 22 stores drawing information of the viewer avatar M1 as a virtual reality medium associated with each user.
- the viewer avatar M1 is drawn in the virtual space based on the drawing information of the viewer avatar M1.
- the terminal storage unit 22 stores drawing information of the distributor avatar M2 as a virtual reality medium associated with each distribution user.
- the distributor avatar M2 is drawn in the virtual space based on the drawing information of the distributor avatar M2.
- the terminal storage unit 22 stores drawing information related to various objects different from the viewer avatar M1 and the distributor avatar M2, such as various gift objects, buildings, walls, NPCs (Non Player Characters), and the like.
- Various objects are drawn in the virtual space based on such drawing information.
- a gift object is an object corresponding to a gift from one user to another user, and is part of an item.
- Gift objects may be of types such as items worn by avatars (clothes, accessories, and the like), items that decorate distribution images (fireworks, flowers, and the like), backgrounds (wallpapers) and the like, and tickets that can be used to draw a gacha (lottery).
- the display unit 23 includes a display device such as a liquid crystal display or an organic EL (Electro-Luminescence) display.
- the display unit 23 can display various images.
- the display unit 23 is configured by, for example, a touch panel, and functions as an interface that detects various user operations.
- the display unit 23 may be built in the head mounted display as described above.
- the input unit 24 may include physical keys, and may further include any input interface including a pointing device such as a mouse.
- the input unit 24 may be capable of receiving non-contact user input such as voice input, gesture input, and line-of-sight input.
- For gesture input, sensors for detecting various user states (image sensors, acceleration sensors, distance sensors, and the like), dedicated motion-capture equipment that integrates sensor technology and cameras, controllers such as joypads, and the like may be used.
- the line-of-sight detection camera may be arranged in the head-mounted display.
- The user's various states are, for example, the user's orientation, position, movement, and the like. In this case, the user's orientation, position, and movement are concepts that include not only the orientation, position, and movement of part or all of the user's body, such as the face and hands, but also the direction, position, movement, and the like of the user's line of sight.
- the terminal control unit 25 includes one or more processors.
- the terminal control unit 25 controls the operation of the terminal device 20 as a whole.
- the terminal control unit 25 transmits and receives information via the terminal communication unit 21.
- the terminal control unit 25 receives various information and programs used for various processes related to virtual reality from at least one of the server device 10 and other external servers.
- The terminal control section 25 stores the received information and programs in the terminal storage section 22.
- the terminal storage unit 22 may store a browser (Internet browser) for connecting to a web server.
- the terminal control unit 25 activates the virtual reality application according to the user's operation.
- the terminal control unit 25 cooperates with the server device 10 to execute various processes related to virtual reality.
- the terminal control unit 25 causes the display unit 23 to display an image of the virtual space.
- For example, a GUI (Graphical User Interface) that detects user operations may be displayed on the screen.
- The terminal control unit 25 can detect user operations via the input unit 24.
- For example, the terminal control unit 25 can detect various operations by gestures of the user (operations corresponding to tap operations, long-tap operations, flick operations, swipe operations, and the like).
- The terminal control unit 25 transmits the operation information to the server device 10.
- the terminal control unit 25 draws the distributor avatar M2 and the like together with the virtual space (image), and causes the display unit 23 to display it.
- a stereoscopic image may be generated by generating images G200 and G201 viewed with the left and right eyes, respectively.
- FIG. 2 schematically shows images G200 and G201 visually recognized by the left and right eyes, respectively.
- the virtual space image refers to the entire image represented by the images G200 and G201.
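- For illustration, a sketch of how the left-eye and right-eye images might be set up, assuming a hypothetical fixed interpupillary distance; the disclosure does not prescribe specific values.

```typescript
// Hypothetical stereo setup: one head position, two eye offsets.
type Vec3 = { x: number; y: number; z: number };

const IPD = 0.064; // assumed interpupillary distance in metres

// Returns the eye positions used to render G200 (left) and G201 (right).
function eyePositions(head: Vec3, rightAxis: Vec3): { left: Vec3; right: Vec3 } {
  const offset = (s: number): Vec3 => ({
    x: head.x + rightAxis.x * s,
    y: head.y + rightAxis.y * s,
    z: head.z + rightAxis.z * s,
  });
  return { left: offset(-IPD / 2), right: offset(+IPD / 2) };
}

// Each frame the scene is rendered once per eye from these two positions,
// and the wearable device presents G200 to the left eye and G201 to the right.
```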
- The terminal control unit 25 realizes various movements of the distributor avatar M2 in the virtual space, for example, according to various operations by the distribution user. A specific drawing process of the terminal control unit 25 will be described later.
- FIG. 3 is an explanatory diagram of the structure of the image in the virtual space according to this embodiment.
- the virtual space image has a plurality of hierarchies 300, as conceptually shown in FIG.
- the terminal control unit 25 draws the image area of each layer and overlays the image area of each layer to generate an image of the virtual space.
- the hierarchy 300 consists of four layers 301 to 304, but the number of layers is arbitrary.
- The plurality of layers may include a distribution avatar display layer J0 on which the distributor avatar M2 is displayed, a user avatar display layer J1 on which the viewer avatar M1 is displayed, an interface display layer J2 on which a user interface is displayed, a background layer J3, an auxiliary information display layer J5 on which various auxiliary information is displayed, or a combination of any two or more of these layers.
- The front-rear relationship of the plurality of layers (which layer is in front and which is behind) is based on the distance along the line of sight from the viewpoint of the user viewing the image in the virtual space.
- Typically, the background layer J3 is the rearmost layer, but the order of the other layers may be set as appropriate.
- For example, when a plurality of layers related to an image of a certain virtual space includes a distribution avatar display layer J0, an interface display layer J2, an auxiliary information display layer J5, and a background layer J3, they may be ordered, from the front, as the distribution avatar display layer J0, the interface display layer J2, the auxiliary information display layer J5, and the background layer J3.
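- The per-layer drawing and overlaying described above can be illustrated with a minimal alpha-compositing sketch; the layer identifiers follow the example order above, and everything else is an assumption.

```typescript
// Hypothetical compositing of per-layer image regions into one frame.
type Rgba = Uint8ClampedArray; // 4 bytes (RGBA) per pixel

// Example order from the text, front to back: J0, J2, J5, J3.
const FRONT_TO_BACK = ["J0", "J2", "J5", "J3"] as const;

function composite(regions: Record<string, Rgba>, pixels: number): Rgba {
  const out = new Uint8ClampedArray(pixels * 4);
  // Paint back to front so front layers overwrite what lies behind them.
  for (const id of [...FRONT_TO_BACK].reverse()) {
    const src = regions[id];
    for (let i = 0; i < pixels * 4; i += 4) {
      const a = src[i + 3] / 255; // source alpha
      for (let c = 0; c < 3; c++) {
        out[i + c] = src[i + c] * a + out[i + c] * (1 - a);
      }
      out[i + 3] = Math.max(out[i + 3], src[i + 3]);
    }
  }
  return out;
}
```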
- the interface display layer J2 is a layer related to the input unit 24 described above, and is capable of accepting non-contact user input.
- The interface display layer J2 is preferably placed within the user's reach (that is, at an operable distance) in the virtual space. As a result, the user can perform various inputs without moving in the virtual space while wearing the head-mounted display, and can perform various inputs via the interface display layer J2 in a manner in which motion sickness is unlikely to occur.
- the auxiliary information display layer J5 is preferably arranged behind the interface display layer J2.
- the auxiliary information display layer J5 is arranged at a distance out of reach of the user in the virtual space (out of reach without movement). Thereby, the visibility of the interface display layer J2 can be improved as compared with the case where the auxiliary information display layer J5 is arranged on the front side of the interface display layer J2.
- The auxiliary information displayed on the auxiliary information display layer J5 is arbitrary, but may include, for example, at least one of input information from viewing users of the specific content (that is, viewing users viewing the specific content), guidance/notification information to the distribution user and/or other users, and item information including gift objects given to the distribution user.
- the auxiliary information display layer J5 may consist of a plurality of layers in such a manner that different layers are formed for each attribute of auxiliary information.
- the auxiliary information display layer J5 on which gift objects are drawn may be set separately from the other auxiliary information display layers J5.
- In this case, the auxiliary information display layer J5 on which the gift object is drawn may be arranged behind the distribution avatar display layer J0, closer to the distribution avatar display layer J0 than the other auxiliary information display layers J5 are.
- the auxiliary information display layer J5 on which the gift object is drawn may be arranged on the front side of the delivery avatar display layer J0, unlike the other auxiliary information display layers J5.
- the background layer J3 may have the function of bounding the outer boundary (shell) of the virtual space.
- Note that the virtual space may be an infinite space that does not substantially have the background layer J3, or may be a cylindrical or celestial-sphere space.
- the server control unit 13 displays an image of the virtual space on the display unit 23 in cooperation with the terminal device 20, and updates the image of the virtual space according to the progress of the virtual reality and the user's operation.
- the drawing process described below is implemented by the terminal device 20, in other embodiments, part or all of the drawing process described below may be implemented by the server control unit 13.
- At least part of the image of the virtual space displayed on the terminal device 20 may be a web display that is displayed on the terminal device 20 based on data generated by the server device 10, and at least part of the image may be a native display that is displayed by a native application installed in the terminal device 20.
- Some examples of virtual space images generated by the virtual reality generation system 1 will be described with reference to FIGS. 4 to 14.
- FIG. 4 is an explanatory diagram of the virtual space corresponding to the home screen (lobby screen).
- FIG. 5 is an explanatory diagram of the hierarchical structure of the home screen image (hereinafter referred to as “home image H1”).
- FIG. 5A is an enlarged view of the viewing range R500 from the user's point of view in FIG. 4 (when viewing the front).
- FIG. 6 is a diagram showing an example of the home image H1 viewed from the head mounted display.
- FIGS. 7A and 7B are explanatory diagrams of how the operation regions are rearranged.
- the home image H1 represents a virtual space (hereinafter referred to as "home space") that serves as an entrance when moving to various virtual spaces.
- FIG. 4 schematically shows users located in the home space.
- an operation area G300 for selecting various contents is arranged.
- Via the home image H1, the user can view specific content by a desired distribution user, become a distribution user and distribute specific content, distribute specific content in a form of collaboration with other users (hereinafter also referred to as "collab distribution"), and perform various other activities.
- the home image H1 includes an interface display layer J2 (an example of a layer related to the second user interface), an auxiliary information display layer J5, and a background layer J3 in order from the front side.
- A modified example may include other layers as described above, or may omit some layers (for example, the auxiliary information display layer J5 and the background layer J3).
- FIG. 5 shows, in top view, an example layout of the interface display layer J2, the auxiliary information display layer J5, and the background layer J3, together with a predetermined reference axis I passing through the position of the user (the position of the viewer avatar M1 or the distributor avatar M2).
- Note that, unless otherwise specified, a top view refers to a view from above with reference to the vertical direction of the virtual space.
- In this embodiment, when viewed from above, the interface display layer J2, the auxiliary information display layer J5, and the background layer J3 are formed as cylindrical surfaces with different radii r1, r2, and r3 around the predetermined reference axis I, and each radius r1, r2, r3 corresponds to the distance between the respective layer and the user.
- In this case, the radius r3 related to the background layer J3 may be infinite (infinity).
- The radius r1 of the interface display layer J2 may be set within the user's reach, as described above.
- The radius r1 may be adjustable for each user. Thereby, the interface display layer J2 can be formed with a sense of distance according to the length of the user's arm and the user's preference.
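- A minimal sketch of placing an element on such a cylindrical layer follows; only the cylindrical geometry comes from the description above, and the radii values are assumptions.

```typescript
// Hypothetical placement on the cylindrical layers around the vertical
// reference axis I passing through the user's position.
type Pos = { x: number; y: number; z: number };

function onCylinder(r: number, angleRad: number, height: number, user: Pos): Pos {
  return {
    x: user.x + r * Math.sin(angleRad), // angle 0 = straight ahead
    y: user.y + height,
    z: user.z + r * Math.cos(angleRad),
  };
}

// Assumed radii: r1 within arm's reach (J2), r2 behind it (J5),
// r3 effectively at infinity (J3).
const r1 = 0.6, r2 = 2.5, r3 = Number.POSITIVE_INFINITY;
const button = onCylinder(r1, 0, 1.2, { x: 0, y: 0, z: 0 });
console.log(button); // position of an operation area directly ahead
```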
- The interface display layer J2 is preferably arranged based on the user's position in the home space. In other words, when the user enters the home space, the user is positioned in a predetermined positional relationship with respect to the interface display layer J2. In this embodiment, the interface display layer J2 is arranged within a range reachable by the user on the spot, with reference to the position of the user in the home space. As a result, the user can perform various inputs without moving in the virtual space while wearing the head-mounted display, and can perform various inputs via the interface display layer J2 in a manner in which motion sickness is unlikely to occur.
- the interface display layer J2 preferably includes a plurality of planar operation areas G300 functioning as the input section 24.
- the multiple planar operation regions G300 can function as selection buttons that can be selected by the user.
- the size and shape of the planar operation area G300 are arbitrary, and may differ according to the attributes of the operation area G300. This makes it easier for the user to identify the operation area G300 for each attribute.
- The various processes (an example of various second processes) realized by operating the operation areas G300 are arbitrary, but may include, for example: processing to change the layout of the user interface (for example, the plurality of planar operation areas G300); processing to move the user to an arbitrary or specific location in the virtual space (for example, processing to move the distribution user to a location for distribution); processing to start distribution of various contents; processing to start viewing of various contents; processing to end viewing of various contents; voice input processing; processing for giving gifts to distribution users; lottery processing for avatar items; selection/exchange processing for avatar items; character input processing; processing for transitioning to a state in which any of these processes can be executed; or any combination thereof.
- the user can realize various operations through the interface display layer J2.
- the plurality of planar operation areas G300 associated with the content viewing start process may be drawn in a manner associated with the content of selection candidates. For example, in each of the operation regions G300, a thumbnail image of a selection candidate content (for example, specific content by a distribution user) or a real-time video may be drawn (see FIG. 4).
- the thumbnails of the collaboration distributions may depict distributor avatars M2 of a plurality of distribution users who perform the collaboration distributions. Thereby, the user can easily grasp what content can be viewed, and can easily select the desired operation area G300 (for example, the operation area G300 related to the desired specific content).
- The plurality of planar operation areas G300 are preferably arranged in multiple rows as shown in FIG. 4. As a result, even if the number of planar operation areas G300 increases due to, for example, an increase in the number of distribution users (and an accompanying increase in the number of specific contents being distributed), many operation areas can be arranged efficiently.
- the plurality of planar operation regions G300 are arranged in a plurality of rows along the first curved surface 501 (see FIGS. 4 and 5A) around the predetermined reference axis I.
- the predetermined reference axis I is an axis that passes through the user's position and extends in the vertical direction of the home space.
- For example, the first curved surface 501 may have a circular shape forming the interface display layer J2, with its center of curvature positioned on the predetermined reference axis I.
- the first curved surface 501 may have an elliptical shape or similar shape when viewed from the top of the home space.
- Here, "the planar operation areas G300 are arranged along the first curved surface 501" may mean that, when the home space is viewed from above, the in-plane direction of the planar shape of each operation area G300 is parallel to the tangential direction of the first curved surface 501. Alternatively, if the radius of curvature of the first curved surface 501 is sufficiently small, each operation area G300 may be projected onto the first curved surface 501.
- The plurality of planar operation areas G300 may preferably be arranged in a manner grouped by category, as in the sketch below. For example, operation areas G300 related to the user's "recommended" category may be arranged in one region along the first curved surface 501 that is the front region viewed from the user's viewpoint, and regions on the left and right sides of the front region may be assigned different categories (for example, "Collaboration Town", "Game", "Beginner", "Follow", and the like). The arrangement of the operation areas G300 may be customized for each user in the same manner as the arrangement of icons of various applications on a smartphone screen. In FIG. 4, a region R500 (see FIG. 5) schematically shows the front region viewed from the user's viewpoint.
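- As an illustration of the multi-row, category-grouped arrangement, the following sketch assigns each operation area an angle/height slot along the first curved surface; the category names echo the example above, while all spacing values are assumptions.

```typescript
// Hypothetical layout of operation areas G300 along the first curved
// surface 501: columns advance around the reference axis I, grouped by
// category, with the "recommended" category starting directly ahead.
const CATEGORIES = ["Recommended", "Collaboration Town", "Game", "Beginner", "Follow"];
const COLS_PER_CATEGORY = 3; // assumed
const ROWS = 2;              // assumed (multiple rows, as in the text)
const ANGLE_STEP = 0.25;     // radians between adjacent columns (assumed)
const ROW_HEIGHT = 0.35;     // metres between rows (assumed)

function* slots(): Generator<{ category: string; angle: number; height: number }> {
  for (let ci = 0; ci < CATEGORIES.length; ci++) {
    for (let col = 0; col < COLS_PER_CATEGORY; col++) {
      for (let row = 0; row < ROWS; row++) {
        yield {
          category: CATEGORIES[ci],
          angle: (ci * COLS_PER_CATEGORY + col) * ANGLE_STEP,
          height: 1.0 + row * ROW_HEIGHT,
        };
      }
    }
  }
}

// Each slot can then be mapped to a 3D position on the cylinder of radius r1.
for (const slot of slots()) console.log(slot.category, slot.angle, slot.height);
```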
- the home image H1 is generated based on the front area viewed from the user's viewpoint.
- the position of the front region R500 as viewed from the user's viewpoint may change as the direction of the user's line of sight or the direction of the face changes. As a result, it is possible to improve consistency between changes in the field of view in the real space and changes in the field of view in the virtual space.
- Furthermore, the plurality of planar operation areas G300 are preferably arranged so as to form a plurality of layers front to back.
- the plurality of planar operation regions G300 are arranged in a plurality of rows along a first curved surface 501 around the predetermined reference axis I, and along a second curved surface 502 around the predetermined reference axis I. and a second group arranged in a plurality of columns.
- the second curved surface 502 may be offset behind the first curved surface 501 as shown in FIG.
- In this case, when viewed from the user's viewpoint, the first group of operation areas G300 arranged along the first curved surface 501 (hereinafter also referred to as "operation areas G300-1" to distinguish them from the second group) and the second group of operation areas G300 arranged along the second curved surface 502 (hereinafter also referred to as "operation areas G300-2" to distinguish them from the first group) are in an overlapping relationship.
- the operation area G300-2 of the second group may be partially visible behind the operation area G300-1 of the first group.
- Furthermore, a third curved surface offset further behind the second curved surface 502 may be set to arrange additional planar operation areas G300. In this way, any number (two or more) of planar operation areas G300 may be arranged so as to overlap when viewed from the user's viewpoint.
- Note that, among the operation areas G300-1, only those in the front region R500 viewed from the user's viewpoint may be completely rendered with thumbnail images and real-time video, while the other operation areas are rendered incompletely (for example, with processing such as changing the texture). This makes it possible to reduce the overall processing load related to drawing.
- In addition, the requests submitted via the network 3 can be reduced while reducing latency, and thus the amount of requests imposed on the network 3, as well as the computational resources used to respond to those requests, can be efficiently reduced.
- The operation areas G300-1 likely to fall within the front region R500 may be predicted based on each user's tendencies, or determined by machine learning based on artificial intelligence.
- FIG. 5A schematically shows the relationship between the first curved surface 501 and the second curved surface 502 in top view.
- the offset distance between the first curved surface 501 and the second curved surface 502 may be significantly smaller than the distance between layers with different attributes (for example, the distance between the interface display layer J2 and the auxiliary information display layer J5). This makes it easier for the user to understand that they are on the same interface display layer J2, and makes it easier for the user to intuitively understand that a plurality of planar operation areas G300 also exist on the far side.
- The user may be able to change the arrangement of the user interface (for example, the plurality of planar operation areas G300) by a specific operation. This makes it possible to arrange the user interface (for example, the plurality of planar operation areas G300) according to the user's tastes and preferences.
- part of the plurality of planar operation regions G300 may function as buttons for changing the arrangement of the plurality of planar operation regions G300.
- the user may move the plurality of planar operation regions G300 left and right as viewed from the user's viewpoint by inputting a gesture of moving the hand left and right.
- FIG. 6 shows a state in which the left operation area G300 (denoted as "G300L" in FIG. 6) is being moved forward from the user's viewpoint by the user's operation. In this manner, the plurality of planar operation areas G300 may be moved left and right along the first curved surface 501.
- This makes it possible to easily change which operation areas G300 are located in front of the user's viewpoint among the plurality of planar operation areas G300.
- the plurality of planar operation areas G300 may be moved such that the planar operation area G300 located in the front area changes for each category. As a result, it is possible to maintain the unity for each category, so that it is possible to achieve both visibility and operability.
- In a modified example, part or all of the plurality of planar operation areas G300 may be moved left and right along the first curved surface 501 (and/or the second curved surface 502) continuously, regularly, or irregularly, or may be moved in a circulating manner, as sketched below. Such movement may be appropriately changed based on settings by the user, or may be realized based on an event in which a predetermined movement condition is satisfied.
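- A sketch of such left/right movement, treating each operation area's placement as an angle around the reference axis; the gesture-to-angle conversion is assumed.

```typescript
// Hypothetical scroll: a left/right hand gesture shifts every operation
// area's angle around the reference axis, moving it along surface 501.
interface OperationArea { id: string; angle: number; surface: 1 | 2 }

function scroll(areas: OperationArea[], deltaAngle: number): void {
  const TWO_PI = Math.PI * 2;
  for (const a of areas) {
    // Wrap around so the areas circulate instead of drifting off the surface.
    a.angle = ((a.angle + deltaAngle) % TWO_PI + TWO_PI) % TWO_PI;
  }
}

// Example: a leftward hand gesture rotates all areas by -0.25 rad.
const areas: OperationArea[] = [{ id: "G300L", angle: -0.5, surface: 1 }];
scroll(areas, -0.25);
```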
- Also, the user may be able to switch the groups front to back by performing a predetermined input.
- That is, the second group of operation areas G300-2 arranged along the second curved surface 502 are located behind the first group of operation areas G300-1 arranged along the first curved surface 501, and the user can swap the first group and the second group by a predetermined input, such as moving a hand in a predetermined manner. This enables intuitive operation and improves operability.
- In this case, the second group of operation areas G300-2 that were arranged along the second curved surface 502 are rearranged along the first curved surface 501 as the new first group, and the first group of operation areas G300-1 that were arranged along the first curved surface 501 are rearranged along the second curved surface 502 as the new second group. Note that such a front-rear switching operation may be applied only to the operation areas G300 located in front of the user's viewpoint among the plurality of planar operation areas G300. As a result, the processing load can be reduced compared to the case where all of the plurality of planar operation areas G300 are swapped.
- The front-rear switching may be realized in a mode in which the entire first group of operation areas G300-1 arranged along the first curved surface 501 is moved backward as it is.
- Alternatively, it may be realized in a mode in which the upper sides of the first group of operation areas G300-1 arranged along the first curved surface 501 rotate backward from the top and the lower sides rotate backward from the bottom.
- Correspondingly, the switching between the first group and the second group may be realized in such a manner that the upper sides of the second group of operation areas G300-2 arranged along the second curved surface 502 rotate forward from the top and the lower sides rotate forward from the bottom (see the sketch below). Although two groups have been described here, the same applies to three or more groups (that is, a plurality of planar operation areas G300 overlapping in three or more layers front to back).
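- The front-rear group swap can be sketched as toggling each area's depth group, optionally restricted to the areas in front of the user as suggested above; the data shape is hypothetical.

```typescript
// Hypothetical swap between the first group (surface 501) and the
// second group (surface 502).
interface OperationArea { id: string; angle: number; surface: 1 | 2 }

function swapGroups(
  areas: OperationArea[],
  inFront: (a: OperationArea) => boolean = () => true,
): void {
  for (const a of areas) {
    if (inFront(a)) a.surface = a.surface === 1 ? 2 : 1;
  }
}

// Example: swap only the areas within 0.4 rad of straight ahead, which
// reduces the processing load compared to swapping every area.
const areas: OperationArea[] = [
  { id: "thumb-1", angle: 0.1, surface: 1 },
  { id: "thumb-2", angle: 0.1, surface: 2 },
  { id: "thumb-3", angle: 1.2, surface: 1 },
];
swapGroups(areas, (a) => Math.abs(a.angle) < 0.4);
```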
- the home image H1 does not include the user avatar display layer J1, but is not limited to this.
- the home image H1 may include the user avatar display layer J1 on the near side (closer to the user's viewpoint) than the interface display layer J2.
- a hand may be drawn on the user avatar display layer J1 of the home image H1, for example, when the user puts his or her hand forward.
- the user can operate the plurality of planar operation regions G300 while watching the movement of the hand, thereby improving operability.
- FIGS. 8A and 8B are explanatory diagrams of the virtual space corresponding to the distribution image, where FIG. 8A is a schematic side view and FIG. 8B is a schematic top view.
- the distribution user is schematically indicated by reference numeral 808, and the position of the corresponding distributor avatar M2 is schematically indicated by a dotted line enclosure.
- FIG. 9 is an explanatory diagram of a hierarchical structure of an image of specific content by a distribution user (hereinafter referred to as "distribution image H2").
- FIG. 10 is an explanatory diagram showing the relationship between the mirror and each layer of the delivery image H2 as viewed from above.
- FIG. 11 is an explanatory diagram of the interface display layer J2 related to the distribution image H20 for the distribution user.
- FIG. 12 is a diagram showing an example of a distribution image H2 viewed by a distribution user via a head-mounted display (an example of a first wearable device).
- FIG. 13 is a diagram showing an example of the delivery image H2 viewed by the viewing user via the head mounted display (an example of the second wearable device).
- FIG. 14 is a diagram showing an example of a distribution image H2 for smartphones. Note that the distribution image H2 shown in FIG. 14 is an image that is visually recognized without going through the head-mounted display, and can be used for similar terminals other than smartphones.
- The distribution image H2 is an image related to a virtual space for producing specific content by the distribution user (hereinafter referred to as a "content production space"), or a similar virtual space, in which the distributor avatar M2 is arranged.
- the content production space may correspond to, for example, a space such as a studio in the real world.
- a distribution user can perform specific content production activities in the content production space.
- The distribution image H2 includes a distribution image H20 for the distribution user (see FIG. 12), a distribution image H21 for the viewing user that is viewed via the head-mounted display (see FIG. 13), and a distribution image H22 for the viewing user that can be viewed without going through the head-mounted display (hereinafter also referred to as the "smartphone distribution image H22") (see FIG. 14).
- Hereinafter, the distribution image H2 refers to any of these distribution images, unless otherwise specified.
- The distribution image H2 includes, in order from the front side, a distribution avatar display layer J0, an interface display layer J2 (an example of a layer related to a first user interface), an auxiliary information display layer J5, a user avatar display layer J1, and a background layer J3.
- A modified example may include other layers as described above, or may omit some layers (for example, the auxiliary information display layer J5 and the background layer J3).
- the interface display layer J2 and the auxiliary information display layer J5 may be integrated as appropriate.
- FIG. 9 shows an arrangement example of the distribution avatar display layer J0, the interface display layer J2, the auxiliary information display layer J5, the user avatar display layer J1, and the background layer J3, in top view with a predetermined reference axis I passing through them.
- When viewed from above, the distribution avatar display layer J0, the interface display layer J2, the auxiliary information display layer J5, the user avatar display layer J1, and the background layer J3 are arranged so as to form vertical planes at different distances d1 to d5 along the direction of the user's line of sight passing through the predetermined reference axis I.
- each of the distances d1 to d5 corresponds to the distance between the respective layer and the user.
- the distance d5 related to the background layer J3 may be infinite, or may be a distance corresponding to the farthest plane (far clip plane) from the viewpoint of the virtual camera.
- The interface display layer J2 may be set on the plane closest to the viewpoint of the virtual camera (the near clip plane).
- The viewpoint of the distribution user and the virtual camera may be set at the same position as the face or head object of the distributor avatar M2, and the virtual camera may have an optical axis directed forward from the viewpoint of the distributor avatar M2.
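- To make the geometry concrete, the following minimal sketch lays the five layers out as planes at increasing distances d1 to d5 along the line of sight, clamped to the virtual camera's frustum. All numeric values and the clip-plane constants are illustrative assumptions, not taken from the description.

```python
# Illustrative values only; the description fixes the ordering d1 < ... < d5,
# not the actual distances.
NEAR_CLIP = 0.3    # assumed near clip plane of the virtual camera
FAR_CLIP = 1000.0  # assumed far clip plane (the background layer may sit here)

LAYERS = {  # ordered from the front side, as in the hierarchical structure above
    "J0_distribution_avatar": 0.5,  # d1
    "J2_interface": 0.6,            # d2: within the user's reach, per-user adjustable
    "J5_auxiliary_info": 3.0,       # d3
    "J1_user_avatar": 6.0,          # d4
    "J3_background": FAR_CLIP,      # d5: far clip plane (or effectively infinite)
}

def clamp_to_frustum(distance: float) -> float:
    """Keep every layer plane inside the virtual camera's renderable range."""
    return max(NEAR_CLIP, min(distance, FAR_CLIP))

for name, d in LAYERS.items():
    print(f"{name}: vertical plane at {clamp_to_frustum(d):.1f} along the line of sight")
```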
- The face and head of the distributor avatar M2 are not drawn from the viewpoint of the distributor avatar M2 itself, although other parts (for example, the movements of hands and arms) may be drawn.
- From the viewpoint of another user (for example, a viewing user), such a restriction need not apply.
- The distance d2 of the interface display layer J2 may be set within the user's reach, as described above. The distance d2 may also be adjustable for each user, and may be the same as the radius r1 (see FIG. 5) described above. Thereby, the interface display layer J2 can be formed with a sense of distance suited to the length of the user's arm and the user's preference. Further, when the distance d2 is the same as the radius r1 (see FIG. 5), the same (common) operability can be realized between the distribution image H20 for the distribution user and the home image H1, which enhances convenience.
- The layout of the content production space, which is the virtual space in which the distributor avatar M2 is active, is arbitrary; for example, a closet 801 for avatar items and a table 802 may be arranged in it.
- The distribution user can select a desired avatar item from among the avatar items in the closet 801 and prepare the distributor avatar M2 (put it on standby).
- An avatar item is an item that is drawn in association with an avatar such as the distributor avatar M2, and may include, for example, hairstyle, clothing, equipment, skin color, and the like.
- A mirror 805 is arranged in the distribution room 800.
- Mirror 805 has the property of reflecting light (visible light), like a mirror in the real world.
- The image reflected in the mirror 805 corresponds to the distribution image H20. Therefore, by positioning the distributor avatar M2 corresponding to himself or herself in front of the mirror 805, the distribution user can look toward the front (the mirror 805) and perform distribution while the distributor avatar M2 takes on states corresponding to his or her own various states (orientation, position, and movement).
- That is, the distribution user can face the mirror 805 (face the front), direct his or her line of sight to the mirror 805, and confirm his or her own various states (orientation, position, and movement), the associated states of the distributor avatar M2, and accordingly the state of the distribution image H20.
- In this way, the distribution user can check his or her own movement while facing the mirror 805 and correct it appropriately in real time, so that the intended distribution image H20 can be generated.
- FIG. 10 shows, in top view, the positional relationship among the mirror 805 in the distribution room 800, the distribution user, and each layer of the distribution image H2.
- an area R502 corresponds to the area R501 shown in FIG.
- A virtual camera placed at the mirror 805 can perform the same function as the mirror 805 by displaying its captured image on the mirror 805.
- a distributor avatar is drawn on the distribution avatar display layer J0.
- The distributor avatar M2 is drawn in a state corresponding to the various states (orientation, position, and movement) of the distribution user standing in front of the mirror 805. The various states (orientation, position, and movement) of the distribution user can be acquired via the input unit 24 as described above.
- The interface display layer J2 is preferably arranged based on the position of the distribution user standing in front of the mirror 805. In other words, when the distribution user stands in front of the mirror 805, the distribution user is positioned in a predetermined positional relationship with respect to the interface display layer J2. In this embodiment, the interface display layer J2 is placed within reach of the user on the spot, with reference to the position of the distribution user when standing in front of the mirror 805. Note that the predetermined positional relationship may be changed as appropriate by the user. Thus, it is possible to realize an interface display layer J2 that suits the user's preference and physique.
- The interface display layer J2 preferably includes one or more operation areas that function as the input unit 24.
- The various operation areas can function as selection buttons that can be selected by the distribution user. For example, the distribution user may be able to select (operate) a desired selection button via contactless user input.
- The various processes realized by operating the operation areas are arbitrary; they may include, for example, at least one of a process for starting distribution of the specific content, a process for ending distribution of the specific content, a voice input process, a process for the distribution user to receive a gift, a lottery process for items (for example, items related to avatars), a selection/exchange process for various items, a character input process, an approval process for other users, a process for adjusting parameters of the distribution content (for example, various parameters of the distribution image H2), and a process for transitioning to a state in which these processes can be executed.
- FIG. 11 shows an example of the operation areas G100 in the form of various icons in the distribution image H20 for the distribution user.
- The various operation areas G100 include an operation area G101 for chatting (comments), an operation area G102 for taking screenshots, an operation area G103 for playing games, an operation area for viewing other video content (including shared viewing in which the viewing and distribution users watch the same moving image), an operation area G105 for spinning a gacha (lottery), and the like.
- the lottery target for the gacha is arbitrary, but may be, for example, an avatar item.
- the operation area in the interface display layer J2 may be displayed all the time, or may be displayed only in specific cases. As a result, it is possible to expand the display area of the distributor avatar M2 and display the entire distributor avatar M2 while enabling operation via the operation area in the interface display layer J2.
- the specific operation area in the interface display layer J2 may be displayed in the same manner as the operation area G300 related to the home image H1 as described above.
- the various operation areas on the interface display layer J2 may be interchangeable back and forth in the manner described above with reference to FIGS. 7A and 7B.
- The operation areas in the interface display layer J2 include an operation area G120 (hereinafter also referred to as the "smartphone small window region G120") for adjusting various parameters of the distribution image H2.
- A part or all of the distribution image H21 for the viewing user and/or a part or all of the smartphone distribution image H22 may be displayed in the smartphone small window region G120.
- The various parameters of the distribution image H2 that can be adjusted via the smartphone small window region G120 are, for example, parameters related to the virtual camera: the optical-axis direction of the virtual camera, the lateral position of the virtual camera (its position relative to the distribution user in the lateral direction intersecting the optical-axis direction), the zoom (magnification), and the like.
- In a mode in which the smartphone small window region G120 is operated as if it were a smartphone, the distribution user may be able to change the zoom of the image in the smartphone small window region G120 by, for example, pinching out or pinching in, and to translate the image in the smartphone small window region G120 by swiping.
- the distribution user may be able to change the position of the virtual camera (relative position to himself) by performing an operation to move the position of the smartphone small window region G120.
- Further, the distribution user may be able to control the state of the virtual camera via the smartphone small window region G120, for example switching between a follow-up mode in which the position of the virtual camera automatically follows the movement of the distribution user and a fixed mode in which the position of the virtual camera is fixed. In this case, mobile distribution can be carried out in the follow-up mode.
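- How such gesture-driven control of the virtual camera might look is sketched below. This is a minimal illustration under assumed names (`VirtualCamera`, `on_pinch`, `on_swipe`, and so on), not the implementation of the embodiment.

```python
# Illustrative sketch only; all names, ranges, and the coordinate convention are assumed.
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    zoom: float = 1.0
    lateral_offset: float = 0.0   # lateral position relative to the distribution user
    follow_user: bool = True      # True = follow-up mode, False = fixed mode
    position: tuple = (0.0, 0.0)

    def on_pinch(self, scale: float) -> None:
        # Pinch out (scale > 1) zooms in; pinch in (scale < 1) zooms out.
        self.zoom = max(0.5, min(4.0, self.zoom * scale))

    def on_swipe(self, dx: float) -> None:
        # Swiping translates the image, i.e. shifts the camera laterally.
        self.lateral_offset += dx

    def on_window_moved(self, new_pos: tuple) -> None:
        # Moving the small window region itself relocates the camera.
        self.position = new_pos

    def update(self, user_position: tuple) -> tuple:
        if self.follow_user:  # follow-up mode: track the distribution user
            self.position = (user_position[0] + self.lateral_offset, user_position[1])
        return self.position   # fixed mode keeps the last position

cam = VirtualCamera()
cam.on_pinch(1.5)        # zoom 1.0 -> 1.5
cam.follow_user = False  # switch to fixed mode
print(cam.zoom, cam.update((2.0, 0.0)))
```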
- The distribution user may also be able to set, for example via the smartphone small window region G120, states such as the AFK (Away From Keyboard) state and the mute state for situations that the distribution user does not want the viewing user to see. As a result, it is possible to prevent an increase in the processing load due to the drawing of unnecessary objects and to improve convenience for the distribution user.
- In this way, the distribution user can produce the specific content while viewing the distribution image H20, which substantially matches the distribution image H21 seen via the head-mounted display, and while also confirming, via the smartphone small window region G120, the distribution image H22 as it appears when the content is viewed on a smartphone.
- a small window for other viewing devices such as a tablet small window may be set. The distribution user may be able to select a desired small window and display it in the distribution image H20.
- the smartphone small window area G120 is displayed adjacent to the mirror 805 reflecting the distributor avatar M2.
- the smartphone small window region G120 may represent the distribution image H22 (see FIG. 14) when displayed on the smartphone.
- the distribution user can perform distribution while confirming in real time the state of the distribution image H22 on the terminal device 20 of the viewing user viewing the image on the smartphone.
- The distribution image H20 may encompass two concepts: an image being distributed and a preparatory image before distribution.
- On the distribution image H20 as a preparatory image before distribution, an operation area G106 for opening the closet 801 (see FIG. 8A) and changing clothes, an operation area G107 for starting distribution (see FIG. 11), and the like may be drawn.
- On the auxiliary information display layer J5, character information such as gift items and messages of support from viewing users of the specific content related to the distribution image H2 may be drawn in real time. As a result, the distribution user can distribute the specific content while enjoying the reactions of the viewing users, interaction with the viewing users, and the like. The distribution name, title, and the like of the specific content may also be drawn on the auxiliary information display layer J5. For example, in the example shown in FIG. 12, a distribution name G11 of "lazy chat distribution", a heart-shaped gift object G12, and various comments G13 such as "first time seeing" are drawn.
- the viewer avatar M1 of the viewing user of the specific content related to the distribution image H2 may be drawn on the user avatar display layer J1.
- the distribution user can perform distribution while grasping the number and status of viewing users.
- the viewer avatar M1 drawn on the user avatar display layer J1 may be drawn in a state corresponding to the state (orientation, position, and movement) of the corresponding viewing user. Thereby, the distribution user can perform distribution while grasping the state of the viewing user.
- The distributor avatar M2 drawn on the distribution avatar display layer J0 may be drawn translucently so that the distribution user can easily grasp the state of the user avatar display layer J1. In this case, the distribution user can easily check the state of the user avatar display layer J1 behind the distributor avatar M2. The translucent state and the opaque state of the distributor avatar M2 may be switchable by an input from the distribution user. In this case, it becomes easy for the distribution user to selectively confirm the state of the distributor avatar M2 and the state of the user avatar display layer J1 behind it.
- the walls of the distribution room 800 may be drawn on the background layer J3.
- the scenery through the wall of the distribution room 800 may be drawn on the background layer J3.
- the viewer avatar M1 of the viewing user of the specific content related to the distribution image H2 may enter a virtual space (hereinafter referred to as "viewing space") different from the content production space.
- the background related to the viewing space may be drawn on the background layer J3.
- the distribution image H21 for the viewing user (see FIG. 13), which is the image of the viewing space, may have a different background layer J3 from the distribution image H20 for the distribution user (see FIG. 12).
- The viewing space need not be consistent with the virtual space (content production space) represented by the distribution image H20 for the distribution user.
- the distribution image H21 for the viewing user can be optimized from a different viewpoint from the distribution image H20 for the distribution user.
- the distribution room 800 may have a half mirror (magic mirror) wall 810 on the front side, as schematically shown in FIG. 8B.
- The wall 810 functions like the mirror 805 when viewed from inside the room, so that the outside of the room cannot be seen from the inside, while the inside of the room can be seen from the outside.
- the distribution user can use the wall 810 in the same manner as the mirror 805 described above to perform distribution.
- The viewing user can view the distribution user (distributor avatar M2) in the distribution room 800 through the wall 810.
- the background layer J3 of the delivery image H2 may include a drawing of a specially made stage or the like instead of the inside of the delivery room 800.
- FIG. 15 is a schematic block diagram showing the functions of the terminal device 20A on the content delivery side.
- FIG. 16 is an explanatory diagram of an example of avatar drawing information.
- In the following, a distribution user refers to one distribution user, and a distribution image H2 refers to a distribution image H2 forming the specific content by that one distribution user.
- The terminal device 20A includes a first drawing processing unit 200A, a second drawing processing unit 200B, a first information generation unit 210A, a first communication unit 220A, a first display unit 230A, a first storage unit 240A, and a first user interface unit 250A.
- The functions of the first drawing processing unit 200A, the second drawing processing unit 200B, the first information generation unit 210A, the first communication unit 220A, the first display unit 230A, and the first user interface unit 250A can be realized by the terminal control unit 25 of the terminal device 20A executing the virtual reality application in cooperation with the terminal communication unit 21, the terminal storage unit 22, the display unit 23, and the input unit 24.
- The first storage unit 240A can be realized by the terminal storage unit 22 of the terminal device 20A shown in FIG. 1.
- the first drawing processing unit 200A draws the distribution image H20 (see FIG. 12) for the distribution user described above (an example of the image of the first virtual space).
- the first drawing processing unit 200A generates a distribution image H20 (see FIG. 12) for distribution users at a predetermined frame rate, for example.
- the first drawing processing unit 200A includes a first drawing unit 201A, a second drawing unit 202A, and a third drawing unit 203A.
- the first drawing unit 201A draws the image area (an example of the first image area) of the distribution avatar display layer J0 in the distribution image H20 for the distribution user (see FIG. 12).
- the distribution avatar display layer J0 in the distribution image H20 is as described above.
- For example, the first drawing unit 201A may draw the distributor avatar M2 in the image area of the distribution avatar display layer J0 based on the avatar drawing information (see table 700 in FIG. 16) stored in the first storage unit 240A.
- An example of the avatar drawing information is shown in table 700.
- each avatar ID is associated with a face part ID, a hairstyle part ID, a clothing part ID, and the like.
- Appearance-related parts information such as face part IDs, hairstyle part IDs, and clothing part IDs are parameters that characterize the avatar, and may be selected by each user.
- a plurality of types of information related to appearance such as face part IDs, hairstyle part IDs, clothing part IDs, etc., related to avatars are prepared.
- As for the face part ID, part IDs may be prepared for each type of face shape, eyes, mouth, nose, and so on, and the information related to the face part ID may be managed as a combination of the IDs of the parts that make up the face. In this case, not only the server device 10 but also the terminal device 20 can draw each avatar based on the appearance-related IDs linked to each avatar ID.
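- Read as a data structure, table 700 might look like the following minimal sketch (class names, field names, and ID values are hypothetical): one avatar ID maps to its appearance-related part IDs, and the face part ID is itself a combination of constituent part IDs.

```python
# Illustrative sketch only; the actual storage format of table 700 is not specified here.
from dataclasses import dataclass

@dataclass(frozen=True)
class FacePartId:
    face_shape: str
    eyes: str
    mouth: str
    nose: str

@dataclass
class AvatarDrawingInfo:
    avatar_id: str
    face_part_id: FacePartId
    hairstyle_part_id: str
    clothing_part_id: str

# Keyed by avatar ID; both the server device 10 and a terminal device 20
# could draw the avatar from these IDs alone.
AVATAR_TABLE = {
    "A001": AvatarDrawingInfo(
        avatar_id="A001",
        face_part_id=FacePartId(face_shape="FS03", eyes="E12", mouth="M05", nose="N02"),
        hairstyle_part_id="H21",
        clothing_part_id="C07",
    ),
}

def draw_avatar(avatar_id: str) -> None:
    info = AVATAR_TABLE[avatar_id]
    print(f"drawing {avatar_id}: face={info.face_part_id}, "
          f"hair={info.hairstyle_part_id}, clothes={info.clothing_part_id}")

draw_avatar("A001")
```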
- When the distributor avatar M2 opens the closet 801 (see FIG. 8A) and changes clothes (that is, when an ID related to hairstyle or clothing is changed), the first drawing unit 201A updates the appearance of the distributor avatar M2.
- The first drawing unit 201A changes the state of the distributor avatar M2 based on the detection result (the distribution user state detection result) from the first detection unit 211A (described later) of the first information generation unit 210A.
- For example, the first drawing unit 201A draws the distributor avatar M2 in such a manner that the distribution user facing the mirror 805 (see FIG. 8A) appears replaced by the distributor avatar M2.
- For example, the first drawing unit 201A may link the orientation of the distributor avatar M2 to the orientation of the distribution user, turning the distributor avatar M2 to the left when the distribution user looks to the right, and making the distributor avatar M2 face downward when the distribution user looks downward.
- the direction may be only the face, only the body, or a combination thereof.
- Similarly, the first drawing unit 201A may link the line-of-sight direction of the distributor avatar M2 to the line-of-sight direction of the distribution user, directing the line of sight of the distributor avatar M2 to the left when the distribution user's line of sight is directed to the right, and directing the line of sight of the distributor avatar M2 downward when the distribution user looks downward.
- various eye movements such as blinking may be interlocked.
- the movements of the nose, mouth, etc. may be interlocked.
- the consistency (linkage) of each part between the distributor avatar M2 and the distribution user is enhanced, and the facial expressions of the distributor avatar M2 can be diversified.
- Further, the first drawing unit 201A may link the hand movements of the distributor avatar M2 to the hand movements of the distribution user, causing the distributor avatar M2 to raise its left hand when the distribution user raises his or her right hand, and to raise both hands when the distribution user raises both hands.
- the motion of each part of the hand such as the fingers may also be interlocked.
- other parts such as feet may be linked in the same manner.
- Further, the first drawing unit 201A may change the position of the distributor avatar M2 in a manner linked to the movement of the distribution user, for example moving the distributor avatar M2 to the left when the distribution user moves to the right, and moving the distributor avatar M2 away when the distribution user moves away from the mirror 805.
- As a result, consistency (linkage) regarding movement (position) between the distributor avatar M2 and the distribution user is enhanced, and expression can be diversified by changing the position of the distributor avatar M2. A sketch of this mirror-style linkage follows.
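- The linkage described in the preceding items amounts to a left-right reflection of the detected user state. The following minimal sketch is illustrative only (`UserState` and `mirror_to_avatar` are hypothetical names).

```python
# Illustrative sketch only; angles in degrees, + = toward the user's right.
from dataclasses import dataclass

@dataclass
class UserState:
    yaw: float           # body/face orientation
    gaze_yaw: float      # line-of-sight direction
    left_hand_up: bool
    right_hand_up: bool
    x: float             # lateral position in front of the mirror 805

def mirror_to_avatar(user: UserState) -> UserState:
    """User looking/moving right -> avatar turns/moves left; the user's right
    hand maps to the avatar's left hand (and vice versa); both hands map to both."""
    return UserState(
        yaw=-user.yaw,
        gaze_yaw=-user.gaze_yaw,
        left_hand_up=user.right_hand_up,
        right_hand_up=user.left_hand_up,
        x=-user.x,
    )

avatar = mirror_to_avatar(
    UserState(yaw=30.0, gaze_yaw=10.0, left_hand_up=False, right_hand_up=True, x=0.5))
print(avatar)  # yaw=-30.0 (turned left), left hand raised, shifted to the left
```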
- the second drawing unit 202A draws the image area (an example of the second image area) of the interface display layer J2 in the distribution image H20 for the distribution user (see FIG. 12).
- the interface display layer J2 in the distribution image H20 is as described above.
- the second drawing unit 202A draws the various operation areas G100 (see FIG. 11) and the smartphone small window area G120 (see FIG. 12) described above.
- the second drawing unit 202A may always draw the various operation regions G100 and the smartphone small window region G120, or may omit drawing part or all of them as appropriate.
- The second drawing unit 202A may draw operation areas G117 and G118 (see FIG. 11) for approving or not approving collaboration in response to a collaboration approval/disapproval instruction from the viewing user information acquisition unit 222A, which will be described later.
- the third drawing unit 203A draws the image area (an example of the third image area) of the remaining layers in the distribution image H20 for the distribution user (see FIG. 12). That is, the third drawing unit 203A draws image areas of layers other than the distribution avatar display layer J0 and the interface display layer J2 in the distribution image H20. For example, the third drawing unit 203A may draw each image area of the background layer J3 and the auxiliary information display layer J5 according to the hierarchical structure of the distribution image H20.
- In this embodiment, the distribution image H20 includes, in order from the front side, the distribution avatar display layer J0, the interface display layer J2, the auxiliary information display layer J5, the user avatar display layer J1, and the background layer J3.
- the third rendering unit 203A renders each image area of the auxiliary information display layer J5, the user avatar display layer J1, and the background layer J3.
- In response to a gift drawing instruction from the viewing user information acquisition unit 222A, which will be described later, the third drawing unit 203A may draw a gift object in the image area of the auxiliary information display layer J5 based on, for example, the drawing information of the gift object stored in the first storage unit 240A.
- the drawing information related to the gift object may be stored in the first storage unit 240A for each gift ID.
- the gift drawing instruction may include a gift ID specifying the gift object to be drawn and coordinate information.
- the coordinate information is information that indicates the drawing position in the distribution image H20, and may change in time series.
- the coordinate information may include motion information representing the motion of the gift object.
- the motion information may represent, for example, motion such as rolling and falling. As a result, it is possible to diversify the expression of the gift object.
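- The shape of such a gift drawing instruction can be sketched as a small data structure. The sketch below is illustrative only; the field names and the simple keyframe playback helper are assumptions, not the actual instruction format.

```python
# Illustrative sketch only; a gift ID selects per-gift drawing information in
# storage, and time-series coordinates may carry motion such as rolling/falling.
from dataclasses import dataclass

@dataclass
class GiftKeyframe:
    t: float               # time within the animation
    x: float
    y: float
    roll_deg: float = 0.0  # simple "rolling" motion information

@dataclass
class GiftDrawInstruction:
    gift_id: str           # keys into drawing information stored per gift ID
    keyframes: list        # coordinate information changing in time series

def position_at(instr: GiftDrawInstruction, t: float) -> GiftKeyframe:
    """Pick the latest keyframe at or before t (no interpolation, for brevity)."""
    past = [k for k in instr.keyframes if k.t <= t]
    return past[-1] if past else instr.keyframes[0]

falling_heart = GiftDrawInstruction(
    gift_id="G12_heart",
    keyframes=[GiftKeyframe(0.0, 0.5, 1.0),
               GiftKeyframe(0.5, 0.5, 0.5, 90.0),
               GiftKeyframe(1.0, 0.5, 0.0, 180.0)],  # falls while rolling
)
print(position_at(falling_heart, 0.6))  # -> the keyframe at t=0.5
```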
- Similarly, in response to a viewing user drawing instruction from the viewing user information acquisition unit 222A, which will be described later, the third drawing unit 203A may draw the viewer avatar M1 of the viewing user in the image area of the user avatar display layer J1 based on, for example, the avatar drawing information stored in the first storage unit 240A (see table 700 in FIG. 16).
- The viewing user drawing instruction may include various information for drawing the viewer avatar M1 of the viewing user who is viewing the specific content.
- the various information for drawing the viewer avatar M1 may include information of the viewer avatar M1 (for example, avatar ID, face part ID, hairstyle part ID, clothing part ID, etc.) and coordinate information.
- the viewing user drawing instruction may include information representing the state of the viewing user (or information representing the state of the viewer avatar M1 based thereon).
- the movement of the viewing user can be expressed in the distribution image H20 for the distribution user, so that the feeling of unity between the distribution user and the viewing user can be enhanced.
- the third drawing unit 203A may draw various comments in the image area of the user avatar display layer J1 in response to a comment drawing instruction from the viewing user information acquisition unit 222A, which will be described later.
- the comment drawing instruction may include a comment ID and coordinate information.
- the drawing information related to the standard comment may be stored in the first storage unit 240A for each comment ID.
- Text information (for example, a message such as a chat) that can be transmitted from the terminal device 20B on the content viewing side may be used instead of the comment ID related to the standard comment.
- the third drawing unit 203A may draw a background image corresponding to the background ID selected by the distribution user on the background layer J3.
- a background image may be stored in the first storage unit 240A for each background ID. Note that the background image may be customized by the distribution user.
- the second drawing processing unit 200B draws the above-described home image H1 (see FIG. 6) (an example of the image in the second virtual space).
- the second drawing processing unit 200B generates the home image H1 (see FIG. 6), for example, at a predetermined frame rate.
- the second drawing processing unit 200B includes a fourth drawing unit 204B and a fifth drawing unit 205B.
- the fourth drawing unit 204B draws the image area (an example of the fourth image area) of the interface display layer J2 in the home image H1 (see FIG. 6).
- the interface display layer J2 in the home image H1 is as described above.
- the fourth drawing section 204B may change the arrangement of the plurality of operation regions G300 in response to a command from the first input processing section 252A, which will be described later (see FIGS. 6 and 7A).
- the fifth rendering unit 205B renders the image area (an example of the fifth image area) of the remaining layers in the home image H1 (see FIG. 6). That is, the fifth drawing unit 205B draws the image area of the layers other than the interface display layer J2 in the home image H1. For example, the fifth drawing unit 205B may draw each image area of the background layer J3 and the auxiliary information display layer J5 according to the hierarchical structure of the home image H1. As a result, it is possible to efficiently provide various types of information that the user may need without greatly increasing the processing load related to drawing.
- the home image H1 includes the interface display layer J2, the auxiliary information display layer J5, and the background layer J3 in order from the front side.
- the fifth drawing unit 205B draws each image area of the auxiliary information display layer J5 and the background layer J3.
- the first information generation unit 210A generates various types of information (hereinafter referred to as "distribution user information") related to distribution users.
- the distribution user information includes information representing the status of the distribution user (the status of the distribution user detected by the first detection section 211A) used by the first user interface section 250A, which will be described later.
- the distribution user information includes information necessary for drawing the distribution image H21 (see FIG. 13) for the viewing user.
- the distribution user information includes information (for example, avatar ID, face part ID, hairstyle part ID, clothing part ID, etc.) for drawing the distributor avatar M2.
- the distribution user information may include the voice information acquired via the microphone of the input unit 24 or the like.
- the first information generator 210A includes a first detector 211A, a first voice information generator 212A, and a first text information generator 213A.
- The first detection unit 211A detects the distribution user's state (the orientation, position, movement, or the like of the user). The first detection unit 211A may detect the distribution user's state via the input unit 24 described above. While the home image H1 is being rendered, the first detection unit 211A may detect the state of the distribution user at intervals corresponding to the frame rate of the home image H1, for example. Further, while the distribution image H20 for the distribution user is being drawn, the first detection unit 211A may detect the state of the distribution user at intervals corresponding to the frame rate of the distribution image H20 for the distribution user (see FIG. 12), for example. As a result, it is possible to detect (update) the state of the distribution user (and accordingly the display of the distributor avatar M2) at a high frequency.
- The first detection unit 211A may be switched between on and off states according to an operation by the distribution user. For example, when the distribution user operates a distribution start button (see the operation area G107 in FIG. 11) that can be displayed on the interface display layer J2, the operation is detected by the first input detection unit 251A described later, and the first detection unit 211A may transition to the on state in response to a command from the first input processing unit 252A described later.
- By simplifying the display of the interface display layer J2, it is possible to improve the visibility of the layers behind it.
- The first voice information generation unit 212A generates voice information based on the distribution user's utterances. For example, the first voice information generation unit 212A acquires the voice information of the distribution user via the microphone of the input unit 24 or the like. Note that the first voice information generation unit 212A may process the voice data obtained via the input unit 24 to generate the voice information of the distribution user. In this case, the processing method may be selectable by the distribution user, or the processing of the voice data may be realized manually by the distribution user.
- The first voice information generation unit 212A may be switched between on and off states according to an operation by the distribution user. For example, when the distribution user operates a mute button (not shown) that can be displayed on the interface display layer J2, the operation is detected by the first input detection unit 251A described later, and the first voice information generation unit 212A may transition to the off state in response to a command from the first input processing unit 252A described later.
- the first text information generation unit 213A generates text information based on the distribution user's speech and/or character input. For example, the first text information generation unit 213A generates text information when the distribution user interacts (chat, etc.) with the viewing user.
- The first text information generation unit 213A may be switched between on and off states according to an operation by the distribution user. For example, when the distribution user operates a comment button (see the operation area G101 in FIG. 11) that can be displayed on the interface display layer J2, the operation is detected by the first input detection unit 251A described later, and the first text information generation unit 213A may transition to the on state in response to a command from the first input processing unit 252A described later. As a result, it is possible to keep the display of the interface display layer J2 simple (and visible) while increasing the variety of operations that can be performed via the interface display layer J2.
- The first information generation unit 210A may include, as a set in the distribution user information, information representing the state of the distribution user (or information representing the state of the distributor avatar M2 based thereon; hereinafter simply referred to as the "state information of the distributor avatar M2"), information on the distributor avatar M2 (for example, the avatar ID, face part ID, hairstyle part ID, clothing part ID, and the like), and, as appropriate, voice information and text information.
- the first information generation unit 210A generates the distribution user information in such a manner that a time stamp is attached to dynamically changing information such as the state information, voice information, and text information of the distributor avatar M2.
- Based on the received distribution user information, the terminal device 20B on the content viewing side can output the distribution image H21 as described above, accompanied by audio output, as the specific content by the distribution user.
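- Packaging the pieces of distribution user information as a time-stamped set, as described above, might look like the following minimal sketch (class and field names are hypothetical; only the idea of stamping the dynamically changing parts comes from the text).

```python
# Illustrative sketch only; the actual wire format is not specified here.
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Stamped:
    payload: object
    timestamp: float = field(default_factory=time.time)

@dataclass
class DistributionUserInfo:
    avatar_id: str                    # static drawing information (part IDs omitted)
    avatar_state: Stamped             # orientation / position / movement
    voice: Optional[Stamped] = None   # voice information, if any
    text: Optional[Stamped] = None    # text information, if any

packet = DistributionUserInfo(
    avatar_id="A001",
    avatar_state=Stamped({"yaw": -30.0, "x": -0.5}),
    voice=Stamped(b"<audio frame>"),
)
# The content-viewing terminal can order and synchronize playback by timestamp.
print(packet.avatar_state.timestamp <= time.time())
```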
- the first communication unit 220A communicates with the server device 10 and other terminal devices 20 (for example, the terminal device 20B on the content viewing side).
- the first communication unit 220A includes a distribution processing unit 221A and a viewing user information acquisition unit 222A.
- the distribution processing unit 221A transmits the distribution user information generated by the first information generation unit 210A to the terminal device 20B on the content viewing side. Note that, in a modified example, the distribution processing unit 221A may transmit moving image data based on the distribution image H20 generated by the first drawing processing unit 200A to the terminal device 20B.
- For example, the distribution processing unit 221A may transmit the distribution user information in real time to the terminal device 20B on the content viewing side (that is, it may realize live distribution) in response to an operation by the distribution user.
- Alternatively, the distribution processing unit 221A may transmit moving image data manually edited by the distribution user, or the distribution user information after editing, to the terminal device 20B on the content viewing side.
- the viewing user information acquisition unit 222A acquires various viewing user information from the terminal device 20B on the content viewing side.
- the viewing user information acquisition unit 222A generates the above-described gift drawing instruction, viewing user drawing instruction, comment drawing instruction, collaboration approval/disapproval instruction, etc. based on the obtained viewing user information.
- the viewing user information includes information necessary for generating these gift drawing instructions, viewing user drawing instructions, comment drawing instructions, collaboration approval/disapproval instructions, and the like.
- The first display unit 230A outputs the distribution image H20 for the distribution user (see FIG. 12) generated by the first drawing processing unit 200A and the home image H1 generated by the second drawing processing unit 200B to the display unit 23 of the terminal device 20A.
- the display unit 23 is in the form of a head-mounted display as an example.
- the first storage unit 240A stores the above-described avatar drawing information (see table 700 in FIG. 16) and the like.
- the first user interface unit 250A detects various inputs of the distribution user via the input unit 24, and executes processing according to the various inputs.
- the first user interface portion 250A includes a first input detection portion 251A and a first input processing portion 252A.
- the first input detection unit 251A detects various inputs by the distribution user via the interface display layer J2 described above. For example, the first input detection unit 251A detects inputs through various operation areas G100 (see FIG. 11), the smartphone small window area G120 (see FIG. 12), the operation area G300, and the like.
- Based on the state of the distribution user detected by the first detection unit 211A and the state of the interface display layer J2 (for example, the positions and display states of the various operation regions G100 and the smartphone small window region G120), the first input detection unit 251A may detect an input by the distribution user via the interface display layer J2. For example, when the first input detection unit 251A detects a motion of tapping one of the various operation areas G100, it may detect a selection operation on that operation area. This allows the user to perform a selection operation with a simple movement in the virtual space.
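- A tap-style selection of this kind reduces to a hit test between the detected fingertip position and the operation regions of the interface display layer J2. The following minimal sketch is illustrative only (names and the 2D layout are assumptions).

```python
# Illustrative sketch only; coordinates are normalized within the layer plane.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Region:
    region_id: str
    x: float
    y: float
    w: float
    h: float
    visible: bool = True

def detect_selection(fingertip: Tuple[float, float], tapping: bool,
                     regions: List[Region]) -> Optional[str]:
    """Return the ID of the visible operation region being tapped, if any."""
    if not tapping:
        return None
    fx, fy = fingertip
    for r in regions:
        if r.visible and r.x <= fx <= r.x + r.w and r.y <= fy <= r.y + r.h:
            return r.region_id
    return None

regions = [Region("G101_comment", 0.0, 0.0, 0.2, 0.1),
           Region("G107_start", 0.3, 0.0, 0.2, 0.1)]
print(detect_selection((0.35, 0.05), tapping=True, regions=regions))  # G107_start
```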
- the first input processing unit 252A executes various processes (examples of various second processes) in response to various inputs detected by the first input detection unit 251A in the home image H1.
- The various processes are arbitrary; as described above, they may be, for example, a process of changing the arrangement of the user interface (for example, the plurality of planar operation regions G300), or a process of moving the user to an arbitrary or specific location in the virtual space (for example, a process of moving the distribution user to a location for distribution).
- Various types of processing may be appropriately accompanied by various types of drawing processing by the second drawing processing unit 200B and screen transitions (for example, movement in virtual space).
- Further, the first input processing unit 252A executes various processes (examples of various first processes) according to various inputs detected by the first input detection unit 251A in the distribution image H20 for the distribution user (see FIG. 12).
- These processes are arbitrary; as described above, they may include, for example, the process of starting distribution of the specific content, the process of ending distribution of the specific content, the voice input process, the process by which the distribution user receives a gift, and the like.
- Various types of processing may be appropriately accompanied by various types of drawing processing by the first drawing processing unit 200A and screen transitions (for example, movement in virtual space).
- For example, the process for transitioning to a state in which the process of receiving a gift can be executed may be accompanied by the drawing of notification information (to the effect that a gift has been given) by the third drawing unit 203A of the first drawing processing unit 200A and the drawing of a receive button by the second drawing unit 202A of the first drawing processing unit 200A.
- As described above, the distribution image H20 for the distribution user has a plurality of layers, so the processing load related to drawing can be reduced. For example, when distributing the specific content in a moving image format, by drawing (updating) the image area of the distribution avatar display layer J0 at a higher frequency than the image area of the interface display layer J2, it is possible to efficiently reduce the overall processing load when drawing the distribution image H20 for the distribution user.
- Further, by layering the distribution image H20 for the distribution user, it is possible to effectively give the user a sense of distance, that is, a sense of "space".
- In addition, by layering the distribution image H20 for the distribution user, the image area of each layer can be arranged at a distance appropriate to its attributes; for example, it is possible to draw a highly operable user interface in a manner that does not cause motion sickness while also drawing an easily visible distributor avatar M2 and the like. A sketch of such per-layer update scheduling follows.
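- The per-layer update idea can be pictured as a redraw schedule in which each layer has its own rate and the compositor reuses the last rendered image of any layer that is not due. The rates below are illustrative assumptions, not values from the description.

```python
# Illustrative sketch only; rates are assumed for the example.
UPDATE_HZ = {"J0_avatar": 60, "J2_interface": 10, "J5_aux": 10,
             "J1_user_avatar": 20, "J3_background": 1}
MASTER_HZ = 60  # the compositor runs at the highest layer rate

def layers_to_redraw(frame_index: int) -> list:
    """A layer is redrawn only on frames matching its own rate; otherwise its
    previously rendered image is composited again unchanged."""
    return [layer for layer, hz in UPDATE_HZ.items()
            if frame_index % (MASTER_HZ // hz) == 0]

print(layers_to_redraw(0))  # all layers on the first frame
print(layers_to_redraw(1))  # ['J0_avatar'] only
print(layers_to_redraw(3))  # ['J0_avatar', 'J1_user_avatar']
```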
- FIG. 17 is a schematic block diagram showing the functions of the terminal device 20B on the content viewing side.
- The terminal device 20B includes a first drawing processing unit 200A', a second drawing processing unit 200B, a second information generation unit 210B, a second communication unit 220B, a second display unit 230B, a second storage unit 240B, and a second user interface unit 250B.
- the terminal control unit 25 of the terminal device 20B cooperates with the terminal communication unit 21, the terminal storage unit 22, the display unit 23, and the input unit 24 to execute the virtual reality application.
- The second storage unit 240B can be realized by the terminal storage unit 22 of the terminal device 20B shown in FIG. 1.
- the first drawing processing unit 200A' draws a distribution image H21 (see FIG. 13) for the viewing user in response to a distribution user drawing instruction from a distribution user information acquisition unit 222B, which will be described later.
- the distribution user drawing instruction is generated based on the distribution user information (distribution user information generated by the first information generation unit 210A of the terminal device 20A) transmitted from the terminal device 20A on the content distribution side.
- the first drawing processing unit 200A' may generate the distribution image H21 (see FIG. 13) for the viewing user, for example, at a frame rate corresponding to the update period of the distribution user information.
- the first drawing processing unit 200A' includes a first drawing unit 201B, a second drawing unit 202B, and a third drawing unit 203B.
- the first drawing unit 201B draws the image area (an example of the first image area) of the distribution avatar display layer J0 in the distribution image H21 for the viewing user (see FIG. 13).
- the distribution avatar display layer J0 in the distribution image H21 for the viewing user is as described above.
- The first drawing unit 201B may draw the distributor avatar M2 in the image area of the distribution avatar display layer J0. In this case, since the avatar drawing information stored in the first storage unit 240A is used, the distributor avatar M2 can be drawn with relatively low-load processing.
- In other respects, the first drawing unit 201B has the same function as the first drawing unit 201A of the terminal device 20A on the content distribution side described above.
- Various information for drawing the distributor avatar M2 may be included in the distribution user drawing instruction. That is, the distribution user drawing instruction may include information on the distributor avatar M2 (for example, the avatar ID, face part ID, hairstyle part ID, clothing part ID, and the like) and information representing the state of the distribution user (or information representing the state of the distributor avatar M2 based thereon).
- the second drawing unit 202B draws the image area (an example of the second image area) of the interface display layer J2 in the distribution image H21 for the viewing user (see FIG. 13).
- In other respects, the second drawing unit 202B has the same function as the second drawing unit 202A of the terminal device 20A on the content distribution side described above.
- an operation area different from the interface display layer J2 in the distribution image H20 for the distribution user may be drawn on the interface display layer J2 in the distribution image H21 for the viewing user.
- For example, the second drawing unit 202B may draw an operation area G108 for giving a gift to the distribution user, an operation area G109 for transmitting a request for collaboration distribution to the distribution user, and the like.
- the request for collaboration distribution may be issued from the distribution user side.
- In this case, an operation area for transmitting a request for collaboration distribution to the viewing user may be drawn on the distribution image H20 for the distribution user (see FIG. 12).
- the third drawing unit 203B draws the image area (an example of the third image area) of the remaining layers in the distribution image H21 for the viewing user (see FIG. 13).
- In other respects, the third drawing unit 203B has the same function as the third drawing unit 203A of the terminal device 20A on the content distribution side described above.
- the content of drawing in the auxiliary information display layer J5 may be changed as appropriate.
- In the distribution image H21 for the viewing user, the viewer avatar M1 (see FIG. 12) may not be drawn, as shown in FIG. 13.
- the second drawing processing unit 200B draws the above-described home image H1 (see FIG. 6) (an example of the image of the second virtual space).
- the second drawing processing unit 200B generates the home image H1 (see FIG. 6), for example, at a predetermined frame rate.
- the second drawing processing unit 200B includes a fourth drawing unit 204B and a fifth drawing unit 205B.
- the second drawing processing section 200B may be substantially the same as the second drawing processing section 200B of the terminal device 20A. As a result, an efficient configuration using common hardware resources can be realized.
- the second information generation unit 210B generates various types of viewing user information related to viewing users.
- Various types of viewing user information include information representing the state of the viewing user (the state of the viewing user detected by the second detection unit 211B) used by the second user interface unit 250B, which will be described later.
- The various viewing user information contains the information required for the viewing user information acquisition unit 222A of the terminal device 20A on the content distribution side described above to generate the gift drawing instruction, the viewing user drawing instruction, the comment drawing instruction, the collaboration approval/disapproval instruction, and the like.
- the second information generator 210B includes a second detector 211B, a second voice information generator 212B, and a second text information generator 213B.
- the second detection unit 211B detects the state of the viewing user (orientation, position, movement, or the like of the user).
- the second detection unit 211B may detect the state of the viewing user via the input unit 24 described above.
- the second detection unit 211B may detect the state of the viewing user, for example, in a cycle corresponding to the frame rate of the home image H1. Further, while the distribution image H21 for the viewing user (see FIG. 13) is being drawn, the second detection unit 211B detects the state of the viewing user, for example, at a cycle corresponding to the update cycle of the distribution user information. good too.
- the second audio information generation unit 212B generates audio information based on the speech of the viewing user. For example, the second audio information generation unit 212B generates the audio information of the viewing user via the microphone of the input unit 24 or the like.
- the second audio information generation unit 212B may process the audio data obtained via the input unit 24 to generate the audio information of the viewing user. In this case, the processing method may be selectable by the viewing user, or the processing of the audio data may be manually realized by the viewing user.
- The second audio information generation unit 212B may be switched between on and off states according to an operation by the viewing user. For example, when the viewing user operates a mute button (not shown) that can be displayed on the interface display layer J2 in the distribution image H21 for the viewing user (see FIG. 13), the operation is detected by the second input detection unit 251B described later, and the second audio information generation unit 212B may transition to the off state in response to a command from the second input processing unit 252B described later.
- the second text information generation unit 213B generates text information based on the viewing user's speech and/or character input. For example, the second text information generation unit 213B generates text information when the viewing user interacts (chat, etc.) with the distribution user.
- The second text information generation unit 213B may be switched between on and off states according to an operation by the viewing user. For example, when the viewing user operates a comment button (not shown in FIG. 13) that can be displayed on the interface display layer J2, the operation is detected by the second input detection unit 251B described later, and the second text information generation unit 213B may transition to the on state in response to a command from the second input processing unit 252B described later.
- The second information generation unit 210B may include, as a set in the viewing user information, information representing the state of the viewing user (or information representing the state of the viewer avatar M1 based thereon; hereinafter simply referred to as the "state information of the viewer avatar M1"), information on the viewer avatar M1 (for example, the avatar ID, face part ID, hairstyle part ID, clothing part ID, and the like), and voice information as appropriate.
- the second information generation unit 210B may generate the viewing user information in such a manner as to add a time stamp to dynamically changing information such as the state information and voice information of the viewer avatar M1.
- Thereby, based on the received viewing user information, the terminal device 20A on the content distribution side can draw the viewer avatar M1 on the user avatar display layer J1 in the distribution image H20 as described above.
- the second communication unit 220B communicates with the server device 10 and other terminal devices 20 (for example, the terminal device 20A on the content distribution side).
- the second communication unit 220B includes a distribution processing unit 221B and a distribution user information acquisition unit 222B.
- the distribution processing unit 221B transmits the viewing user information generated by the second information generation unit 210B to the terminal device 20A on the content distribution side.
- the distribution user information acquisition unit 222B acquires distribution user information from the terminal device 20A on the content distribution side, and gives the distribution user drawing instruction described above to the first drawing processing unit 200A'.
- The distribution user information is transmitted by the distribution processing unit 221A of the terminal device 20A on the content distribution side as described above.
- The second display unit 230B outputs the distribution image H21 for the viewing user generated by the first drawing processing unit 200A' and the home image H1 generated by the second drawing processing unit 200B to the display unit 23 of the terminal device 20B.
- the display unit 23 is in the form of a head-mounted display as an example.
- the second storage unit 240B stores the above-described avatar drawing information (see table 700 in FIG. 16) and the like.
- the second user interface unit 250B detects various inputs of the viewing user via the input unit 24, and executes processing according to the various inputs.
- the second user interface section 250B includes a second input detection section 251B and a second input processing section 252B.
- the second input detection unit 251B detects various inputs by the viewing user via the interface display layer J2 described above. For example, the second input detection unit 251B detects an input through the interface display layer J2 drawn by the second drawing unit 202B and the fourth drawing unit 204B described above.
- Based on the state of the viewing user detected by the second detection unit 211B and the state of the interface display layer J2 (for example, the positions and display states of the various operation regions G100 and the smartphone small window region G120), the second input detection unit 251B may detect an input by the viewing user via the interface display layer J2.
- For example, in the home image H1, the second input detection unit 251B detects various operations performed by the viewing user on the plurality of planar operation areas G300 described above, and in the distribution image H21 for the viewing user (see FIG. 13), it detects various operations performed by the viewing user on the various operation areas (such as the operation area for transmitting a request for collaboration distribution to the distribution user).
- the second input processing unit 252B executes various processes (an example of various second processes) according to various inputs detected by the second input detection unit 251B in the home image H1.
- The various processes are arbitrary; as described above, they may be, for example, a process of changing the arrangement of the user interface (for example, the plurality of planar operation regions G300), or a process of moving the user to an arbitrary or specific location in the virtual space (for example, a process of moving the viewing user to a viewing location).
- Various types of processing may be appropriately accompanied by various types of drawing processing by the second drawing processing unit 200B and screen transitions (for example, movement in virtual space).
- the process for transitioning to a state in which the process of giving a gift can be executed may be accompanied by the drawing of a gift selection screen by the fourth drawing unit 204B of the second drawing processing unit 200B.
- Further, the second input processing unit 252B executes various processes (examples of various second processes) according to various inputs detected by the second input detection unit 251B in the distribution image H21 for the viewing user (see FIG. 13).
- Various types of processing are arbitrary, but as described above, for example, processing for ending viewing of specific content, processing for voice input, processing for presenting a gift to a distribution user, and the like may be performed.
- Various processes may be accompanied by various drawing processes and screen transitions (for example, movement in virtual space) by the first drawing processing unit 200A' as appropriate.
- the process of giving a gift to a distribution user may be accompanied by drawing notification information (notification information indicating that the gift has been received) by the third drawing unit 203B of the first drawing processing unit 200A'.
- FIG. 18 is a schematic block diagram showing the functions of the server device 10.
- the server device 10 mainly includes a communication unit 100 and a storage unit 110.
- The communication unit 100 communicates with the terminal device 20A on the content distribution side and/or the terminal device 20B on the content viewing side to exchange various information (for example, distribution user information, viewing user information, and the like) required for the distribution and/or viewing of specific content by a distribution user.
- the storage unit 110 can store various information required for distribution and/or viewing of specific content by distribution users.
- the distribution image H21 for the viewing user has a plurality of layers as described above.
- For the distribution image H21 for the viewing user, by drawing (updating) the image area on the near side as viewed from the user's viewpoint, such as the interface display layer J2, more frequently than the other image areas, it is possible to efficiently reduce the overall processing load when drawing. Further, by layering the distribution image H21 for the viewing user, it is possible to effectively give the user a sense of distance as a "space".
- In addition, in the distribution image H21 for the viewing user, the image area of each layer can be arranged at a distance appropriate to its attributes; for example, it is possible to draw a highly operable user interface in a manner that does not cause motion sickness while also drawing the operation areas G300 and the like so that they remain easily visible.
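- As a minimal sketch of this differential-update idea (a hypothetical illustration: the layer names echo the description, but the intervals, types, and scheduler are assumptions, not the patent's prescribed implementation), the following TypeScript redraws each layer only when that layer's own refresh interval has elapsed, giving the front interface layer the highest update rate:

```typescript
// Hypothetical sketch of per-layer differential updating. Layer names
// mirror the description (J0, J2, J3); the intervals are illustrative.
interface Layer {
  name: string;
  updateIntervalMs: number; // how often this layer is redrawn
  lastDrawnAt: number;
  draw: () => void;
}

const layers: Layer[] = [
  { name: "J2-interface-front", updateIntervalMs: 16,  lastDrawnAt: 0, draw: () => {/* redraw UI */} },
  { name: "J0-avatar",          updateIntervalMs: 33,  lastDrawnAt: 0, draw: () => {/* redraw avatar */} },
  { name: "J3-background",      updateIntervalMs: 200, lastDrawnAt: 0, draw: () => {/* redraw backdrop */} },
];

function renderFrame(now: number): void {
  for (const layer of layers) {
    // Redraw only the layers whose refresh interval has elapsed, so the
    // front interface region gets the highest effective frame rate.
    if (now - layer.lastDrawnAt >= layer.updateIntervalMs) {
      layer.draw();
      layer.lastDrawnAt = now;
    }
  }
  // Composite the layers back-to-front (background first) elsewhere,
  // e.g. from a requestAnimationFrame loop: renderFrame(performance.now()).
}
```

- Compositing the layers back-to-front after these per-layer updates preserves the sense of depth while keeping most frames cheap, which is the load-reduction effect described above.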
- Note that the server device 10 does not itself have a drawing function for the home image H1 and the distribution image H2 described above; as described above, such drawing functions are implemented by the terminal device 20A on the content distribution side and/or the terminal device 20B on the content viewing side.
- However, part or all of the rendering functions for the home image H1 and/or the distribution image H2 may instead be realized by the server device 10.
- The same may apply, as appropriate, to functions other than the drawing functions.
- FIG. 19 is a flowchart showing an example of the operation of the terminal device 20A on the content distribution side performed in the virtual reality generation system 1 shown in FIG. 1, up to the start of distribution of specific content.
- In step S190, the user who will become the distribution user wears the terminal device 20A, which is a head-mounted display, and activates the virtual reality application.
- the user is positioned in the home space (step S191) and can visually recognize the home image H1.
- the user operates the operation area of the interface display layer J2 in the home image H1 to move to the content production space (step S192).
- The distribution user makes appropriate preparations, such as changing the clothes of the distributor avatar M2 (step S193), and then stands in front of the mirror 805 and visually recognizes the distribution image H20, which is a preparation image before distribution (step S194).
- the distribution image H20 as a preparation image before distribution may include only the distribution avatar display layer J0 and the interface display layer J2, or may include only the distribution avatar display layer J0, the interface display layer J2, and the background layer J3.
- an operation area G107 (see FIG. 11) for starting distribution may be drawn on the interface display layer J2 in the distribution image H20 as a preparatory image before distribution.
- Alternatively, distribution may begin upon entering the content production space.
- When the distribution user views the distribution image H20 and decides to start distribution, the distribution user operates the corresponding operation area G107 (see FIG. 11) on the interface display layer J2 to start distribution of the specific content including the distribution image H20 (step S195).
- As a result, an operation area G300 (see FIGS. 6 and 7A, etc.) through which the specific content can be selected is drawn on the interface display layer J2 in the home image H1 of the terminal device 20B on the content viewing side.
- FIG. 20 is a flowchart showing an example of the operation of the terminal device 20B on the content viewing side performed in the virtual reality generation system 1 shown in FIG. 1, up to the start of viewing the specific content.
- In step S200, a user who is a viewing user wears the terminal device 20B, which is a head-mounted display, and activates the virtual reality application. Thereby, the user is positioned in the home space (step S201) and can visually recognize the home image H1.
- the user operates the operation area G300 of the interface display layer J2 in the home image H1 to select desired specific content (step S202). In this way, the viewing user can start viewing the specific content (step S203).
- FIG. 21 is a flow diagram showing an example of the operations of the terminal device 20A on the content distribution side, the terminal device 20B on the content viewing side, and the server device 10, performed in the virtual reality generation system 1 shown in FIG. 1 during distribution (that is, while a viewing user is viewing specific content).
- In FIG. 21, the left side shows the operation performed by one terminal device 20A on the content distribution side, the center shows the operation performed by the server device 10 (here, one server device 10), and the right side shows the operation performed by one terminal device 20B on the content viewing side.
- In step S210, the distribution user gives a performance in front of the mirror 805.
- the terminal device 20A on the content distribution side generates distribution user information corresponding to the performance or the like.
- Then, the terminal device 20A on the content distribution side transmits the distribution user information to the server device 10.
- The items of distribution user information may be multiplexed by any multiplexing method and transmitted to the server device 10, as long as the correspondence between each transmitted item and a time stamp based on a reference time is clear to both the terminal device 20A on the content distribution side and the terminal device 20B on the content viewing side.
- If such multiplexing is performed, the terminal device 20B on the content viewing side can, upon receiving the distribution user information, process it appropriately according to the time stamps corresponding to the distribution user information.
- The items of distribution user information may each be transmitted via separate channels, or some of them may be transmitted via the same channel.
- A channel may include time slots, frequency bands, and/or spreading codes, and the like. Note that the method of distributing moving images (specific content) using the reference time may be implemented in the manner disclosed in Japanese Patent No. 6803485, which is incorporated herein by reference.
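- A minimal sketch of such reference-time stamping (hypothetical: the patent leaves the multiplexing method open, and all type and field names here are assumptions) might tag each item of motion, voice, or text information with its offset from the shared reference time before merging the streams:

```typescript
// Hypothetical sketch: each item of distribution user information is
// tagged with a time stamp relative to a shared reference time before
// being multiplexed onto one stream. Field names are illustrative.
type Kind = "motion" | "voice" | "text";

interface StampedItem {
  kind: Kind;
  timestampMs: number; // offset from the shared reference time
  payload: Uint8Array;
}

function multiplex(
  referenceTimeMs: number,
  items: { kind: Kind; capturedAtMs: number; payload: Uint8Array }[],
): StampedItem[] {
  // Any multiplexing method is acceptable as long as the receiver can
  // recover the item-to-timestamp correspondence; here we simply merge
  // all kinds into one stream ordered by their reference-time offsets.
  return items
    .map((i) => ({
      kind: i.kind,
      timestampMs: i.capturedAtMs - referenceTimeMs,
      payload: i.payload,
    }))
    .sort((a, b) => a.timestampMs - b.timestampMs);
}
```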
- In step S212, the terminal device 20A on the content distribution side continuously transmits the distribution user information used for drawing the distribution image H21 for the viewing user toward the terminal device 20B, and outputs the distribution image H20 (see FIG. 12) for the distribution user on the terminal device 20A.
- the terminal device 20A on the content distribution side can perform the operations in steps S210 and S212 in parallel with the operations in steps S214 to S222 described below.
- In step S214, the server device 10 transmits (transfers) the distribution user information continuously transmitted from the terminal device 20A on the content distribution side to the terminal device 20B on the content viewing side.
- the terminal device 20B on the content viewing side receives the distribution user information from the server device 10 and stores it in the storage unit 140.
- When receiving data from the server device 10, the terminal device 20B on the content viewing side can temporarily store (buffer) the received distribution user information in the terminal storage unit 22 (see FIG. 1).
- In step S218, the terminal device 20B on the content viewing side uses the distribution user information received from the terminal device 20A on the content distribution side via the server device 10 and stored as described above to generate the distribution image H21 for the viewing user and thereby reproduce the specific content.
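- The buffering described above can be sketched as a small playout buffer that releases items in time-stamp order (again hypothetical; it reuses the StampedItem shape from the previous sketch, and the API is an assumption, not the patent's implementation):

```typescript
// Hypothetical viewer-side buffer: received items are held briefly and
// released in time-stamp order to drive the drawing of the distribution
// image H21. Reuses StampedItem from the multiplexing sketch above.
class PlayoutBuffer {
  private items: StampedItem[] = [];

  push(item: StampedItem): void {
    this.items.push(item);
    this.items.sort((a, b) => a.timestampMs - b.timestampMs);
  }

  // Release every item whose time stamp has come due, given the current
  // playback position on the reference-time axis.
  popDue(playbackPosMs: number): StampedItem[] {
    const due = this.items.filter((i) => i.timestampMs <= playbackPosMs);
    this.items = this.items.filter((i) => i.timestampMs > playbackPosMs);
    return due;
  }
}
```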
- In step S220, the terminal device 20B on the content viewing side generates viewing user information and transmits the viewing user information to the terminal device 20A on the content distribution side via the server device 10.
- the viewing user information may be generated only when, for example, the viewing user performs an operation to give a gift.
- In step S222, the server device 10 transmits (transfers) the viewing user information received from the terminal device 20B on the content viewing side to the terminal device 20A on the content distribution side.
- In step S224, the terminal device 20A on the content distribution side receives the viewing user information via the server device 10.
- Thereafter, the terminal device 20A on the content distribution side can basically perform the same operation as in step S210.
- the terminal device 20A on the content distribution side generates a gift drawing instruction based on the viewing user information received in step S224, and draws the corresponding gift object on the auxiliary information display layer J5 in the distribution image H20.
- the process shown in FIG. 21 may be continuously executed until the distribution of the specific content by the distribution user is completed or until there are no more users viewing the specific content.
- the execution subject of each process can be changed in various manners as described above.
- the process of generating the distribution image H20 for the distribution user may be implemented by the server device 10 instead of the terminal device 20A.
- the process of generating the distribution image H21 for the viewing user may be implemented by the terminal device 20A or the server device 10.
- In that case, the data of the distribution image H21 for the viewing user may be received instead of the distribution user information.
- the process of drawing the gift object on the auxiliary information display layer J5 in the distribution image H20 based on the viewing user information may be realized by the server device 10 instead of the terminal device 20A.
- Although the above embodiment has been described as a mode in which wearable devices (head-mounted displays) are used as the terminal devices 20A and 20B, the virtual reality generation system 1 according to the present invention may also be configured as a metaverse (a virtual space on the Internet) using mobile phones, smartphones, tablet terminals, PCs (personal computers), and the like as the terminal device 20A (an example of a first terminal device) and the terminal device 20B (an example of a second terminal device).
- Modifications for such a mode are described below; as points of discussion when the system is configured as a metaverse, the scenes that can be set, screen switching, and the form of the user interface (UI) are explained.
- For matters not touched on below, the description of the above-described embodiment applies within a technically reasonable range.
- In the following, the user interface is realized by the input unit 24, the first user interface unit 250A, and/or the second user interface unit 250B of the above-described embodiment.
- For a scene in which multiple users view a video together (communal viewing), the user interface preferably has a hierarchical menu of operations such as playback, stop, and language setting.
- In other words, even when users view together on a single large screen, it is preferable that the volume, subtitle language settings, play/stop/fast-forward, and the like can be operated individually, and that, in addition to the direct operations, the corresponding authority settings are organized hierarchically.
- Examples of the user interface include [play video > list display > share / listen only by yourself > pin here / operate together with everyone] and [screen operation during playback > (if authorized) play/stop/fast-forward/rewind/end].
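- A minimal sketch of such a hierarchical, authority-aware menu (hypothetical names and authority levels; the patent does not fix a data structure) might look like this in TypeScript:

```typescript
// Hypothetical hierarchical menu with per-user authority, e.g.
// [screen operation during playback > (if authorized) play/stop/...].
type Authority = "viewer" | "operator" | "owner";

interface MenuNode {
  label: string;
  requiredAuthority?: Authority;
  action?: () => void;
  children?: MenuNode[];
}

const playbackMenu: MenuNode = {
  label: "screen operation during playback",
  children: [
    { label: "play",  requiredAuthority: "operator", action: () => {/* start */} },
    { label: "stop",  requiredAuthority: "operator", action: () => {/* stop */} },
    { label: "volume (own output only)",  requiredAuthority: "viewer", action: () => {/* per-user volume */} },
    { label: "subtitle language",         requiredAuthority: "viewer", action: () => {/* per-user subtitles */} },
  ],
};

function visibleItems(node: MenuNode, authority: Authority): MenuNode[] {
  const rank = { viewer: 0, operator: 1, owner: 2 };
  // Show only the entries this user's authority level permits, so shared
  // operations stay restricted while personal settings remain individual.
  return (node.children ?? []).filter(
    (c) => rank[authority] >= rank[c.requiredAuthority ?? "viewer"],
  );
}
```

- The design point is that individual settings (volume, subtitles) and shared settings (play/stop) live in the same tree but are gated by different authority levels, which is one way to realize the hierarchical authority described above.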
- Also, since each user turns their gaze toward the screen when viewing together, it is easier to select the target direction than with the usual facial expression emote selection.
- Explicit and quick setting of reactions, for example via a semi-automatic mode, is therefore desirable.
- Examples of the user interface include [communal viewing screen > spectator mode > stick up/down/left/right switches among {smile mark, heart mark, applause mark, semi-auto}] and [semi-auto > follow when other users react / respond automatically when there is cheering in the video].
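- A minimal sketch of the semi-auto mode (hypothetical: the follow threshold and the cheering-detection event are assumptions, not part of the patent) could be a small reactor that joins in when enough other users react and fires applause on detected cheering:

```typescript
// Hypothetical semi-auto reaction mode: follow other users' reactions
// and fire automatically on detected cheering in the video audio.
type Reaction = "smile" | "heart" | "applause";

class SemiAutoReactor {
  constructor(private send: (r: Reaction) => void) {}

  // Follow: if enough nearby users react the same way, join in.
  onOthersReacted(reaction: Reaction, count: number): void {
    if (count >= 3) this.send(reaction);
  }

  // Auto-respond: when the video's audio is classified as cheering,
  // emit applause without an explicit stick operation.
  onAudioEvent(event: "cheering" | "silence"): void {
    if (event === "cheering") this.send("applause");
  }
}

// Usage: stick up/down/left/right would select {smile, heart, applause,
// semi-auto}; choosing semi-auto hands control to this class.
const reactor = new SemiAutoReactor((r) => console.log("reaction:", r));
reactor.onOthersReacted("heart", 4); // -> reaction: heart
```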
- In a lecture scene, the lecturer can see the scenario, the materials to be shared on the screen, the display user interface, and the like, while the other users see the shared screen.
- For the other users, neither the menu nor the user interface need be displayed (although the virtual cast is displayed).
- An example of the user interface may be a mode such as [advance to next page / text to be spoken now / next image]. Since the user who is the speaker cannot look back, the operations and a preview of the prepared images and text are displayed on the speaker's screen.
- For the seated-response mode, a scene in which users answer at high speed using only predefined options, as in quiz shows and online education, an example of the user interface is to display the answer options and a raise-hand button for each user. To keep the interaction from becoming too free-form, once a user has been designated as the organizer, the user interface is fixed until certain conditions are met. A question set consists of questions and answer options, and may include a timer, scores, the number of coins to bet, and the like.
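- A minimal sketch of such a question set and its locked answering state (hypothetical field names; the patent only lists the ingredients) might be:

```typescript
// Hypothetical question set for the seated-response mode: the organizer
// locks the UI to one question at a time, and viewers can only pick an
// option (or raise a hand). Field names are illustrative assumptions.
interface Question {
  prompt: string;
  options: string[];
  timerSec?: number;
  coinsToBet?: number;
}

interface SeatedResponseState {
  organizerId: string;
  current: Question;
  // The UI stays fixed to this question until the organizer advances,
  // keeping the interaction from becoming too free-form.
  locked: boolean;
  answers: Map<string, number>; // userId -> chosen option index
}

function answer(state: SeatedResponseState, userId: string, optionIndex: number): void {
  // Accept only first answers to a locked, valid question.
  if (!state.locked || optionIndex >= state.current.options.length) return;
  if (!state.answers.has(userId)) state.answers.set(userId, optionIndex);
}
```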
- For object placement, the hierarchy may be, for example, [placement > shop > outdoor > purchase (balance/payment/loan) > confirmation > place > orientation > whether or not other users can operate it...].
- If objects are arranged directly in the three-dimensional space, it is difficult to align their orientations because they can only be handled by "grabbing and dropping".
- The same applies to subsequent attributes, such as whether or not other users can see, touch, or move the object. Therefore, as an example of the user interface, a form such as [walk freely in 3D space > open menu > shop] can be adopted, and the menu and shop may be hidden from other users.
- Regarding the user's viewpoint, operating only from one's own point of view is inconvenient when one wants to see the scene from a bird's-eye or third-person view. Therefore, for viewpoint switching, it may be possible to view in a bird's-eye state when [overhead view] is selected on the user interface.
- An example of the user interface may be a form such as [menu > drone photo > confirm to enter drone operation mode].
- In that case, the avatar may be fixed, and an aerial shot may be taken from a stick-operated drone while the user creates motions with emotes or hand gestures.
- In conjunction with the continuous-shooting mode, a confirmation screen asking "Do you want to share?" may be displayed, for example in a form such as [arrange several shots and select left and right / share vertically / filter / layout (portrait, landscape, square) / delete].
- When the scene is a game involving vehicle operation, such as a car racing game, ranking display, map display, and the like are conceivable.
- In this case, control is replaced with a steering-wheel operation, and when the user selects getting into the car, the user interface is restricted to the drive UI.
- The hand controller may substitute for the steering wheel and the accelerator/brake instead of the hands, so that left-right movement corresponds to the steering wheel and up-down movement corresponds to gear changes and accelerator/brake operation.
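- A minimal sketch of this axis mapping (hypothetical ranges and scaling factors; the patent only states the correspondence) converts a stick sample into a drive command:

```typescript
// Hypothetical mapping of hand-controller axes to the drive UI:
// left-right becomes steering, up-down becomes accelerator/brake.
interface StickInput {
  x: number; // -1 (left) .. +1 (right)
  y: number; // -1 (down) .. +1 (up)
}

interface DriveCommand {
  steeringAngleDeg: number; // negative = left turn
  throttle: number;         // 0..1, accelerator
  brake: number;            // 0..1
}

function mapStickToDrive(stick: StickInput): DriveCommand {
  return {
    steeringAngleDeg: stick.x * 90,      // left-right -> steering wheel
    throttle: stick.y > 0 ? stick.y : 0, // push up -> accelerate
    brake: stick.y < 0 ? -stick.y : 0,   // pull down -> brake
  };
}

// Usage: while the user is in the car, other UI is suppressed and each
// frame's stick sample is converted into a drive command.
const cmd = mapStickToDrive({ x: -0.4, y: 0.8 });
// -> { steeringAngleDeg: -36, throttle: 0.8, brake: 0 }
```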
- In addition, a view related to smartphone compatibility may be displayed. Unlike the so-called virtual smartphone display, this relates to smartphone purchase decisions: displaying the user interface (smartphone UI) in a manner that hides the three-dimensional background within the screen makes such decisions easier.
- As the user interface, for example, the way the trackpads and sticks mounted on various hand controllers are used (swipe, touch, long press, drag, etc.) can be defined according to the user's environment.
- the hand controllers, trackpads, and sticks referred to here, including the hand controllers in the car racing game described above, may be real objects or virtual objects drawn in virtual space.
1 virtual reality generation system
3 network
10 server device
11 server communication unit
12 server storage unit
13 server control unit
20, 20A, 20B terminal device
21 terminal communication unit
22 terminal storage unit
23 display unit
24 input unit
25 terminal control unit
30A, 30B studio unit
140 storage unit
200 communication unit
200A, 200A' first rendering processing unit
201A, 201B first rendering unit
202A, 202B second rendering unit
203A, 203B third rendering unit
200B second rendering processing unit
204B fourth rendering unit
205B fifth rendering unit
210 storage unit
210A first information generation unit
210B second information generation unit
211A first detection unit
211B second detection unit
212A first voice information generation unit
212B second voice information generation unit
213A first text information generation unit
213B second text information generation unit
220A first communication unit
220B second communication unit
221A, 221B distribution processing unit
222A viewing user information acquisition unit
222B distribution user information acquisition unit
230A first display unit
230B second display unit
240A first storage unit
240B second storage unit
250A first user interface unit
250B second user interface unit
251A first input detection unit
251B second input detection unit
252A first input processing unit
252B second input processing unit
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- Optics & Photonics (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
- Information Transfer Between Computers (AREA)
- Image Generation (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
The configuration of the server device 10 will be described concretely. The server device 10 is constituted by a server computer. The server device 10 may be realized by a plurality of server computers operating in cooperation; for example, it may be realized cooperatively by a server computer that provides various contents, a server computer that realizes various authentication servers, and the like. The server device 10 may also include a web server. In that case, some of the functions of the terminal device 20 described later may be realized by a browser processing an HTML document received from the web server and various programs (JavaScript) accompanying it.
The configuration of the terminal device 20 will be described. As shown in FIG. 1, the terminal device 20 includes a terminal communication unit 21, a terminal storage unit 22, a display unit 23, an input unit 24, and a terminal control unit 25.
The server control unit 13, in cooperation with the terminal device 20, displays an image of the virtual space on the display unit 23 and updates the image of the virtual space in accordance with the progress of the virtual reality and the user's operations.
Claims (27)
- An information processing system comprising:
a first detection unit that detects a state of a first user who distributes content including an image of a first virtual space;
a first drawing processing unit that draws the image of the first virtual space so as to be visible to the first user or a second user via a first terminal device of the first user or a second terminal device of the second user who views the content; and
a first storage unit that stores a first display medium associated with the first user, wherein
the image of the first virtual space has a plurality of layers,
the first drawing processing unit comprises:
a first drawing unit that draws a first image area of a layer, among the plurality of layers, relating to the first display medium; and
a second drawing unit that draws a second image area of a layer, among the plurality of layers, relating to a first user interface, and
the first drawing unit changes a state of the first display medium in the first image area based on the state of the first user detected by the first detection unit.
- The information processing system according to claim 1, wherein the first display medium is in the form of a character having a frontal direction,
the first detection unit detects, as the state of the first user, user information on at least one of the orientation, position, and movement of the first user in real space, and
the first drawing unit draws the first image area such that at least one of the orientation, position, and movement of the character according to the user information is realized.
- The information processing system according to claim 1 or 2, wherein the first drawing unit draws the first display medium semi-transparently.
- The information processing system according to any one of claims 1 to 3, wherein the first display medium functions as an avatar of the first user, and
the first drawing unit generates the first image area by drawing the first display medium arranged in the first virtual space based on an image reflected in a mirror provided in the first virtual space.
- The information processing system according to claim 4, wherein the first drawing processing unit further comprises a third drawing unit that draws a third image area of at least one of, among the plurality of layers, a layer relating to a background and a layer located closer to a viewpoint for the first user than the background and farther from that viewpoint than the second image area.
- The information processing system according to claim 5, wherein the third drawing unit includes information relating to the second user in the third image area.
- The information processing system according to claim 6, further comprising a second storage unit that stores a second display medium associated with the second user, wherein the third drawing unit draws the second display medium as the information relating to the second user.
- The information processing system according to claim 5, wherein the third drawing unit includes, in the third image area, at least one of input information from the second user, guidance/notification information for the first user and/or the second user, and item information including a gift object presented to the first user.
- The information processing system according to any one of claims 1 to 8, further comprising:
a first input detection unit that detects an input by the first user via the first user interface based on the state of the first user detected by the first detection unit and a state of the second image area; and
a first input processing unit that executes various first processes according to the input detected by the first input detection unit.
- The information processing system according to claim 9, wherein the various first processes include at least one of a process of starting distribution of the content, a process of ending distribution of the content, a voice input process, a process of receiving a gift, a lottery process for items drawable in association with the first display medium, a selection/exchange process for items drawable in association with the first display medium, a character input process, an approval process for another user, a process of changing a value of a parameter of the content, and a process for transitioning to a state in which these processes can be executed.
- The information processing system according to any one of claims 1 to 10, wherein the first drawing processing unit is realized by the first terminal device, an external processing device communicably connected to the first terminal device, the second terminal device communicably connected to the first terminal device, a terminal device of the second user communicably connected to the first terminal device and different from the second terminal device, or any combination thereof.
- The information processing system according to any one of claims 1 to 11, wherein the first virtual space is cylindrical or celestial-sphere shaped.
- The information processing system according to any one of claims 1 to 12, further comprising a second drawing processing unit that draws an image of a second virtual space visible to the first user or the second user via the first terminal device or the second terminal device, wherein
the image of the second virtual space has a plurality of layers,
the second drawing processing unit comprises:
a fourth drawing unit that draws a fourth image area of a layer, among the plurality of layers, relating to a second user interface; and
a fifth drawing unit that draws a fifth image area of a layer, among the plurality of layers, located behind the fourth image area as viewed from a viewpoint for the first user or the second user, and
the fourth drawing unit draws a plurality of planar operation areas relating to the second user interface in a manner arranged along a spherical or curved surface.
- The information processing system according to claim 13, wherein the fourth drawing unit draws at least some of the plurality of planar operation areas in a plurality of rows along a first curved surface around a predetermined reference axis extending in the vertical direction of the second virtual space, at a position corresponding to the position of the second user in the second virtual space.
- The information processing system according to claim 14, wherein the fourth drawing unit draws another part of the plurality of planar operation areas in a plurality of rows along a second curved surface offset rearward from the first curved surface around the predetermined reference axis, as viewed from the viewpoint for the first user or the second user.
- The information processing system according to claim 15, further comprising:
a second detection unit that detects a state of a second user who has entered the second virtual space;
a second input detection unit that detects an input by the second user via the second user interface based on the state of the second user detected by the second detection unit and a state of the fourth image area; and
a second input processing unit that executes various second processes according to the input detected by the second input detection unit.
- The information processing system according to claim 16, wherein the various second processes include at least one of a process of changing the arrangement of the second user interface, a process of moving the first user to an arbitrary or specific location in the first virtual space, a process of starting distribution of the content, a process of starting viewing of the content, a process of ending viewing of the content, a voice input process, a process of giving a gift, a lottery process for avatar items, a selection/exchange process for avatar items, a character input process, and a process for transitioning to a state in which any of these processes can be executed.
- The information processing system according to claim 17, wherein the specific location includes a location at which selection, by the first user, of items drawable in association with the first display medium becomes possible.
- The information processing system according to claim 17 or 18, wherein the operation areas along the second curved surface are drawn so as to overlap, on the rear side as viewed from the viewpoint for the second user, the operation areas drawn along the first curved surface, and
the second input processing unit causes, based on a predetermined input detected by the second input detection unit, the fourth drawing unit to change the arrangement of the second user interface such that the operation areas that had been drawn along the first curved surface follow the second curved surface and the operation areas that had been drawn along the second curved surface follow the first curved surface.
- The information processing system according to any one of claims 16 to 19, wherein the various second processes are associated with the plurality of planar operation areas.
- The information processing system according to any one of claims 13 to 20, wherein the fifth image area includes an image area of a layer, among the plurality of layers, relating to a background, and an image area of a layer, among the plurality of layers, located closer to the viewpoint for the first user or the second user than the background, the latter image area containing information on at least some of the plurality of planar operation areas.
- The information processing system according to any one of claims 13 to 21, wherein the second drawing processing unit is realized by the first terminal device, an external processing device communicably connected to the first terminal device, the second terminal device communicably connected to the first terminal device, a terminal device of the second user communicably connected to the first terminal device and different from the second terminal device, or any combination thereof.
- The information processing system according to any one of claims 13 to 22, wherein the second virtual space is cylindrical or celestial-sphere shaped.
- An information processing method executed by a computer, comprising:
a first detection step of detecting a state of a first user who distributes content including an image of a first virtual space;
a first drawing processing step of drawing the image of the first virtual space so as to be visible to the first user or a second user via a first terminal device of the first user or a second terminal device of the second user who views the content; and
a first storage step of storing a first display medium associated with the first user, wherein
the image of the first virtual space has a plurality of layers,
the first drawing processing step comprises:
a first drawing step of drawing a first image area of a layer, among the plurality of layers, relating to the first display medium; and
a second drawing step of drawing a second image area of a layer, among the plurality of layers, relating to a first user interface, and
the first drawing step changes a state of the first display medium in the first image area based on the state of the first user detected in the first detection step.
- The information processing method according to claim 24, further comprising a second drawing processing step of drawing an image of a second virtual space visible to the first user or the second user via the first terminal device or the second terminal device, wherein
the image of the second virtual space has a plurality of layers,
the second drawing processing step comprises:
a fourth drawing step of drawing a fourth image area of a layer, among the plurality of layers, relating to a second user interface; and
a fifth drawing step of drawing a fifth image area of a layer, among the plurality of layers, located behind the fourth image area as viewed from a viewpoint for the first user or the second user, and
the fourth drawing step draws a plurality of planar operation areas relating to the second user interface in a manner arranged along a spherical or curved surface.
- An information processing program causing a computer to execute:
a first detection function of detecting a state of a first user who distributes content including an image of a first virtual space;
a first drawing processing function of drawing the image of the first virtual space so as to be visible to the first user or a second user via a first terminal device of the first user or a second terminal device of the second user who views the content; and
a first storage function of storing a first display medium associated with the first user, wherein
the image of the first virtual space has a plurality of layers,
the first drawing processing function comprises:
a first drawing function of drawing a first image area of a layer, among the plurality of layers, relating to the first display medium; and
a second drawing function of drawing a second image area of a layer, among the plurality of layers, relating to a first user interface, and
the first drawing function changes a state of the first display medium in the first image area based on the state of the first user detected by the first detection function.
- The information processing program according to claim 26, further causing the computer to execute a second drawing processing function of drawing an image of a second virtual space visible to the first user or the second user via the first terminal device or the second terminal device, wherein
the image of the second virtual space has a plurality of layers,
the second drawing processing function comprises:
a fourth drawing function of drawing a fourth image area of a layer, among the plurality of layers, relating to a second user interface; and
a fifth drawing function of drawing a fifth image area of a layer, among the plurality of layers, located behind the fourth image area as viewed from a viewpoint for the first user or the second user, and
the fourth drawing function draws a plurality of planar operation areas relating to the second user interface in a manner arranged along a spherical or curved surface.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22779789.1A EP4319144A1 (en) | 2021-03-30 | 2022-03-03 | Information processing system, information processing method, and information processing program |
CN202280005720.XA CN116114252A (zh) | 2021-03-30 | 2022-03-03 | 信息处理系统、信息处理方法、信息处理程序 |
JP2023510714A JP7449523B2 (ja) | 2021-03-30 | 2022-03-03 | 情報処理システム、情報処理方法、情報処理プログラム |
US18/215,201 US20230368464A1 (en) | 2021-03-30 | 2023-06-28 | Information processing system, information processing method, and information processing program |
JP2024024798A JP2024063062A (ja) | 2021-03-30 | 2024-02-21 | 情報処理システム、情報処理方法、情報処理プログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021057215 | 2021-03-30 | ||
JP2021-057215 | 2021-03-30 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/215,201 Continuation US20230368464A1 (en) | 2021-03-30 | 2023-06-28 | Information processing system, information processing method, and information processing program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022209564A1 true WO2022209564A1 (ja) | 2022-10-06 |
Family
ID=83458554
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/009184 WO2022209564A1 (ja) | 2021-03-30 | 2022-03-03 | 情報処理システム、情報処理方法、情報処理プログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230368464A1 (ja) |
EP (1) | EP4319144A1 (ja) |
JP (2) | JP7449523B2 (ja) |
CN (1) | CN116114252A (ja) |
WO (1) | WO2022209564A1 (ja) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6770598B2 (ja) | 2019-01-23 | 2020-10-14 | 株式会社コロプラ | ゲームプログラム、方法、および情報処理装置 |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH053485B2 (ja) | 1986-03-31 | 1993-01-14 | Shibaura Eng Works Ltd | |
WO2019088273A1 (ja) * | 2017-11-04 | 2019-05-09 | ナーブ株式会社 | 画像処理装置、画像処理方法、及び画像処理プログラム |
JP2021005237A (ja) * | 2019-06-26 | 2021-01-14 | 株式会社コロプラ | プログラム、情報処理方法、及び情報処理装置 |
JP2021020074A (ja) * | 2020-09-24 | 2021-02-18 | 株式会社コロプラ | ゲームプログラム、方法、および情報処理装置 |
Non-Patent Citations (1)
Title |
---|
MIRRATIV INC., MIRRATIV, 18 December 2019 (2019-12-18), Retrieved from the Internet <URL:https:/www.mirrativ.com> |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116320364A (zh) * | 2023-05-25 | 2023-06-23 | 四川中绳矩阵技术发展有限公司 | 一种基于多层显示的虚拟现实拍摄方法及显示方法 |
CN116320364B (zh) * | 2023-05-25 | 2023-08-01 | 四川中绳矩阵技术发展有限公司 | 一种基于多层显示的虚拟现实拍摄方法及显示方法 |
Also Published As
Publication number | Publication date |
---|---|
JP2024063062A (ja) | 2024-05-10 |
JP7449523B2 (ja) | 2024-03-14 |
JPWO2022209564A1 (ja) | 2022-10-06 |
US20230368464A1 (en) | 2023-11-16 |
EP4319144A1 (en) | 2024-02-07 |
CN116114252A (zh) | 2023-05-12 |
Legal Events
- 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22779789; Country of ref document: EP; Kind code of ref document: A1
- ENP | Entry into the national phase | Ref document number: 2023510714; Country of ref document: JP; Kind code of ref document: A
- WWE | Wipo information: entry into national phase | Ref document number: 2022779789; Country of ref document: EP
- NENP | Non-entry into the national phase | Ref country code: DE
- ENP | Entry into the national phase | Ref document number: 2022779789; Country of ref document: EP; Effective date: 20231030