WO2022255262A1 - Content providing system, content providing method, and content providing program - Google Patents
Content providing system, content providing method, and content providing program
- Publication number
- WO2022255262A1 (PCT/JP2022/021780)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- content
- avatar
- action
- user
- specific action
- Prior art date
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/70—Game security or game management aspects
- A63F13/79—Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
- G09B5/065—Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/08—Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
- G09B5/10—Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations all student stations being capable of presenting the same information simultaneously
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/131—Protocols for games, networked simulations or virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/816—Monomedia components thereof involving special video data, e.g 3D video
Definitions
- One aspect of the present disclosure relates to a content providing system, a content providing method, and a content providing program.
- Patent Document 1 discloses a mechanism by which a viewer of content that uses a virtual space (a virtual three-dimensional space) can add his or her own avatar object to the virtual space, and the added avatar object is displayed on the terminals of other viewers when the content is reproduced.
- With the above mechanism, a viewer watching content that another viewer has viewed in the past can view the content together with the other viewer's avatar, and can thereby obtain a sense of solidarity, as if viewing the content together with the other viewer.
- However, the above mechanism leaves room for further improvement in terms of convenience for users viewing content.
- An object of one aspect of the present disclosure is to provide a content providing system, a content providing method, and a content providing program that can effectively improve convenience for users viewing content.
- A content providing system according to one aspect of the present disclosure includes at least one processor. The at least one processor places, in a virtual space, a first avatar corresponding to a first user who views predetermined content representing the virtual space; generates action log information recording action information, which indicates the actions of the first avatar operated by the first user while the first user views the content, and specific action information, which indicates the timing at which the first avatar performed a predetermined specific action and the content of that specific action; when a second user views the content after the first user, reproduces the content and reproduces the actions of the first avatar based on the action information included in the action log information; and, by referring to the specific action information included in the action log information, places a display object indicating the content of the specific action in association with the first avatar at the timing at which the first avatar being reproduced performs the specific action.
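- As a concrete illustration of the data described above, the following TypeScript sketch models the action log information with its two parts (action information and specific action information). All type and field names are hypothetical; the disclosure does not prescribe any data format.

```typescript
// Hypothetical data model for the action log information; all names are
// illustrative, as the disclosure does not prescribe a format.

/** State of the first avatar at one sampled playback position. */
interface AvatarSnapshot {
  playbackTimeMs: number; // playback position of the content ("0:00" = 0)
  partPositions: Record<string, [number, number, number]>; // coordinates per avatar part
}

/** One specific action: when it happened and what it was. */
interface SpecificAction {
  playbackTimeMs: number; // timing at which the specific action was performed
  detail: string;         // content of the action, e.g. the selected item object
}

/** Action log information generated while the first user views the content. */
interface ActionLogInfo {
  avatarId: string;                  // identifies the first avatar
  contentId: string;                 // identifies the viewed content
  recordedAt: number;                // when the log was generated (used later for priority)
  actionInfo: AvatarSnapshot[];      // used to reproduce the avatar's motion
  specificActions: SpecificAction[]; // used to place display objects on replay
}

const example: ActionLogInfo = {
  avatarId: "A1",
  contentId: "lesson-001",
  recordedAt: Date.now(),
  actionInfo: [{ playbackTimeMs: 0, partPositions: { head: [0, 1.6, 0] } }],
  specificActions: [{ playbackTimeMs: 1_800_000, detail: "mammoth" }],
};
console.log(example.specificActions.length); // 1
```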
- According to the above configuration, the actions of the first avatar operated by the first user during viewing are reproduced for the second user who views the content after the first user.
- This allows the second user to feel a sense of solidarity with the other user (first user) while viewing the content.
- Furthermore, a display object indicating the details of a specific action performed by the first avatar during content viewing is arranged in association with the first avatar at the timing at which that specific action was performed.
- the content of the specific action performed by the first avatar can be visually presented to the second user at an appropriate timing as reference information about actions to be taken while viewing the content.
- FIG. 1 is a diagram showing an example of the application of a content providing system according to an embodiment.
- FIG. 2 is a diagram showing an example of a hardware configuration of the content providing system of FIG. 1.
- FIG. 3 is a diagram showing an example of a functional configuration related to the avatar recording process of the content providing system of FIG. 1.
- FIG. 4 is a sequence diagram showing an example of the operation of the avatar recording process of the content providing system of FIG. 1.
- FIG. 5 is a diagram showing an example of a content image provided to the first user.
- FIGS. 6 and 7 are diagrams each showing an example of a specific action of an avatar.
- FIG. 8 is a diagram showing an example of a functional configuration related to the avatar reproduction process of the content providing system of FIG. 1.
- FIG. 9 is a sequence diagram showing an example of the operation of the avatar reproduction process of the content providing system of FIG. 1.
- FIG. 10 is a diagram showing an example of a display object displayed in a content image.
- FIG. 11 is a diagram showing a display example of a first avatar and a third avatar displayed in a content image.
- a content providing system is a computer system that distributes content to users.
- Content is human-perceivable information provided by a computer or computer system.
- Electronic data representing content is called content data.
- the content expression format is not limited.
- Content may be represented by, for example, images (eg, photographs, videos, etc.), documents, sounds, music, or a combination of any two or more of these elements.
- Content can be used for various aspects of information transfer or communication.
- the content can be used for various occasions or purposes such as entertainment, news, education, medical care, games, chats, commercial transactions, lectures, seminars, and training.
- Distribution is the process of transmitting information to users via a communication network or broadcast network.
- the content providing system provides content to users (viewers) by transmitting content data to user terminals.
- content is provided by a distributor.
- A distributor is a person who seeks to convey information to viewers, and is the sender of the content.
- a user (viewer) is a person who tries to obtain the information and is a content user.
- the content is expressed using at least an image that expresses the virtual space. That is, the content data includes at least a content image representing the content.
- a content image is an image from which a person can visually recognize some information.
- a content image may be a moving image (video) or a still image.
- the content image represents a virtual space in which virtual objects exist.
- a virtual object is an object that does not exist in the real world and is represented only on a computer system.
- a virtual object is represented by two-dimensional or three-dimensional computer graphics (CG) using image materials independent of actual images.
- the representation method of the virtual object is not limited.
- The virtual object may be represented using animation materials, or may be represented realistically based on photographed images.
- a virtual space is a virtual two-dimensional or three-dimensional space represented by images displayed on a computer.
- a content image is, for example, an image showing a landscape seen from a virtual camera set in the virtual space.
- A virtual camera is a virtual viewpoint set in the virtual space so as to correspond to the line of sight of the user viewing the content image.
- the content image or virtual space may further include real objects, which are objects that actually exist in the real world.
- An example of a virtual object is a user's avatar.
- An avatar is represented not by a photographed image of the person but by two-dimensional or three-dimensional computer graphics (CG) using image materials independent of photographed images.
- the avatar expression method is not limited.
- Avatars may be represented using animation materials, or may be represented realistically based on photographed images.
- the position and orientation of the virtual camera described above can be set to match the viewpoint and line of sight of the avatar.
- the user is provided with a first-person-view content image. Thereby, the user can visually recognize the content image corresponding to the field of view from the viewpoint (virtual camera) of the avatar placed in the virtual space.
- users can experience augmented reality (AR), virtual reality (VR), or mixed reality (MR).
- the content provision system may be used for time-shifting, which allows content to be viewed in a given period after real-time delivery.
- the content providing system may be used for on-demand distribution that allows content to be viewed at any timing.
- a content providing system distributes content expressed using content data generated and stored in the past.
- the expression "transmitting" data or information from a first computer to a second computer means transmission for finally delivering data or information to the second computer. That is, the phrase includes cases where another computer or communication device relays the data or information in its transmission.
- the content may be educational content, in which case the content data is educational content data.
- Educational content is, for example, content used by a teacher to give lessons to students.
- A teacher is someone who teaches academics, arts, or the like, and a student is someone who receives that instruction.
- a teacher is an example of a distributor, and a student is an example of a viewer.
- a teacher may be a person with a teaching license or a person without a teaching license.
- Teaching means that a teacher teaches students academics, arts, etc.
- the age and affiliation of each teacher and student are not limited, and therefore the purpose and use scene of the educational content are also not limited.
- educational content may be used in various schools such as nursery schools, kindergartens, elementary schools, junior high schools, high schools, universities, graduate schools, vocational schools, preparatory schools, online schools, etc., and may be used in places or situations other than schools.
- educational content can be used for various purposes such as early childhood education, compulsory education, higher education, and lifelong learning.
- the educational content includes an avatar corresponding to the teacher or student, meaning that the avatar appears in at least some scenes of the educational content.
- FIG. 1 is a diagram showing an example of application of a content providing system 1 according to an embodiment.
- the content providing system 1 includes a server 10, user terminals 20 (user terminals 20A, 20B, 20C), a content database 30, and an operation log information database 40.
- the server 10 is a computer that distributes content data to the user terminals 20 .
- the server 10 is connected via a communication network N with at least one user terminal 20 .
- One user terminal 20 may be shared by multiple users, or one user terminal 20 may be prepared for each user.
- the server 10 is connected to at least three user terminals 20A, 20B, 20C.
- the server 10 is also connected to the content database 30 and the operation log information database 40 .
- the configuration of the communication network N is not limited.
- the communication network N may include the Internet, or may include an intranet.
- the user terminal 20 is a computer used by content viewers (that is, users who use content). In this embodiment, a user corresponds to a student who uses educational content.
- the user terminal 20 has a function of accessing the content providing system 1 (server 10) to receive and display content data.
- the type and configuration of the user terminal 20 are not limited.
- The user terminal 20 may be configured as a mobile terminal such as a high-performance mobile phone (smartphone), a tablet terminal, a wearable terminal (e.g., a head-mounted display (HMD) or smart glasses), a laptop personal computer, or a mobile phone.
- the user terminal 20 may be configured including a stationary terminal such as a desktop personal computer.
- the user terminal 20 may be configured by a combination of two or more types of terminals exemplified above.
- a user can use (view) content by operating the user terminal 20 and logging into the content providing system 1 .
- the user can have various experiences in the virtual space represented by the content via his/her own avatar.
- the content database 30 is a non-temporary storage medium or storage device that stores generated content data.
- the content database 30 can be said to be a library of existing content.
- Content data is stored in content database 30 by any computer, such as server 10 or another computer.
- Content data is stored in the content database 30 after being associated with a content ID that uniquely identifies the content.
- content data includes virtual space data, model data, and scenarios.
- Virtual space data is electronic data that indicates the virtual space that constitutes the content.
- the virtual space data may include information indicating the arrangement of individual virtual objects that make up the background, the position of the virtual camera, the position of the virtual light source, or the like.
- Model data is electronic data used to define the specifications of the virtual objects that make up the content.
- a specification of a virtual object refers to the conventions or methods for controlling the virtual object.
- the virtual object specification includes at least one of the virtual object's configuration (eg, shape and size), behavior, and audio.
- the data structure of the avatar model data is not limited and may be arbitrarily designed.
- The model data may include information about the multiple joints and multiple bones that make up the avatar, graphic data representing the appearance design of the avatar, attributes of the avatar, and an avatar ID that is an identifier of the avatar. Examples of information about joints and bones include the three-dimensional coordinates of individual joints and combinations of adjacent joints (i.e., bones).
- the configuration of the information is not limited to these and may be arbitrarily designed.
- An avatar's attributes are any information set to characterize the avatar, and may include, for example, nominal dimensions, voice quality, or personality.
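- As one concrete illustration of such model data, the following TypeScript sketch includes joints, bones, appearance data, attributes, and an avatar ID. Every name here is hypothetical, since the disclosure states the structure may be arbitrarily designed.

```typescript
// Hypothetical avatar model data; the disclosure states the structure may be
// arbitrarily designed, so every name here is illustrative.

type Vec3 = [number, number, number];

interface Joint {
  name: string;   // e.g. "leftElbow"
  position: Vec3; // three-dimensional coordinates of the joint
}

interface Bone {
  from: string; // a pair of adjacent joints forms one bone
  to: string;
}

interface AvatarAttributes {
  nominalHeightM?: number; // nominal dimensions
  voiceQuality?: string;
  personality?: string;
}

interface AvatarModelData {
  avatarId: string;        // identifier of the avatar
  joints: Joint[];
  bones: Bone[];
  graphicAssetUrl: string; // graphic data representing the appearance design
  attributes: AvatarAttributes;
}

const model: AvatarModelData = {
  avatarId: "avatar-001",
  joints: [
    { name: "shoulder", position: [0, 1.4, 0] },
    { name: "elbow", position: [0.3, 1.2, 0] },
  ],
  bones: [{ from: "shoulder", to: "elbow" }],
  graphicAssetUrl: "assets/avatar-001.glb",
  attributes: { nominalHeightM: 1.6, voiceQuality: "soft" },
};
console.log(model.bones.length); // 1
```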
- A scenario is electronic data that defines the behavior of individual virtual objects, virtual cameras, or virtual light sources over time in the virtual space. The scenario can be said to be information that determines the story of the content. The behavior of a virtual object is not limited to visually perceptible movement, and may include auditorily perceivable sound production.
- the scenario includes motion data indicating when and how individual moving virtual objects will behave.
- Content data may include information about real objects.
- content data may include a photographed image of a real object. If the content data includes a physical object, the scenario may further define when and where the physical object should be displayed.
- the action log information database 40 is a non-temporary storage medium or storage device that stores action log information that records actions of avatars operated by users who viewed content in the past while they were watching content. Details of the operation log information will be described later.
- each database is not limited.
- at least one of the content database 30 and the operation log information database 40 may be provided in a computer system separate from the content providing system 1 or may be a component of the content providing system 1 .
- FIG. 2 is a diagram showing an example of a hardware configuration of the content providing system 1. FIG. 2 shows a server computer 100 functioning as the server 10 and a terminal computer 200 functioning as the user terminal 20.
- the server computer 100 includes a processor 101, a main storage section 102, an auxiliary storage section 103, and a communication section 104 as hardware components.
- the processor 101 is a computing device that executes an operating system and application programs. Examples of processors include a CPU (Central Processing Unit) and a GPU (Graphics Processing Unit), but the type of processor 101 is not limited to these.
- The processor 101 may be a combination of these processors and dedicated circuits.
- the dedicated circuit may be a programmable circuit such as an FPGA (Field-Programmable Gate Array), or may be another type of circuit.
- the main storage unit 102 is a device that stores programs for realizing the server 10, calculation results output from the processor 101, and the like.
- the main storage unit 102 is composed of, for example, ROM (Read Only Memory) or RAM (Random Access Memory).
- the auxiliary storage unit 103 is generally a device capable of storing a larger amount of data than the main storage unit 102.
- the auxiliary storage unit 103 is composed of a non-volatile storage medium such as a hard disk or flash memory.
- the auxiliary storage unit 103 stores a server program P1 for causing the server computer 100 to function as the server 10 and various data.
- the auxiliary storage unit 103 may store data relating to at least one of a virtual object such as an avatar and a virtual space.
- a content providing program can be implemented as a server program P1.
- the communication unit 104 is a device that performs data communication with other computers via the communication network N.
- the communication unit 104 is configured by, for example, a network card or a wireless communication module.
- Each functional element of the server 10 is realized by reading the server program P1 into the processor 101 or the main storage unit 102 and causing the processor 101 to execute it.
- The server program P1 includes code for realizing each functional element of the server 10. The processor 101 operates the communication unit 104 according to the server program P1 and reads and writes data in the main storage unit 102 or the auxiliary storage unit 103. Each functional element of the server 10 is realized by this processing.
- the server 10 can be composed of one or more computers. When a plurality of computers are used, one server 10 is logically configured by connecting the plurality of computers to each other via a communication network.
- the terminal computer 200 includes a processor 201, a main storage unit 202, an auxiliary storage unit 203, a communication unit 204, an input interface 205, and an output interface 206 as hardware components.
- the processor 201 is an arithmetic device that executes an operating system and application programs.
- Processor 201 can be, for example, a CPU or GPU, but the type of processor 201 is not limited to these.
- the main storage unit 202 is a device that stores programs for realizing the user terminal 20, calculation results output from the processor 201, and the like.
- the main storage unit 202 is configured by, for example, ROM or RAM.
- the auxiliary storage unit 203 is generally a device capable of storing a larger amount of data than the main storage unit 202.
- the auxiliary storage unit 203 is configured by a non-volatile storage medium such as a hard disk or flash memory.
- the auxiliary storage unit 203 stores a client program P2 for causing the terminal computer 200 to function as the user terminal 20 and various data.
- the auxiliary storage unit 203 may store data relating to at least one of a virtual object such as an avatar and a virtual space.
- a content providing program can be implemented as a client program P2.
- the communication unit 204 is a device that performs data communication with other computers via the communication network N.
- the communication unit 204 is configured by, for example, a network card or a wireless communication module.
- the input interface 205 is a device that accepts data based on user's operations or actions.
- the input interface 205 is composed of at least one of a controller, keyboard, operation buttons, pointing device, microphone, sensor, and camera.
- a keyboard and operation buttons may be displayed on the touch panel.
- The type of the input interface 205 is not limited, and neither is the data input to it.
- input interface 205 may accept data entered or selected by a keyboard, operating buttons, or pointing device.
- input interface 205 may accept voice data input by a microphone.
- the input interface 205 may accept image data (for example, video data or still image data) captured by a camera.
- The input interface 205 may accept, as motion data, non-verbal activity of the user (for example, gaze, or movements of the user's head or of body parts other than the head, such as the hands) detected by a motion capture function using a sensor or camera.
- the output interface 206 is a device that outputs data processed by the terminal computer 200 .
- the output interface 206 is configured by at least one of a monitor, touch panel, HMD, and speaker.
- Display devices such as monitors, touch panels, and HMDs display the processed data on their screens.
- the content image is output and displayed on the HMD.
- a speaker outputs the sound indicated by the processed audio data.
- Each functional element of the user terminal 20 is realized by reading the client program P2 into the processor 201 or the main storage unit 202 and causing the processor 201 to execute it.
- The client program P2 includes code for realizing each functional element of the user terminal 20. The processor 201 operates the communication unit 204, the input interface 205, or the output interface 206 according to the client program P2, and reads and writes data in the main storage unit 202 or the auxiliary storage unit 203. Each functional element of the user terminal 20 is realized by this processing.
- At least one of the server program P1 and the client program P2 may be provided after being fixedly recorded on a tangible recording medium such as a CD-ROM, DVD-ROM, or semiconductor memory. Alternatively, at least one of these programs may be provided via a communication network as a data signal superimposed on a carrier wave. These programs may be provided separately or together.
- the content providing system 1 is mainly configured to be able to execute avatar recording processing and avatar reproducing processing.
- the avatar recording process is a process of recording the actions of the first avatar operated by the first user while the first user is viewing the content.
- The avatar reproduction process is a process that, when a second user views the content after the avatar recording process has been performed for at least one first user, reproduces, in the content image provided to the second user, the actions of the first avatar corresponding to the at least one first user as recorded by the avatar recording process.
- the avatar is a humanoid object (see FIGS. 10 and 11), but the form of the avatar is not limited to a specific form.
- FIG. 3 is a diagram showing an example of a functional configuration related to the avatar recording process of the content providing system 1. Here, the case where the actions of the first avatar corresponding to a first user who uses the user terminal 20A are recorded will be described as an example.
- the server 10 includes a receiving unit 11, a content transmitting unit 12, and a log generating unit 13 as functional elements related to avatar recording processing.
- the receiving unit 11 receives data signals transmitted from the user terminal 20 .
- the content transmission unit 12 transmits content data to the user terminal 20 in response to a request from the user.
- the log generation unit 13 generates action log information of the avatar based on the action information of the avatar acquired from the user terminal 20 and stores it in the action log information database 40 .
- the user terminal 20A includes a request unit 21, a reception unit 22, a display control unit 23, and an action information transmission unit 24 as functional elements related to avatar recording processing.
- the request unit 21 requests the server 10 to perform various controls related to content.
- the receiving unit 22 receives content data.
- the display control unit 23 processes the received content data and displays the content on the display device of the user terminal 20A.
- the action information transmitting unit 24 transmits to the server 10 action information indicating the action of the first avatar operated by the first user.
- FIG. 4 is a sequence diagram showing an example of the operation of the avatar recording process of the content providing system 1.
- In step S101, the request unit 21 of the user terminal 20A transmits a content request to the server 10.
- a content request is a data signal for requesting the server 10 to reproduce content (that is, to start viewing content).
- the content request is received by the receiving unit 11 of the server 10 .
- In step S102, the content transmission unit 12 of the server 10 reads the content data from the content database 30 in response to the content request from the user terminal 20A, and transmits the content data to the user terminal 20A.
- the content data is received by the receiving unit 22 of the user terminal 20A.
- In step S103, the display control unit 23 of the user terminal 20A reproduces the content.
- the content delivered by the content providing system 1 is educational content showing a class scene.
- the content includes lecture data showing a video of a lesson scene by the teacher.
- Lecture data is, for example, three-dimensional (depth) video data.
- a virtual space represented by content includes a background such as a school classroom, and an area where lecture data (video) is displayed at a predetermined position in the virtual space.
- reproduction of content means reproduction of the lecture data in the virtual space.
- In step S104, the display control unit 23 arranges the first avatar corresponding to the first user in the virtual space. For example, the display control unit 23 arranges the first avatar at a position in front of the lecture data display area (that is, at a position where the lecture data can be visually recognized) in a virtual space that simulates a classroom.
- In step S105, the display control unit 23 generates a content image (content video) to present to the first user based on the viewpoint of the first avatar placed in the virtual space (that is, the virtual viewpoint of the first user set in the virtual space).
- Note that in the present embodiment the user terminal 20A (display control unit 23) executes the processing (rendering) for generating the content image for the first user based on the content data and the first user's virtual viewpoint, but the server 10 may perform the rendering instead.
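- The following is a minimal TypeScript sketch of how a virtual camera could be aligned with the avatar's viewpoint for this rendering step. The vector math and names (Vec3, cameraFromAvatar) are generic 3D conventions assumed for illustration, not taken from the disclosure.

```typescript
// Minimal sketch: deriving the virtual camera for first-person rendering from
// the avatar's viewpoint. Vec3/Camera and the look-at convention are generic
// 3D assumptions, not taken from the disclosure.

type Vec3 = [number, number, number];

interface Camera {
  position: Vec3; // placed at the avatar's eye position
  target: Vec3;   // point along the avatar's line of sight
  up: Vec3;
}

function add(a: Vec3, b: Vec3): Vec3 {
  return [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
}

/** Align the virtual camera with the avatar's viewpoint and line of sight. */
function cameraFromAvatar(eyePosition: Vec3, gazeDirection: Vec3): Camera {
  return {
    position: eyePosition,
    target: add(eyePosition, gazeDirection),
    up: [0, 1, 0],
  };
}

// Each frame, the renderer (on the user terminal or the server) would draw the
// virtual space from this camera to produce the first-person content image.
console.log(cameraFromAvatar([0, 1.6, 2.0], [0, 0, -1]));
```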
- FIG. 5 is a diagram showing an example of the content image IM1 provided to the first user.
- the content image IM1 includes part of the area 51 where the lecture data (video) is displayed (that is, the range included in the visual field from the first user's virtual viewpoint).
- The content image IM1 also displays a part (the hand portion) of the first avatar A1 operated by the first user, and a tool object 52 (for example, an object imitating a book) that can be operated via the first avatar A1.
- the first user operates the first avatar A1 by operating the user terminal 20A.
- the method by which the first user operates the first avatar A1 is not limited to a specific method.
- the first user may operate the first avatar A1 by operating the controller (an example of the input interface 205) of the user terminal 20A.
- Alternatively, the first user may operate the first avatar A1 by gestures (that is, by the user's hand movements detected by a sensor).
- Examples of operations that can be performed on the avatar include an operation to move the avatar's hand in the virtual space and an operation to perform some processing on a predetermined object in the virtual space via the avatar's hand.
- The operable part of the avatar is not limited to the avatar's hands. For example, if sensors are provided at a plurality of parts of the user's body so that the movement of each part can be sensed, the parts of the avatar corresponding to those sensor-equipped body parts may each move according to the movement of the user's body.
- In step S107, the action information transmission unit 24 of the user terminal 20A transmits, to the server 10, action information indicating the actions of the first avatar A1 operated in step S106.
- the motion information is associated with the reproduction position of the content (lecture data) (for example, the reproduction time when the beginning of the lecture data is set as the reference "0:00").
- The action information of the first avatar A1 indicates the state of the first avatar A1 (for example, the position coordinates of each part constituting the first avatar A1) at each time point, sampled at predetermined time intervals from the start of content playback to the end of playback. That is, the action information of the first avatar A1 is information for reproducing the actions of the first avatar A1 from the start of content playback to the end of playback.
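- The following TypeScript sketch illustrates this sampling, assuming a hypothetical pose source (getAvatarPartPositions) and a 100 ms sampling interval; neither is specified in the disclosure.

```typescript
// Minimal sketch of sampling the first avatar's state at fixed intervals
// keyed to the content playback position. The pose source and the 100 ms
// interval are assumptions for illustration.

interface AvatarSnapshot {
  playbackTimeMs: number;
  partPositions: Record<string, [number, number, number]>;
}

// Stub standing in for controller/motion-capture input on the user terminal.
function getAvatarPartPositions(): Record<string, [number, number, number]> {
  return { head: [0, 1.6, 0], leftHand: [-0.3, 1.2, 0.2], rightHand: [0.3, 1.2, 0.2] };
}

class MotionRecorder {
  private snapshots: AvatarSnapshot[] = [];

  /** Call once per sampling tick with the current playback position. */
  sample(playbackTimeMs: number): void {
    this.snapshots.push({ playbackTimeMs, partPositions: getAvatarPartPositions() });
  }

  /** Returns action information sufficient to replay the avatar start to end. */
  finish(): AvatarSnapshot[] {
    return this.snapshots;
  }
}

// Example: sample every 100 ms over the first second of playback.
const recorder = new MotionRecorder();
for (let t = 0; t <= 1000; t += 100) recorder.sample(t);
console.log(recorder.finish().length); // 11
```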
- In step S108, the log generation unit 13 of the server 10 generates action log information of the first avatar A1 based on the action information of the first avatar A1 received from the user terminal 20A (action information transmission unit 24).
- the action log information includes action information of the first avatar A1 as well as specific action information indicating the timing at which the first avatar A1 performed a predetermined specific action and the details of the specific action.
- the log generation unit 13 extracts specific actions from the action information of the first avatar A1, and generates specific action information for each extracted specific action. Then, the log generation unit 13 generates action log information including the action information of the first avatar A1 and the specific action information for each of the extracted specific actions.
- the specific action is an action of selecting a predetermined item object from among one or more item objects registered in advance as items used for learning in class, and generating the selected item object in the virtual space.
- When the first user aligns the hand of the first avatar A1 with the position of the tool object 52 and performs an operation of selecting the tool object 52, a list of one or more (eight in this example) item objects 53 registered in advance as items used for learning in class is displayed.
- In this example, models of ancient creatures such as mammoths, ammonites, and Pikaia are registered as item objects 53.
- The first user, for example, positions the hand of the first avatar A1 on the item object 53 to be selected and performs an operation of selecting it. FIG. 6 shows an example of this list of item objects 53.
- As shown in FIG. 7, the selected item object 53 (a mammoth model in this example) is then generated in front of the first avatar A1.
- the first user can, for example, freely move the item object 53 via the first avatar A1 and observe the item object 53 from various angles.
- the type of the item object 53 is not limited to the above example.
- Various types of item objects 53 can be used according to the content of the lesson. For example, when content (lecture data) shows a scene of a science (biology) class, experimental tools such as beakers, flasks, and microscopes may be registered as item objects 53 . Also, if the content shows a scene of a mathematics class, a ruler, a protractor, a compass, etc. may be registered as the item object 53 .
- When the log generation unit 13 extracts the above specific action (that is, the action of selecting the tool object 52 and then selecting a predetermined item object 53) from the action information of the first avatar A1, it generates specific action information that associates the timing at which the specific action was performed (for example, the playback position of the content at that moment) with the details of the specific action (in this embodiment, information indicating the selected item object 53). One possible form of this extraction is sketched below.
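- The following TypeScript sketch illustrates one way the extraction could work, assuming a hypothetical stream of raw operation events (RawEvent); the disclosure only states that the specific action is extracted from the action information.

```typescript
// Minimal sketch of how the log generation unit could derive specific action
// information. RawEvent and its event kinds are hypothetical.

interface RawEvent {
  playbackTimeMs: number;
  kind: "toolSelected" | "itemSelected" | "move";
  itemId?: string; // present for "itemSelected"
}

interface SpecificAction {
  playbackTimeMs: number; // timing: content playback position of the action
  detail: string;         // content: which item object was selected
}

function extractSpecificActions(events: RawEvent[]): SpecificAction[] {
  const actions: SpecificAction[] = [];
  let toolOpen = false;
  for (const e of events) {
    if (e.kind === "toolSelected") {
      toolOpen = true; // list of item objects shown
    } else if (e.kind === "itemSelected" && toolOpen && e.itemId) {
      actions.push({ playbackTimeMs: e.playbackTimeMs, detail: e.itemId });
      toolOpen = false; // item object generated in the virtual space
    }
  }
  return actions;
}

// Example: the user opens the tool object and picks the mammoth model.
console.log(extractSpecificActions([
  { playbackTimeMs: 1_790_000, kind: "toolSelected" },
  { playbackTimeMs: 1_800_000, kind: "itemSelected", itemId: "mammoth" },
]));
// -> [ { playbackTimeMs: 1800000, detail: "mammoth" } ]
```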
- In step S109, the log generation unit 13 stores the action log information generated in step S108 in the action log information database 40.
- the process of saving the action log information about the first avatar operated by the first user while viewing the content (that is, the avatar recording process) is completed.
- an avatar recording process can be executed for each of the plurality of first users who view the content at any timing.
- action log information for a plurality of first avatars is stored in the action log information database 40 .
- FIG. 8 is a diagram showing an example of a functional configuration related to avatar reproduction processing of the content providing system 1.
- Here, a case where a second user who uses the user terminal 20B views the content after the action log information of one or more first avatars has been saved by the avatar recording process described above (that is, after one or more first users have viewed the content) will be described as an example.
- As functional elements related to the avatar reproduction process, the server 10 includes an avatar information transmission unit 14 in addition to the reception unit 11 and the content transmission unit 12 described above.
- the avatar information transmission unit 14 transmits to the user terminal 20 the action log information of one or more first users who have viewed content in the past.
- When there is a third user viewing the content at the same time, the avatar information transmission unit 14 also transmits the action information of the third avatar corresponding to the third user to the user terminal 20.
- the avatar information transmission unit 14 transmits the action information of the third avatar received from the user terminal 20C to the user terminal 20B.
- the user terminal 20B includes the above-described request unit 21, reception unit 22, and display control unit 23 as functional elements related to avatar reproduction processing.
- the requesting unit 21 and the receiving unit 22 have functions similar to those of the avatar recording process.
- the display control unit 23 executes processing specific to the avatar reproduction processing in addition to the processing described in the avatar recording processing.
- FIG. 9 is a sequence diagram showing an example of the operation of the avatar reproduction process of the content providing system 1.
- In step S201, the request unit 21 of the user terminal 20B transmits a content request to the server 10.
- a content request is a data signal for requesting the server 10 to reproduce content (that is, to start viewing content).
- the content request is received by the receiving unit 11 of the server 10 .
- In step S202, the content transmission unit 12 of the server 10 reads the content data from the content database 30 in response to the content request from the user terminal 20B, and transmits the content data to the user terminal 20B.
- the content data is received by the receiving unit 22 of the user terminal 20B.
- In step S203, the avatar information transmission unit 14 of the server 10 selects the avatars to be reproduced. For example, consider a case where a plurality of first users have, in the past, viewed the same content that the second user intends to view, and action log information for the plurality of first avatars is stored in the action log information database 40. In this case, if all of the first avatars were reproduced (displayed) in the content (virtual space) provided to the second user, many first avatars would appear in the content image provided to the second user, and the information displayed in the content image could become cluttered. Therefore, the avatar information transmission unit 14 selects, as reproduction targets, a predetermined number (for example, three) of the first avatars whose action log information is stored in the action log information database 40.
- For example, the avatar information transmission unit 14 may select the predetermined number of first avatars, giving priority to first avatars whose action log information was generated more recently (that is, whose content viewing is more recent).
- Alternatively, the avatar information transmission unit 14 may select the predetermined number of first avatars, giving priority to first avatars whose action log information contains a larger number of specific action information records (that is, who performed specific actions more times while viewing the content); both policies are sketched below.
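- The following TypeScript sketch illustrates both selection policies; the log summary shape and maxAvatars = 3 mirror the description above but are otherwise assumptions.

```typescript
// Minimal sketch of the two selection policies for avatars to reproduce.
// maxAvatars = 3 mirrors the "predetermined number (for example, three)".

interface ActionLogSummary {
  avatarId: string;
  recordedAt: number;          // when the log was generated (viewing time)
  specificActionCount: number; // number of specific action records
}

function selectByRecency(logs: ActionLogSummary[], maxAvatars = 3): ActionLogSummary[] {
  return [...logs].sort((a, b) => b.recordedAt - a.recordedAt).slice(0, maxAvatars);
}

function selectByActivity(logs: ActionLogSummary[], maxAvatars = 3): ActionLogSummary[] {
  return [...logs]
    .sort((a, b) => b.specificActionCount - a.specificActionCount)
    .slice(0, maxAvatars);
}

const logs: ActionLogSummary[] = [
  { avatarId: "a", recordedAt: 100, specificActionCount: 5 },
  { avatarId: "b", recordedAt: 300, specificActionCount: 1 },
  { avatarId: "c", recordedAt: 200, specificActionCount: 9 },
  { avatarId: "d", recordedAt: 400, specificActionCount: 0 },
];
console.log(selectByRecency(logs).map(l => l.avatarId));  // ["d", "b", "c"]
console.log(selectByActivity(logs).map(l => l.avatarId)); // ["c", "a", "b"]
```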
- In step S204, the avatar information transmission unit 14 transmits the action log information of the predetermined number of first avatars selected in step S203 to the user terminal 20B.
- In step S205, the server 10 (for example, the avatar information transmission unit 14) acquires action information of the third avatar corresponding to the third user from the user terminal 20C.
- In step S206, the server 10 (for example, the avatar information transmission unit 14) transmits the action information of the third avatar to the user terminal 20B.
- In step S207, the display control unit 23 of the user terminal 20B reproduces the content.
- the processing of step S207 is the same as the processing of step S103.
- In step S208, the display control unit 23 reproduces the actions of each first avatar in the virtual space based on the action information included in the action log information of each first avatar received in step S204. That is, the display control unit 23 arranges each first avatar in the virtual space and moves each first avatar based on the action information included in the corresponding action log information.
- The position at which each first avatar is arranged may be the position at which that avatar was actually arranged in the past. In that case, however, interference between avatars (a state in which avatars are placed overlapping in the same place) may occur. Therefore, the display control unit 23 may place each first avatar at a position different from its past position so that the avatars do not interfere with each other (for example, so that avatars are spaced apart from each other by at least a predetermined distance); one such placement is sketched below.
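- The following TypeScript sketch shows one simple, assumed placement strategy (offsetting avatars sideways by at least the minimum distance); the disclosure does not fix a method.

```typescript
// Minimal sketch of repositioning replayed avatars so they do not interfere
// (overlap). Spacing avatars along a line with a fixed gap is one simple way
// to guarantee a minimum pairwise distance.

type Vec3 = [number, number, number];

function spreadAvatars(
  pastPositions: Map<string, Vec3>, // positions recorded in the action logs
  minDistance: number,
): Map<string, Vec3> {
  const placed = new Map<string, Vec3>();
  let slot = 0;
  for (const [avatarId, [x, y, z]] of pastPositions) {
    // Offset each avatar sideways by one "slot" so neighbours end up at least
    // minDistance apart, while staying near the recorded row of positions.
    placed.set(avatarId, [x + slot * minDistance, y, z]);
    slot += 1;
  }
  return placed;
}

const past = new Map<string, Vec3>([
  ["a", [0, 0, -2]],
  ["b", [0, 0, -2]], // same past position as "a": would overlap if used as-is
]);
console.log(spreadAvatars(past, 1.5)); // "b" shifted 1.5 units to the side
```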
- In step S209, the display control unit 23 refers to the specific action information included in the action log information of each first avatar received in step S204, and, at the timing at which each first avatar being reproduced performs a specific action, arranges a display object indicating the content of that specific action in association with the first avatar.
- For example, suppose that when a certain first user viewed the content in the past, the first user operated the first avatar so as to perform a specific action at the content playback position "30:00" (30 minutes after the start of playback). In this case, when the second user views the content and playback reaches the same position ("30:00" in this example), the display control unit 23 arranges a display object indicating the content of the first avatar's specific action in association with the first avatar.
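- The following TypeScript sketch illustrates this timing check, assuming a per-frame playback window and a hypothetical showDisplayObject rendering call.

```typescript
// Minimal sketch of triggering display objects during replay: each frame, the
// display control unit checks which specific actions fall inside the frame's
// playback window and attaches a display object to the corresponding avatar.

interface SpecificAction {
  playbackTimeMs: number;
  detail: string;
}

// Stand-in for the actual rendering call that places the display object.
function showDisplayObject(avatarId: string, detail: string): void {
  console.log(`avatar ${avatarId}: display object "${detail}"`);
}

/** Call once per frame with the previous and current playback positions. */
function triggerSpecificActions(
  avatarId: string,
  actions: SpecificAction[],
  prevTimeMs: number,
  currTimeMs: number,
): void {
  for (const a of actions) {
    if (a.playbackTimeMs > prevTimeMs && a.playbackTimeMs <= currTimeMs) {
      showDisplayObject(avatarId, a.detail); // e.g. text 541 + illustration 542
    }
  }
}

// Example: the specific action recorded at 30:00 fires as playback crosses it.
triggerSpecificActions(
  "A1",
  [{ playbackTimeMs: 30 * 60 * 1000, detail: "mammoth" }],
  1_799_990,
  1_800_005,
);
```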
- In step S210, if there is a third user viewing the content at the same time, the display control unit 23 arranges the third avatar in the virtual space and moves the third avatar based on the action information of the third avatar.
- the motion information of the third avatar is periodically transmitted from the user terminal 20C to the user terminal 20B via the server 10 at predetermined time intervals.
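- The following TypeScript sketch illustrates such a periodic transmission on the terminal 20C side; the 100 ms interval, sendToServer, and the message shape are assumptions for illustration.

```typescript
// Minimal sketch of the periodic relay of the third avatar's motion from the
// user terminal 20C to 20B via the server.

interface MotionUpdate {
  avatarId: string;
  partPositions: Record<string, [number, number, number]>;
  sentAt: number;
}

// Stand-in for the network call; the server forwards this to terminal 20B.
function sendToServer(update: MotionUpdate): void {
  console.log("relay", update.avatarId, update.sentAt);
}

// Stub for the third user's live pose on terminal 20C.
function currentPose(): Record<string, [number, number, number]> {
  return { head: [0, 1.6, 0] };
}

// On terminal 20C: push the third avatar's pose at a fixed interval.
const timer = setInterval(() => {
  sendToServer({ avatarId: "A3", partPositions: currentPose(), sentAt: Date.now() });
}, 100);

// Stop after one second in this self-contained example.
setTimeout(() => clearInterval(timer), 1000);
```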
- In step S211, the display control unit 23 generates a content image (content video) to present to the second user based on the viewpoint of the second avatar corresponding to the second user placed in the virtual space (that is, the virtual viewpoint of the second user set in the virtual space). Note that in the present embodiment the user terminal 20B (display control unit 23) executes the processing (rendering) for generating the content image for the second user based on the content data and the second user's virtual viewpoint, but the server 10 may perform the rendering instead.
- The display control unit 23 may switch the first avatars between display and non-display based on an instruction operation from the second user while the second user is viewing the content. For example, when receiving an instruction to hide the first avatars through a user operation via the controller of the user terminal 20B, the display control unit 23 may switch so that the first avatars being reproduced are not displayed in the content image. Further, when receiving an instruction to display first avatars that were once hidden, the display control unit 23 may switch to displaying the first avatars in the content image again. When a plurality of first avatars are selected as reproduction targets, the display/non-display switching may be possible for each first avatar individually, or it may be possible to operate on all first avatars collectively.
- FIG. 10 is a diagram showing an example of the display object 54 displayed in the content image IM2 provided to the second user.
- In FIG. 10, a display object 54 indicating the content of a specific action (selecting the item object 53 showing a model of Pikaia and generating it in the virtual space) is arranged in association with the first avatar A1.
- the display object 54 is placed near the first avatar A1 (in this example, the space in front of the head of the first avatar A1).
- the display object 54 can include text information 541 indicating the selected item object 53 and an illustration image 542 indicating the selected item object 53 . Note that an actual image such as a photograph may be used instead of the illustration image 542 .
- From the reproduced motion of the first avatar A1 alone, the second user cannot ascertain what specific action the first avatar A1 performed during content viewing. That is, the second user cannot know which item object 53 was selected by operating the tool object 52 (see FIG. 5) via the first avatar A1.
- By displaying the display object 54 indicating the details of the specific action in association with the first avatar A1, the second user can visually grasp, at an appropriate timing, the content of the specific action performed by the first user (first avatar A1).
- FIG. 11 is a diagram showing a display example of the first avatar A1 and the third avatar A3 displayed in the content image IM2.
- The display control unit 23 may display the third avatar A3, which corresponds to the third user viewing the content at the same time as the second user, in a display mode different from that of the first avatar A1, which corresponds to the first user who viewed the content in the past (that is, the avatar being reproduced based on the action log information).
- In the example of FIG. 11, the display control unit 23 arranges an inverted triangular mark object M in the space above the third avatar A3.
- This allows the second user to accurately grasp whether each avatar placed in the virtual space is a first avatar A1 being reproduced or a third avatar A3 corresponding to a third user viewing the content in real time.
- The display mode for distinguishing the third avatar A3 from the first avatar A1 is not limited to the example shown in FIG. 11.
- The display control unit 23 may display the third avatar A3 in a different color (for example, a brighter color) than the first avatar A1, or may display the first avatar A1 translucently.
- As described above, the content providing system according to one aspect of the present disclosure includes at least one processor. The at least one processor places, in a virtual space, a first avatar corresponding to a first user who views predetermined content representing the virtual space; generates action log information recording action information, which indicates the actions of the first avatar operated by the first user while the first user views the content, and specific action information, which indicates the timing at which the first avatar performed a predetermined specific action and the content of that specific action; when a second user views the content after the first user, reproduces the content and reproduces the actions of the first avatar based on the action information included in the action log information; and, by referring to the specific action information included in the action log information, places a display object indicating the content of the specific action in association with the first avatar at the timing at which the first avatar being reproduced performs the specific action.
- A content providing method according to one aspect of the present disclosure includes the steps of: placing, in a virtual space, a first avatar corresponding to a first user who views predetermined content representing the virtual space; generating action log information recording action information, which indicates the actions of the first avatar operated by the first user while the first user views the content, and specific action information, which indicates the timing at which the first avatar performed a predetermined specific action and the content of that specific action; reproducing the content when a second user views the content after the first user, and reproducing the actions of the first avatar based on the action information included in the action log information; and, by referring to the specific action information included in the action log information, placing a display object indicating the content of the specific action in association with the first avatar at the timing at which the first avatar being reproduced performs the specific action.
- A content providing program according to one aspect of the present disclosure causes a computer to execute the steps of: placing, in a virtual space, a first avatar corresponding to a first user who views predetermined content representing the virtual space; generating action log information recording action information, which indicates the actions of the first avatar operated by the first user while the first user views the content, and specific action information, which indicates the timing at which the first avatar performed a predetermined specific action and the content of that specific action; reproducing the content when a second user views the content after the first user, and reproducing the actions of the first avatar based on the action information included in the action log information; and, by referring to the specific action information included in the action log information, placing a display object indicating the content of the specific action in association with the first avatar at the timing at which the first avatar being reproduced performs the specific action.
- According to the above configuration, the actions of the first avatar A1 operated by the first user while viewing the content are reproduced for a second user who views the content after that first user.
- This allows the second user to feel a sense of solidarity with the other user (first user) while viewing the content.
- In addition, a display object 54 indicating the content of the specific action performed by the first avatar A1 while viewing the content is placed in association with the first avatar A1 at the timing when that specific action was performed.
- As a result, the content of the specific action performed by the first avatar A1 can be visually presented to the second user at an appropriate timing, as reference information about the actions to take while viewing the content.
- This makes it possible to effectively improve the convenience of the user (the second user) who views the content.
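As one way to picture the playback side, the following sketch advances through a recorded log and spawns a display object exactly once when each specific action's timing is reached. The helper callables (interpolate_pose, spawn_display_object) are hypothetical stand-ins for whatever the rendering layer provides.

```python
def replay_avatar(log, now, spawned, interpolate_pose, spawn_display_object):
    """Reproduce the first avatar's recorded actions up to playback time `now`.

    `log` is an ActionLog as sketched earlier; `spawned` is a set of indices of
    specific actions whose display objects have already been placed.
    """
    # Reproduce the avatar's motion from the sampled action information.
    pose = interpolate_pose(log.samples, now)

    # Place a display object, associated with the avatar, at the timing
    # of each specific action recorded in the log.
    for i, action in enumerate(log.specific_actions):
        if action.t <= now and i not in spawned:
            spawn_display_object(avatar_id=log.user_id, text=action.detail)
            spawned.add(i)
    return pose
```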
- The content may be educational content showing a lesson scene, and the specific action may be an operation for selecting a predetermined item object from among one or more item objects registered in advance as items used for learning in the lesson and generating the selected item object in the virtual space.
- According to the above configuration, the second user can generate an item object 53 in the virtual space while referring to the behavior of another user (the first user), that is, the specific actions of the first avatar A1, in accordance with the progress of the lesson in the virtual space. As a result, the efficiency of learning that uses the virtual space can be effectively improved.
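To illustrate this specific action, the sketch below treats it as choosing from a registry of item objects registered in advance for the lesson and recording the choice in the action log. The registry contents and names are illustrative assumptions, reusing the SpecificAction dataclass from the earlier sketch.

```python
# Item objects registered in advance as items used for learning in the lesson
# (ids and descriptions are purely illustrative).
ITEM_REGISTRY = {
    "compass": "a drawing compass for the geometry lesson",
    "protractor": "a protractor for measuring angles",
}

def perform_item_action(log, now, item_id):
    """Record the 'generate item object' specific action in the action log."""
    if item_id not in ITEM_REGISTRY:
        raise ValueError(f"item {item_id!r} is not registered for this lesson")
    log.specific_actions.append(
        SpecificAction(t=now, kind="generate_item", detail=item_id)
    )
    # The virtual-space side would instantiate the chosen item object 53 here.
```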
- The display object may include an illustration image showing the item object. According to the above configuration, the content of the item object 53 taken out by the first avatar A1 can be grasped easily and visually from the illustration image 542.
- When the second user is viewing the content and there is a third user viewing the content at the same time as the second user, the at least one processor may display a third avatar corresponding to the third user in a display mode different from that of the first avatar.
- the second user can easily communicate with the third user who is viewing the content in real time.
- The at least one processor may switch between displaying and hiding the first avatar based on an instruction operation from the second user while the second user is viewing the content. If the number of first avatars A1 displayed in the virtual space becomes too large, the information displayed in the content image IM2 provided to the second user may become cluttered. According to the above configuration, in such a case the second user can freely switch the first avatars to a hidden state.
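A minimal sketch of such a toggle, assuming a simple per-viewer visibility flag (names are illustrative):

```python
class ReplayView:
    """Tracks whether reproduced first avatars are shown for one second user."""

    def __init__(self):
        self.show_first_avatars = True

    def on_toggle_instruction(self):
        # An instruction operation from the second user flips visibility.
        self.show_first_avatars = not self.show_first_avatars

    def visible_avatars(self, reproduced_avatars):
        return list(reproduced_avatars) if self.show_first_avatars else []
```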
- When the second user views the content and action log information has been generated for a plurality of first avatars corresponding to a plurality of first users, the at least one processor may select a predetermined number of first avatars as reproduction targets.
- According to the above configuration, by keeping the number of first avatars A1 displayed in the virtual space at or below a certain number (the predetermined number), the information displayed in the content image IM2 provided to the second user can be kept from becoming cluttered.
- The at least one processor may select the predetermined number of first avatars, giving priority to first avatars whose action log information is newer.
- According to the above configuration, the quality of the reference information (the specific actions of the first avatar A1) provided to the second user can be improved.
- The at least one processor may select the predetermined number of first avatars, giving priority to first avatars having a larger number of records of specific action information in their action log information.
- A first user corresponding to a first avatar with many recorded specific actions is likely to be a user who studied actively while viewing the content.
- By preferentially selecting the first avatars A1 of such users as reproduction targets, the amount of reference information (the specific actions of the first avatar A1) provided to the second user can be increased.
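The two selection policies described above can be pictured as two sort keys over the stored logs; a minimal sketch, assuming the ActionLog structure from the earlier sketch and using `limit` for the predetermined number:

```python
def select_by_recency(logs, limit):
    """Prefer first avatars whose action log information was generated more recently."""
    return sorted(logs, key=lambda log: log.created_at, reverse=True)[:limit]

def select_by_specific_actions(logs, limit):
    """Prefer first avatars with a larger number of recorded specific actions."""
    return sorted(logs, key=lambda log: len(log.specific_actions), reverse=True)[:limit]
```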
- Some of the functions of the server 10 described above may be executed by the user terminal 20.
- Conversely, some of the functions of the user terminal 20 described above may be executed by the server 10.
- In the above embodiment, the display control unit 23 of the user terminal 20 mainly performs content playback, avatar reproduction, and content image generation, but these processes may instead be executed by the server 10.
- In that case, the user terminal 20 receives from the server 10 the content image generated by the server 10, and only needs to perform the display processing for showing the received content image on its display.
- The expression "at least one processor executes a first process, a second process, ..., and an n-th process" is a concept that includes the case where the executing subject (that is, the processor) of the n processes changes partway through. That is, this expression covers both the case where all n processes are executed by the same processor and the case where the processor changes among the n processes according to an arbitrary policy.
- the processing procedure of the method executed by at least one processor is not limited to the examples in the above embodiments. For example, some of the steps (processes) described above may be omitted, or the steps may be performed in a different order. Also, any two or more of the steps described above may be combined, and some of the steps may be modified or deleted. Alternatively, other steps may be performed in addition to the above steps.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Tourism & Hospitality (AREA)
- General Business, Economics & Management (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Computing Systems (AREA)
- Human Resources & Organizations (AREA)
- Primary Health Care (AREA)
- Economics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computer Security & Cryptography (AREA)
- Marketing (AREA)
- Strategic Management (AREA)
- Information Transfer Between Computers (AREA)
- Processing Or Creating Images (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
The content providing system according to the embodiment is a computer system that distributes content to users. Content is information that is provided by a computer or computer system and is recognizable by humans. Electronic data representing content is referred to as content data. The form of expression of content is not limited; content may be expressed by, for example, images (e.g., photographs, video), documents, audio, music, or a combination of any two or more of these elements. Content can be used for various forms of information transmission or communication, and may be used in various scenes or for various purposes, such as entertainment, news, education, medical care, games, chat, commerce, lectures, seminars, and training. Distribution is a process of transmitting information to users via a communication network or a broadcast network.
FIG. 1 is a diagram showing an example of the application of a content providing system 1 according to the embodiment. In this embodiment, the content providing system 1 includes a server 10, user terminals 20 (user terminals 20A, 20B, and 20C), a content database 30, and an action log information database 40.
FIG. 2 is a diagram showing an example of a hardware configuration related to the content providing system 1. FIG. 2 shows a server computer 100 functioning as the server 10 and a terminal computer 200 functioning as the user terminal 20.
The content providing system 1 is mainly configured to be able to execute an avatar recording process and an avatar playback process.
FIG. 3 is a diagram showing an example of the functional configuration related to the avatar recording process of the content providing system 1. Here, a case where the actions of a first avatar corresponding to a first user using the user terminal 20A are recorded is described as an example.
With reference to FIG. 4, the operation of the content providing system 1 related to the avatar recording process is described, along with part of the content providing method according to this embodiment. FIG. 4 is a sequence diagram showing an example of operations related to the avatar recording process of the content providing system 1.
FIG. 8 is a diagram showing an example of the functional configuration related to the avatar playback process of the content providing system 1. Here, a case where a second user using the user terminal 20B views the content after the action log information of one or more first avatars has been saved by the avatar recording process described above (that is, after one or more first users have viewed the content) is described as an example.
With reference to FIG. 9, the operation of the content providing system 1 related to the avatar playback process is described, along with part of the content providing method according to this embodiment. FIG. 9 is a sequence diagram showing an example of operations related to the avatar playback process of the content providing system 1.
As described above, a content providing system according to one aspect of the present disclosure includes at least one processor. The at least one processor places, in a virtual space, a first avatar corresponding to a first user who views predetermined content representing the virtual space; generates action log information recording action information indicating actions of the first avatar operated by the first user while the first user is viewing the content, and specific action information indicating the timing at which the first avatar performed a predetermined specific action and the content of the specific action; when a second user views the content after the first user has viewed the content, plays back the content and reproduces the actions of the first avatar based on the action information included in the action log information; and, by referring to the specific action information included in the action log information, places a display object indicating the content of the specific action in association with the first avatar at the timing when the first avatar being reproduced performs the specific action.
The present disclosure has been described above in detail based on the embodiment. However, the present disclosure is not limited to the above embodiment, and various modifications are possible without departing from the gist of the present disclosure.
Claims (10)
- A content providing system comprising at least one processor,
wherein the at least one processor:
places, in a virtual space, a first avatar corresponding to a first user who views predetermined content representing the virtual space;
generates action log information recording action information indicating actions of the first avatar operated by the first user while the first user is viewing the content, and specific action information indicating the timing at which the first avatar performed a predetermined specific action and the content of the specific action;
when a second user views the content after the first user has viewed the content, plays back the content and reproduces the actions of the first avatar based on the action information included in the action log information; and
by referring to the specific action information included in the action log information, places a display object indicating the content of the specific action in association with the first avatar at the timing when the first avatar being reproduced performs the specific action.
- The content providing system according to claim 1, wherein the content is educational content showing a lesson scene, and
the specific action is an action for selecting a predetermined item object from among one or more item objects registered in advance as items used for learning in the lesson and generating the selected item object in the virtual space.
- The content providing system according to claim 2, wherein the display object includes an image showing the item object.
- The content providing system according to any one of claims 1 to 3, wherein, when the second user views the content and there is a third user viewing the content at the same time as the second user, the at least one processor displays a third avatar corresponding to the third user in a display mode different from that of the first avatar.
- The content providing system according to any one of claims 1 to 4, wherein the at least one processor switches between displaying and hiding the first avatar based on an instruction operation from the second user while the second user is viewing the content.
- The content providing system according to any one of claims 1 to 5, wherein, when the second user views the content and the action log information has been generated for a plurality of the first avatars corresponding to a plurality of the first users, the at least one processor selects a predetermined number of the first avatars as reproduction targets.
- The content providing system according to claim 6, wherein the at least one processor selects the predetermined number of the first avatars, giving priority to the first avatars whose action log information was generated more recently.
- The content providing system according to claim 6, wherein the at least one processor selects the predetermined number of the first avatars, giving priority to the first avatars having a larger number of records of the specific action information included in the action log information.
- A content providing method comprising the steps of:
placing, in a virtual space, a first avatar corresponding to a first user who views predetermined content representing the virtual space;
generating action log information recording action information indicating actions of the first avatar operated by the first user while the first user is viewing the content, and specific action information indicating the timing at which the first avatar performed a predetermined specific action and the content of the specific action;
when a second user views the content after the first user has viewed the content, playing back the content and reproducing the actions of the first avatar based on the action information included in the action log information; and
by referring to the specific action information included in the action log information, placing a display object indicating the content of the specific action in association with the first avatar at the timing when the first avatar being reproduced performs the specific action.
- A content providing program causing a computer to execute the steps of:
placing, in a virtual space, a first avatar corresponding to a first user who views predetermined content representing the virtual space;
generating action log information recording action information indicating actions of the first avatar operated by the first user while the first user is viewing the content, and specific action information indicating the timing at which the first avatar performed a predetermined specific action and the content of the specific action;
when a second user views the content after the first user has viewed the content, playing back the content and reproducing the actions of the first avatar based on the action information included in the action log information; and
by referring to the specific action information included in the action log information, placing a display object indicating the content of the specific action in association with the first avatar at the timing when the first avatar being reproduced performs the specific action.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/043,553 US20240096227A1 (en) | 2021-05-31 | 2022-05-27 | Content provision system, content provision method, and content provision program |
CN202280005589.7A CN116635899A (zh) | 2021-05-31 | 2022-05-27 | 内容提供系统、内容提供方法以及内容提供程序 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-091499 | 2021-05-31 | ||
JP2021091499A JP7047168B1 (ja) Content providing system, content providing method, and content providing program
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022255262A1 true WO2022255262A1 (ja) | 2022-12-08 |
Family
ID=81256652
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/021780 WO2022255262A1 (ja) Content providing system, content providing method, and content providing program
Country Status (4)
Country | Link |
---|---|
US (1) | US20240096227A1 (ja) |
JP (2) | JP7047168B1 (ja) |
CN (1) | CN116635899A (ja) |
WO (1) | WO2022255262A1 (ja) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6793807B1 (ja) * | 2019-12-26 | 2020-12-02 | 株式会社ドワンゴ | Program, information processing device, and method |
JP7548644B1 (ja) | 2024-02-07 | 2024-09-10 | 株式会社Hinichijo | Distance education system, distance education providing method, and distance education providing program |
- 2021
  - 2021-05-31 JP JP2021091499A patent/JP7047168B1/ja active Active
- 2022
  - 2022-03-23 JP JP2022046671A patent/JP2022184724A/ja active Pending
  - 2022-05-27 US US18/043,553 patent/US20240096227A1/en active Pending
  - 2022-05-27 CN CN202280005589.7A patent/CN116635899A/zh active Pending
  - 2022-05-27 WO PCT/JP2022/021780 patent/WO2022255262A1/ja active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012226669A * | 2011-04-21 | 2012-11-15 | Nippon Hoso Kyokai <Nhk> | Participatory content management device, viewing terminal, participatory content management program, and viewing program |
JP2018195177A * | 2017-05-19 | 2018-12-06 | 株式会社コロプラ | Information processing method, device, and program for causing a computer to execute the information processing method |
JP2020017242A * | 2018-07-25 | 2020-01-30 | 株式会社バーチャルキャスト | Three-dimensional content distribution system, three-dimensional content distribution method, and computer program |
JP2021005052A * | 2019-06-27 | 2021-01-14 | 株式会社ドワンゴ | Lesson content distribution method, lesson content distribution system, terminal, and program |
Also Published As
Publication number | Publication date |
---|---|
JP2022183944A (ja) | 2022-12-13 |
US20240096227A1 (en) | 2024-03-21 |
JP2022184724A (ja) | 2022-12-13 |
CN116635899A (zh) | 2023-08-22 |
JP7047168B1 (ja) | 2022-04-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- JP3212833U (ja) | Interactive education support system | |
- WO2022255262A1 (ja) | Content providing system, content providing method, and content providing program | |
- JP6683864B1 (ja) | Content control system, content control method, and content control program | |
- WO2021106803A1 (ja) | Lesson system, viewing terminal, information processing method, and program | |
- US20220360827A1 (en) | Content distribution system, content distribution method, and content distribution program | |
- JP2021105707A (ja) | Program, information processing device, and method | |
- JP7465736B2 (ja) | Content control system, content control method, and content control program | |
- JP7465737B2 (ja) | Lesson system, viewing terminal, information processing method, and program | |
- JP2023164439A (ja) | Lesson content distribution method, lesson content distribution system, terminal, and program | |
- JP6892478B2 (ja) | Content control system, content control method, and content control program | |
- An et al. | Trends and effects of learning through AR-based education in S-Korea | |
- JP2021009351A (ja) | Content control system, content control method, and content control program | |
- JP6864041B2 (ja) | Information storage method and information storage system | |
- JP7011746B1 (ja) | Content distribution system, content distribution method, and content distribution program | |
- JP6733027B1 (ja) | Content control system, content control method, and content control program | |
- JP7548645B1 (ja) | Distance education system, distance education providing method, and distance education providing program | |
- JP7548644B1 (ja) | Distance education system, distance education providing method, and distance education providing program | |
- Aurelia et al. | A survey on mobile augmented reality based interactive, collaborative and location based storytelling | |
- JP2021009348A (ja) | Content control system, content control method, and content control program | |
- US20240104870A1 (en) | AR Interactions and Experiences | |
- Lazoryshynets et al. | The project formation of virtual graphic images in applications for distance education systems | |
- JP2020167654A (ja) | Content distribution system, content distribution method, and content distribution program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22816012 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 202280005589.7 Country of ref document: CN |
WWE | Wipo information: entry into national phase |
Ref document number: 18043553 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 22816012 Country of ref document: EP Kind code of ref document: A1 |