WO2017029918A1 - System, method and program for displaying moving image with specific field of view - Google Patents


Info

Publication number
WO2017029918A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
view
field
comment
moving image
Application number
PCT/JP2016/071040
Other languages
French (fr)
Japanese (ja)
Inventor
豪放 小倉
Original Assignee
株式会社ディー・エヌ・エー (DeNA Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 株式会社ディー・エヌ・エー (DeNA Co., Ltd.)
Publication of WO2017029918A1 publication Critical patent/WO2017029918A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering

Definitions

  • the present invention relates to a system, method, and program for displaying a moving image with a specific field of view.
  • In a video whose field of view can differ among users, such as a 360-degree video, the target being viewed may vary depending on each user's field of view, even when the users are watching the same video. Therefore, for example, it is difficult for another user who is viewing the video with a field of view different from that of the user who provided a comment to sympathize with the content of the comment, because the two users are looking at different targets. And if users cannot sympathize with the content of a comment, the activation of communication is also limited. It is therefore desirable to appropriately display information such as comments in a moving image whose field of view may differ among users.
  • An embodiment of the present invention has as one of its objects to appropriately display information such as comments input in moving images whose field of view may differ among users. Other objects of the embodiments of the present invention will become apparent by referring to the specification as a whole.
  • A system according to an embodiment of the present invention is a system that displays a moving image in a specific field of view. The system includes one or more computer processors which, in response to executing readable instructions, execute: a step of displaying, on each terminal device of a plurality of users including a first user and a second user, a specific moving image that is configured as a moving image having a wide-angle field of view and whose entire field of view is associated with a virtual space, in the respective field of view of each of the plurality of users; a step of, when first input information is received from the first user, identifying a first position in the virtual space included in the field of view of the first user and arranging the first input information at the first position; and a step of displaying, in accordance with the arrangement of the first input information, the first input information on the terminal device of the second user whose field of view includes the first position.
  • In the system according to the embodiment described above, the specific moving image may be configured as a moving image having a field of view of 360 degrees in at least the horizontal direction, and the virtual space may be configured as the inner surface of a virtual sphere. Such a moving image may be referred to as a "360-degree moving image", and its field of view in the vertical direction is, for example, in the range of 180 to 360 degrees.
  • A method according to an embodiment of the present invention is a method, executed by one or more computers, for displaying a moving image in a specific field of view. The method comprises: a step of displaying, on each terminal device of a plurality of users including a first user and a second user, a specific moving image that is configured as a moving image having a wide-angle field of view and whose entire field of view is associated with a virtual space, in the respective field of view of each of the plurality of users; a step of, when first input information is received from the first user, identifying a first position in the virtual space included in the field of view of the first user and arranging the first input information at the first position; and a step of displaying, in accordance with the arrangement of the first input information, the first input information on the terminal device of the second user whose field of view includes the first position.
  • A program according to an embodiment of the present invention is a program for displaying a moving image in a specific field of view. In response to being executed on one or more computers, the program causes the one or more computers to execute: a step of displaying, on each terminal device of a plurality of users including a first user and a second user, a specific moving image that is configured as a moving image having a wide-angle field of view and whose entire field of view is associated with a virtual space, in the respective field of view of each of the plurality of users; a step of, when first input information is received from the first user, identifying a first position in the virtual space included in the field of view of the first user and arranging the first input information at the first position; and a step of displaying, in accordance with the arrangement of the first input information, the first input information on the terminal device of the second user whose field of view includes the first position.
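  • As a concrete illustration of these steps, the following TypeScript sketch outlines the flow under simplifying assumptions (a unit sphere as the virtual space, unit vectors for gaze directions); all names such as Viewer, inFieldOfView, and onInputReceived are hypothetical and are not part of the disclosure.

```typescript
// Minimal sketch of the claimed flow, assuming a unit sphere as the virtual
// space and unit vectors for gaze directions; all names are illustrative.
type Vec3 = { x: number; y: number; z: number };

interface Viewer {
  userId: string;
  gaze: Vec3;            // direction of the line of sight (unit vector)
  viewingAngle: number;  // angular radius of the field of view, in radians
}

interface PlacedInput {
  position: Vec3;  // point on the inner surface of the virtual sphere
  body: string;    // the input information (e.g. a comment)
}

const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;
const clamp = (v: number, lo: number, hi: number): number => Math.min(hi, Math.max(lo, v));

// A position is inside a viewer's field of view when the angle between the
// gaze direction and the position's direction is within the viewing angle.
function inFieldOfView(viewer: Viewer, position: Vec3): boolean {
  return Math.acos(clamp(dot(viewer.gaze, position), -1, 1)) <= viewer.viewingAngle;
}

// Place the first user's input at a position included in that user's field
// of view (here, simply the gaze point itself), then display it on the
// terminal of every user whose field of view includes that position.
function onInputReceived(sender: Viewer, body: string, viewers: Viewer[]): PlacedInput {
  const placed: PlacedInput = { position: sender.gaze, body };
  for (const v of viewers) {
    if (v.userId !== sender.userId && inFieldOfView(v, placed.position)) {
      console.log(`display "${body}" on the terminal of ${v.userId}`);
    }
  }
  return placed;
}
```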
  • Various embodiments of the present invention can appropriately display information such as comments input in moving images that can have different fields of view among users.
  • FIG. 1 is a configuration diagram schematically showing the configuration of a network including a system 1 according to an embodiment of the present invention.
  • FIG. 2 is a block diagram schematically showing the functions of the system 1 (the server 10 and the terminal device 30) in one embodiment.
  • FIG. 3 shows an example of the information managed in the user management table 51a, and FIG. 4 shows an example of the information managed in the comment management table 51b, in one embodiment.
  • FIG. 5 is a diagram for explaining the virtual space and a user's field of view, and FIG. 6 is a diagram for explaining a user's field of view, in one embodiment.
  • FIG. 7 shows an example of the first moving image playback screen 60, and FIG. 8 shows an example of the second moving image playback screen 70, in one embodiment.
  • FIG. 9 is a flowchart showing an example of the comment placement process in one embodiment, and FIG. 10 is a diagram for explaining a user's gaze point FP and gaze area FR.
  • FIG. 11 shows an example of the first moving image playback screen 60 and the comment object 114; FIGS. 12 and 15 show examples of the first moving image playback screen 60; FIG. 13 shows an example of the second moving image playback screen 70; FIG. 14 is a diagram for explaining the sound output according to the placement of a comment; FIG. 16 shows an example of the display of the comment object 114; and FIG. 17 shows an example of the first moving image playback screen 160 in another embodiment.
  • FIG. 1 is a configuration diagram schematically showing the configuration of a network including a system 1 according to an embodiment of the present invention.
  • the system 1 in one embodiment includes a server 10 and a plurality of terminal devices 30 that are communicably connected to the server 10 via a communication network 40 such as the Internet, as illustrated.
  • the server 10 provides a moving image distribution service that distributes various moving images to the terminal device 30.
  • the moving image distributed in the moving image distribution service in the embodiment includes a real-time moving image (live moving image) provided by the moving image providing apparatus 20.
  • The server 10 in one embodiment is configured as a general computer and, as illustrated, includes a CPU (computer processor) 11, a main memory 12, a user I/F 13, a communication I/F 14, and a storage (storage device) 15, and these components are electrically connected to one another via a bus.
  • the CPU 11 loads an operating system and various other programs from the storage 15 into the main memory 12 and executes instructions included in the loaded programs.
  • the main memory 12 is used for storing a program executed by the CPU 11, and is configured by a DRAM or the like, for example.
  • the server 10 in one embodiment may be configured using a plurality of computers each having a hardware configuration as described above.
  • the user I / F 13 includes, for example, an information input device such as a keyboard and a mouse that accepts an operator's input, and an information output device such as a liquid crystal display that outputs a calculation result of the CPU 11.
  • The communication I/F 14 is implemented as hardware, firmware, communication software such as a TCP/IP driver or a PPP driver, or a combination thereof, and is configured to be able to communicate with the moving image providing device 20 and the terminal device 30 via the communication network 40.
  • the storage 15 is composed of, for example, a magnetic disk drive, and stores various programs such as a control program for providing a moving image distribution service.
  • the storage 15 can also store various data for providing the moving image distribution service.
  • The various data that can be stored in the storage 15 may instead be stored in a database server or the like that is physically separate from the server 10 and communicably connected to it.
  • In one embodiment, the server 10 also functions as a web server that manages a website composed of a plurality of web pages in a hierarchical structure, and can provide the video distribution service to the users of the terminal devices 30 through such a website. The storage 15 can also store the HTML data corresponding to these web pages. The HTML data can be associated with various image data, and various programs written in a scripting language such as JavaScript (registered trademark) can be embedded in it.
  • the server 10 can provide a video distribution service via an application executed in an execution environment other than the web browser in the terminal device 30.
  • Such applications can also be stored in the storage 15.
  • This application is created using a programming language such as Objective-C or Java (registered trademark).
  • the application stored in the storage 15 is distributed to the terminal device 30 in response to the distribution request.
  • the terminal device 30 can also download such an application from a server other than the server 10 (such as a server providing an application market).
  • the server 10 can manage the website for providing the video distribution service and distribute the web page (HTML data) constituting the website in response to a request from the terminal device 30.
  • In place of, or in addition to, providing the moving image distribution service using such web pages (a web browser), the server 10 can provide the service based on communication with an application executed on the terminal device 30. Whichever form the service takes, the server 10 can exchange with the terminal device 30 the various data necessary for providing the video distribution service (including data necessary for screen display).
  • the server 10 can store various data for each identification information (for example, user ID) for identifying each user, and can manage the provision status of the video distribution service for each user.
  • the server 10 may have a function of performing user authentication processing, billing processing, and the like.
  • The moving image providing apparatus 20 is configured as a general computer and, as illustrated in FIG. 1, includes a CPU (computer processor) 21, a main memory 22, a user I/F 23, a communication I/F 24, a storage (storage device) 25, and an ultra-wide-angle camera 26, and these components are electrically connected to one another via a bus.
  • the CPU 21 loads an operating system and various other programs from the storage 25 to the main memory 22 and executes instructions included in the loaded programs.
  • the main memory 22 is used for storing a program executed by the CPU 21 and is configured by, for example, a DRAM or the like.
  • the user I / F 23 includes, for example, an information input device that receives an operator input and an information output device that outputs a calculation result of the CPU 21.
  • the communication I / F 24 is implemented as hardware, firmware, communication software such as a TCP / IP driver or a PPP driver, or a combination thereof, and is configured to be able to communicate with the server 10 and the terminal device 30 via the communication network 40.
  • the ultra-wide-angle camera 26 has a built-in microphone and is configured to capture an ultra-wide-angle image via an ultra-wide-angle lens or a plurality of lenses.
  • In one embodiment, the ultra-wide-angle camera 26 is configured as a 360-degree camera having a 360-degree field of view in the horizontal direction and a field-of-view range of 180 to 360 degrees in the vertical direction. The moving image providing apparatus 20 is configured to transmit to the server 10, in real time, a 360-degree moving image with a substantially omnidirectional field of view captured through the ultra-wide-angle camera 26.
  • The terminal device 30 is an arbitrary information processing device that can display web pages of the website provided by the server 10 on a web browser and that implements an execution environment for executing applications; examples include smartphones, tablet terminals, wearable devices (e.g., head-mounted displays), and personal computers.
  • In one embodiment, the terminal device 30 is configured as a general computer and, as shown in FIG. 1, includes a CPU (computer processor) 31, a main memory 32, a user I/F 33, a communication I/F 34, a storage (storage device) 35, and various sensors 36, and these components are electrically connected to one another via a bus.
  • the CPU 31 loads an operating system and various other programs from the storage 35 to the main memory 32 and executes instructions included in the loaded programs.
  • the main memory 32 is used for storing a program executed by the CPU 31, and is configured by, for example, a DRAM or the like.
  • the user I / F 33 includes, for example, an information input device such as a touch panel that accepts user input, a keyboard, a button, and a mouse, and an information output device such as a liquid crystal display that outputs a calculation result of the CPU 31.
  • The communication I/F 34 is implemented as hardware, firmware, communication software such as a TCP/IP driver or a PPP driver, or a combination thereof, and is configured to be able to communicate with the server 10 and the moving image providing apparatus 20 via the communication network 40.
  • the storage 35 is composed of, for example, a magnetic disk drive, a flash memory, or the like, and stores various programs such as an operating system.
  • the storage 35 can store various applications received from the server 10 or the like.
  • the various sensors 36 include, for example, an acceleration sensor, a gyro sensor (angular velocity sensor), a geomagnetic sensor, and the like. Based on information detected by these sensors, the terminal device 30 can specify the posture, inclination, direction, and the like of the terminal device 30 itself.
  • The terminal device 30 includes, for example, a web browser for interpreting HTML files (HTML data) and rendering screens; through the function of this web browser, it can interpret HTML data acquired from the server 10 and display the web page corresponding to that HTML data.
  • the web browser of the terminal device 30 can incorporate plug-in software that can execute various types of files associated with HTML data.
  • When the user of the terminal device 30 uses the video distribution service provided by the server 10, for example, animations and operation icons specified by the HTML data or the application are displayed on the screen of the terminal device 30.
  • the user can input various instructions using the touch panel of the terminal device 30 or the like.
  • The instructions input by the user are transmitted to the server 10 through the functions of the web browser of the terminal device 30 or of an application execution environment such as NgCore (trademark).
  • FIG. 2 is a block diagram schematically illustrating functions of the server 10 and the terminal device 30 according to an embodiment.
  • As shown in FIG. 2, the server 10 includes an information storage unit 51 that stores information, a moving image distribution control unit 52 that controls the distribution of moving images, and a virtual space management unit 53 that manages the virtual space associated with the entire field of view of a moving image. These functions are realized by the cooperative operation of hardware such as the CPU 11 and the main memory 12 with the various programs and tables stored in the storage 15; for example, they are realized by the CPU 11 executing instructions included in a loaded program. Part or all of the functions of the server 10 shown in FIG. 2 may be realized by the cooperation of the server 10 and the terminal device 30, or by the terminal device 30 alone.
  • The information storage unit 51 is realized by the storage 15 or the like and, as illustrated in FIG. 2, includes a user management table 51a for managing information related to the users of the video distribution service and a comment management table 51b for managing information related to the comments (input information) input by users.
  • FIG. 3 shows an example of information managed in the user management table 51a in the embodiment.
  • The user management table 51a manages information such as the user's "nickname" and "avatar information" (information related to the user's avatar) in association with a "user ID" that identifies each individual user. These pieces of information can be provided by the user at a timing such as new user registration for the video distribution service, and can be updated as appropriate thereafter.
  • FIG. 4 shows an example of information managed in the comment management table 51b in one embodiment.
  • The comment management table 51b manages, in association with a combination of a "moving image ID" that identifies an individual moving image and a "comment ID" that identifies an individual comment, information such as the "user ID" identifying the user who input the comment, its arrangement time, its arrangement position, its deletion time, and its number of Likes. In other words, the comment management table 51b manages information about comments for each distributed moving image. Since the 360-degree moving image in one embodiment is associated with a virtual space whose entirety corresponds to the inner surface of a virtual sphere, a value specifying a position (coordinate) on this virtual space is set as the arrangement position of each comment.
  • A user may input "Like", an expression of support (favorable emotion), for a comment input by another user who is viewing (playing back) the same video. The number of Likes in the comment management table 51b is set to the number of "Like" inputs received for the comment.
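  • As an illustration, the two tables could be modeled with record shapes like the following TypeScript sketch; the field names are assumptions for illustration, not the actual schema of the disclosure (Vec3 is the vector type from the earlier sketch).

```typescript
// Hypothetical record shapes mirroring the user management table 51a and the
// comment management table 51b; field names are illustrative, not the actual
// schema of the disclosure.
interface UserRecord {
  userId: string;      // "user ID" identifying an individual user
  nickname: string;
  avatarInfo: string;  // information related to the user's avatar
}

interface CommentRecord {
  movieId: string;     // "moving image ID" identifying an individual video
  commentId: string;   // "comment ID" identifying an individual comment
  userId: string;      // user who input the comment
  body: string;
  position: Vec3;      // arrangement position (coordinate on the virtual sphere)
  placedAt: number;    // arrangement time, in milliseconds
  eraseAt: number;     // deletion time, in milliseconds
  likeCount: number;   // number of "Like" inputs received for the comment
}
```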
  • the moving image distribution control unit 52 in one embodiment executes various controls related to moving image distribution.
  • For example, the moving image distribution control unit 52 converts a real-time 360-degree moving image received from the moving image providing device 20 or the like into a streaming format and distributes it to the terminal device 30, or distributes to the terminal device 30 a real-time 360-degree moving image received from the moving image providing device 20 or the like already in a streaming format.
  • the moving image distributed by the moving image distribution control unit 52 may include a three-dimensional moving image configured to be viewed stereoscopically by the user.
  • the virtual space management unit 53 executes various processes related to management of a virtual space associated with the entire field of view of the moving image. For example, when the user inputs a comment, the virtual space management unit 53 specifies a position in the virtual space included in the user's field of view, and places the comment at the specified position.
  • the virtual space and the user's field of view associated with the entire field of view of the 360-degree moving image in one embodiment will be described with reference to FIG.
  • As shown in FIG. 5, the 360-degree moving image is configured as a moving image whose entire field of view lies on the inner surface (all or part) of a virtual sphere S, and the user's field of view V is specified based on the direction of the line of sight of a user located at the center C of the sphere S and a predetermined viewing angle θ. That is, in the 360-degree moving image, when the direction of the line of sight of the user located at the center C of the virtual sphere S is specified, the portion of the moving image included in the field of view based on that direction is displayed. Although the field of view V is drawn as a curve in FIG. 5, it is a partial region of the inner surface of the sphere S, as illustrated in FIG. 6. The virtual space in one embodiment is configured as the inner surface of the sphere S, that is, it is associated with the entire field of view of the 360-degree moving image.
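  • Continuing the earlier sketch, this geometry can be made concrete as follows: a line of sight given as yaw/pitch angles becomes a unit vector from the center C, and a point on the inner surface of the sphere is in the field of view when its angle from the gaze vector is at most θ. The function name gazeVector and the sample values are illustrative assumptions.

```typescript
// A line of sight given as yaw/pitch angles becomes a unit vector from the
// center C; a point P on the inner surface of the sphere S is in the field
// of view when the angle between the gaze vector and P is at most θ.
function gazeVector(yaw: number, pitch: number): Vec3 {
  return {
    x: Math.cos(pitch) * Math.sin(yaw),
    y: Math.sin(pitch),
    z: Math.cos(pitch) * Math.cos(yaw),
  };
}

// Example: with a viewing angle θ of 45 degrees, a point straight ahead is
// inside the field of view and a point behind the viewer is not.
const gaze = gazeVector(0, 0);  // looking along +z
const theta = Math.PI / 4;
console.log(Math.acos(dot(gaze, { x: 0, y: 0, z: 1 })) <= theta);   // true
console.log(Math.acos(dot(gaze, { x: 0, y: 0, z: -1 })) <= theta);  // false
```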
  • the virtual space management unit 53 in one embodiment transmits various pieces of information regarding the virtual space to the terminal device 30.
  • the virtual space management unit 53 can transmit information related to the comment object based on the comments arranged in the virtual space to the terminal device 30.
  • The virtual space management unit 53 also accepts the input of "Like" for a comment. For example, when the state in which a comment object is displayed at the user's gaze point has continued for a predetermined effective time (for example, 10 seconds), the virtual space management unit 53 accepts the input of "Like" for that comment via the terminal device 30.
  • the terminal device 30 includes a reproduction control unit 55 that controls reproduction of a moving image, and an input management unit 56 that manages input by a user.
  • These functions are realized by the cooperation of hardware such as the CPU 31 and the main memory 32 with the various programs and tables stored in the storage 35; for example, they are realized by the CPU 31 executing instructions included in a loaded program.
  • part or all of the functions of the terminal device 30 illustrated in FIG. 2 can be realized by the cooperation of the server 10 and the terminal device 30, or can be realized by the server 10.
  • the playback control unit 55 executes various controls related to playback of moving images.
  • the reproduction control unit 55 displays the 360 degree moving image received from the server 10 on the terminal device 30 with a field of view specified by the user.
  • For example, the playback control unit 55 identifies the direction of the user's line of sight according to an operation by the user that changes the posture, tilt, or orientation of the terminal device 30, or a flick/drag operation on the screen, and displays, out of the moving image covering the entire field of view, the portion included in the field of view determined based on that direction.
  • the reproduction control unit 55 displays information related to the virtual space on the terminal device 30 based on various information related to the virtual space received from the server 10. For example, the reproduction control unit 55 displays information related to the comment on the terminal device 30 based on information related to the comment arranged in the virtual space. In addition, the reproduction control unit 55 outputs a sound based on various information related to the virtual space, for example, outputs a sound corresponding to the comment being placed in the virtual space.
  • The input management unit 56 in one embodiment executes various processes related to the management of input by the user. For example, when the input management unit 56 detects that the state in which a comment is displayed at the user's gaze point has continued for a predetermined effective time, it recognizes the input of "Like" for that comment and transmits information indicating the "Like" input to the server 10.
  • a user who uses the video distribution service in one embodiment can select a desired video from a plurality of videos provided in the video distribution service via the terminal device 30 and reproduce it on the terminal device 30.
  • the server 10 that has received the video distribution request from the terminal device 30 sends the 360-degree video received in real time from the video providing device 20 or the like to the terminal device 30 in a streaming format.
  • In one embodiment, two video playback screens having different screen configurations are provided as screens for playing back 360-degree video, and the user can select either of the screens to play the video.
  • FIG. 7 is an example of a first video playback screen 60 that is one of the video playback screens.
  • The first moving image playback screen 60 in one embodiment includes a display area 61 that displays the 360-degree moving image with a specific field of view. The display area 61 displays the portion of the moving image included in the field of view specified by the user out of the entire field of view of the 360-degree moving image. For example, when the user's field of view (the direction of the line of sight) changes according to an operation by the user that changes the posture, tilt, or orientation of the terminal device 30, or a flick/drag operation on the display area 61, the moving image of the portion included in the changed field of view is displayed in the display area 61. The display area 61 of the first moving image playback screen 60 is also configured to display comments (comment objects) arranged in the portion of the virtual space included in the user's field of view, superimposed on the moving image of that portion. Details will be described later.
  • The first moving image playback screen 60 is suitable for viewing the portion of the 360-degree moving image included in a specific field of view using, for example, VR (Virtual Reality) glasses or a VR headset in which a smartphone or the like is mounted, and it can be configured to be divided into a screen for the right eye and a screen for the left eye. Since a user wearing VR glasses or a VR headset is considered unable to perform flick/drag operations on the screen, such a user changes the field of view by changing the posture, tilt, orientation, and the like of the terminal device 30 (the VR glasses, the smartphone, etc.).
  • FIG. 8 is an example of a second video playback screen 70 that is one of a plurality of video playback screens.
  • The second moving image playback screen 70 in one embodiment is configured as a space including a virtual stage, and a display area 71 that displays the 360-degree moving image with a specific field of view is placed on the virtual stage. Like the display area 61 of the first moving image playback screen 60, the display area 71 displays the portion of the moving image included in the field of view specified by the user out of the entire field of view of the 360-degree moving image. When the user's field of view changes in response to an operation that changes the posture, tilt, or orientation of the terminal device 30, or a flick/drag operation on the display area 71, the moving image of the portion included in the changed field of view is displayed in the display area 71. In addition, the avatars 110 of the users who are viewing (playing back) the same video are arranged in an avatar display area 76 corresponding to the space in front of the virtual stage.
  • the second moving image playback screen 70 has a comment input area 72 and a comment transmission button 74 displayed as “Send” at the bottom of the screen.
  • When the user inputs a desired character string or the like as a comment in the comment input area 72 and selects the comment transmission button 74, the input comment is transmitted to the server 10.
  • a comment placement process executed by the server 10 in response to receiving a comment will be described.
  • FIG. 9 is a flowchart showing an example of comment placement processing in an embodiment.
  • First, the position on the virtual space where the input comment is to be placed is specified (step S110). In one embodiment, this position is specified as a position on the virtual space included in the field of view, at the time the comment was input, of the user who input the comment.
  • Information regarding the visual field of the user who input the comment can be received from the terminal device 30 together with the input comment, for example.
  • FIG. 10A is a diagram for describing a user's gaze point FP and a gaze area FR according to an embodiment.
  • The center of the user's field of view V (the intersection of the direction of the user's line of sight with the virtual space) is defined as the gaze point FP at which the user is gazing, and a circular area centered on the gaze point FP and having a predetermined radius is defined as the gaze area FR. The position where the comment is placed is specified by determining a direction in which to move away from the gaze point FP and then moving from the gaze point FP in that direction by a distance corresponding to the radius of the gaze area FR. The direction of movement away from the gaze point FP is determined at random, for example.
  • For example, as illustrated in FIG. 10B, suppose the user is looking at a building that appears in the moving image displayed in the display area 71 (field of view V) of the second moving image playback screen 70 (that is, the building is displayed at the gaze point FP). A position outside the range of the gaze area FR is a position moved in a direction away from the building, and is therefore unlikely to overlap the building.
  • When the position where the comment is to be placed has been specified in this way, the comment is next placed at the specified position in the virtual space (step S120), and this comment placement process ends. Specifically, information related to the comment is registered in the comment management table 51b. At this time, a time obtained by adding a predetermined placement duration (for example, 30 seconds) to the current time is set as the comment's deletion time.
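  • A minimal sketch of steps S110-S120, continuing the earlier TypeScript sketches: a random tangent direction away from FP is chosen, the placement point is moved along the surface of the unit sphere by an arc length equal to the FR radius, and the comment is registered with a deletion time 30 seconds ahead. The helper names and the concrete FR radius are assumptions.

```typescript
// Steps S110-S120 on a unit sphere: pick a random tangent direction at the
// gaze point FP and move along the surface by an arc length equal to the
// radius of the gaze area FR, then register the comment with a deletion
// time 30 seconds ahead. Helper names and the FR radius are illustrative.
const cross = (a: Vec3, b: Vec3): Vec3 => ({
  x: a.y * b.z - a.z * b.y,
  y: a.z * b.x - a.x * b.z,
  z: a.x * b.y - a.y * b.x,
});
const scale = (v: Vec3, s: number): Vec3 => ({ x: v.x * s, y: v.y * s, z: v.z * s });
const addV = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const normalize = (v: Vec3): Vec3 => scale(v, 1 / Math.hypot(v.x, v.y, v.z));

// Step S110: specify the placement position from the commenting user's FP.
function placementPosition(fp: Vec3, frRadius: number): Vec3 {
  const ref: Vec3 = Math.abs(fp.y) < 0.99 ? { x: 0, y: 1, z: 0 } : { x: 1, y: 0, z: 0 };
  const t1 = normalize(cross(fp, ref));     // tangent axis at FP
  const t2 = cross(fp, t1);                 // second tangent axis
  const phi = Math.random() * 2 * Math.PI;  // random direction away from FP
  const dir = addV(scale(t1, Math.cos(phi)), scale(t2, Math.sin(phi)));
  // Great-circle step: cos(d)·FP + sin(d)·dir lies at arc distance d from FP.
  return addV(scale(fp, Math.cos(frRadius)), scale(dir, Math.sin(frRadius)));
}

// Step S120: register the comment in the comment management table 51b.
function placeComment(
  table: CommentRecord[],
  rec: Omit<CommentRecord, 'position' | 'placedAt' | 'eraseAt'>,
  fp: Vec3,
): void {
  const now = Date.now();
  table.push({
    ...rec,
    position: placementPosition(fp, 0.3),  // FR radius of ~0.3 rad (assumed)
    placedAt: now,
    eraseAt: now + 30_000,                 // placement duration: 30 seconds
  });
}
```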
  • FIG. 11A illustrates the first moving image playback screen 60 displayed on the terminal device 30 of a user whose field of view includes the position in the virtual space where the comment is placed.
  • the comment object 114 is displayed so as to overlap the moving image at a position on the virtual space where the comment is arranged.
  • FIG. 11B illustrates details of the comment object 114.
  • the comment object 114 includes a user's avatar 110 that has input a comment, and a balloon object 112.
  • The balloon object 112 displays the content of the comment ("Excellent!" in the example of FIG. 11B), the nickname of the user who input the comment ("by XXX" in the example of FIG. 11B), and the number of Likes input for the comment.
  • As described above, the comment is placed at a position away from the gaze point FP of the user who input it; accordingly, the corresponding comment object 114 is displayed at a position away from the building that the user was looking at.
  • FIG. 12 illustrates the first moving image playback screen 60 in which a plurality of comment objects 114, based on comments input by each of a plurality of users, are displayed in the display area 61. Because the direction away from the gaze point FP is specified for each comment individually (for example, at random) when the placement position is determined, the plurality of comment objects 114 are prevented from being displayed on top of one another at positions moved in the same direction from the gaze point FP.
  • FIG. 13 illustrates the second moving image playback screen 70 displayed on the terminal device 30 of a user whose field of view includes the position where the comment is placed. As described above, the avatars 110 of the users viewing the same video are displayed in the avatar display area 76, and the balloon object 112 described above is attached to the avatar 110 of the user who input the comment placed in the virtual space.
  • In this way, a comment placed in the virtual space is displayed on the terminal device 30 of each user whose field of view includes the placement position: on the first moving image playback screen 60, the comment object 114 is displayed in the display area 61, superimposed on the moving image at the placement position in the virtual space, while on the second moving image playback screen 70, the balloon object 112 is attached to the avatar 110 of the commenting user in the avatar display area 76.
  • In one embodiment, when a comment is placed, each terminal device 30 of the plurality of users viewing the same video outputs a sound effect based on the positional relationship between the position of the comment and the user's field of view (gaze point).
  • the sound effect output from the terminal device 30 may be configured such that the sound volume increases as the user's field of view (gaze point) is closer to the position where the comment is placed.
  • FIG. 14 illustrates the volume of the sound effect set based on the positional relationship between the position where the comment is placed and the user's field of view.
  • For example, the sound effect S1 output from a user's terminal device 30 when a comment C1 is placed at a position included in the user's field of view V is louder than the sound effects S2 and S3 output when comments C2 and C3 are placed at positions not included in the field of view V. Further, the sound effect S2 output when the comment C2 is placed is louder than the sound effect S3 output when the comment C3, whose placement position is farther from the user's field of view V than that of the comment C2, is placed. The distance between the position where a comment is placed and the user's field of view can be evaluated as the angle, measured at the center of the sphere S, between the direction of the comment's placement position and the direction of the user's line of sight (θ1 and θ2 in FIG. 14).
  • Configuring the volume so that it increases as the position where the comment is placed comes closer to the user's field of view (gaze point) allows the user to feel a sense of realism.
  • In one embodiment, the sound effect is output as a sound coming from the direction (for example, from the right or from the left) in which the comment is placed relative to the user's field of view (gaze point). For example, the sound effect S2 corresponding to the placement of the comment C2, which is placed to the left of the field of view, is output as a sound audible from the left, and the sound effect S3 corresponding to the placement of the comment C3, which is placed to the right of the user's field of view, is output as a sound audible from the right.
  • In this way, when a comment is placed, a sound effect is output from each terminal device 30 of the plurality of users viewing the same video, so a user can notice the input of comments, including comments placed at positions not included in his or her field of view.
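  • On the client, such a sound effect could be realized, for example, with the standard Web Audio API as in the sketch below; the linear mapping from angle to volume and the panning rule are assumed examples, not values specified by the disclosure.

```typescript
// One possible client-side realization with the Web Audio API: volume falls
// off with the angle between the gaze direction and the comment's direction
// (θ1, θ2 in FIG. 14), and the sound is panned toward the side on which the
// comment lies. The linear angle-to-gain mapping is an assumed example.
function playPlacementSound(
  ctx: AudioContext,
  buffer: AudioBuffer,
  gazeDir: Vec3,
  commentPos: Vec3,
): void {
  const angle = Math.acos(clamp(dot(gazeDir, commentPos), -1, 1)); // 0 (ahead) .. π (behind)
  const gain = ctx.createGain();
  gain.gain.value = 1 - angle / Math.PI;  // closer to the field of view => louder

  const panner = ctx.createStereoPanner();
  // The y-component of gaze × position distinguishes the two sides of the
  // line of sight; its sign pans the sound left or right (convention-dependent).
  const side = gazeDir.z * commentPos.x - gazeDir.x * commentPos.z;
  panner.pan.value = clamp(side, -1, 1);

  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(gain).connect(panner).connect(ctx.destination);
  src.start();
}
```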
  • A comment placed in the virtual space is deleted (removed from the virtual space) when the deletion time set for each comment is reached. In accordance with the deletion of the comment, the comment object 114 displayed over the moving image in the display area 61 of the first moving image playback screen 60 is also erased (is no longer displayed). The balloon object 112 displayed in the avatar display area 76 of the second video playback screen 70 may be configured to be erased in response to the deletion of the comment, or may be erased at a different timing. That is, in one embodiment, the erasure of the comment object 114 on the first moving image playback screen 60 and the erasure of the balloon object 112 on the second moving image playback screen 70 can be controlled independently.
  • In one embodiment, a comment that has received a larger number of Likes is configured to have a longer time until it is deleted. Specifically, for example, every time the number of Likes reaches a predetermined number (for example, 10), a predetermined additional time (for example, 10 seconds) is added to the deletion time.
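  • A minimal sketch of this extension rule, using the CommentRecord shape from the earlier sketch; the multiples-of-ten check is one way to read "every time the number of Likes reaches a predetermined number".

```typescript
// Extension rule: every time the Like count reaches another multiple of a
// predetermined number (10 here), a predetermined additional time (10 s
// here) is added to the deletion time.
function onLikeAccepted(rec: CommentRecord): void {
  rec.likeCount += 1;
  if (rec.likeCount % 10 === 0) {
    rec.eraseAt += 10_000;  // extend the deletion time by 10 seconds
  }
}
```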
  • FIG. 16 exemplifies the change in the display of the comment object 114 while the state in which the comment object 114 is positioned at the gaze point continues until the effective time is reached. When this state has continued for a certain time (for example, 3 seconds), the background color of the balloon object 112 changes and a progress gauge 113 is added (i). The display of the progress gauge 113 then changes with the passage of time, and when the effective time is reached (ii), the progress gauge 113 is erased, the background color of the balloon object 112 is restored, the input of "Like" is accepted, and the display of the number of Likes is updated (iii).
  • Upon accepting a "Like" via the terminal device 30, the server 10 updates the number of Likes in the comment management table 51b. Further, as described above, the deletion time can be updated (extended) as the number of Likes increases.
  • Likewise, when the state in which the balloon object 112 is displayed at the user's gaze point continues for the effective time, the input of "Like" for the comment corresponding to that balloon object 112 is accepted. The operation of accepting the "Like" input and updating the display of the number of Likes is the same as on the first moving image playback screen 60 described above.
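  • The gaze-dwell detection could be sketched on the client as follows; the class name, the frame-driven update interface, and the timer handling are assumptions for illustration.

```typescript
// Client-side sketch of the gaze-dwell "Like": if a comment object stays at
// the gaze point for the full effective time (10 s), a Like is sent once; a
// progress gauge can be driven from the elapsed time after an initial delay
// (3 s). The frame-driven update interface is an assumed design.
class DwellDetector {
  private currentId: string | null = null;
  private start = 0;

  constructor(
    private readonly sendLike: (commentId: string) => void,
    private readonly effectiveMs = 10_000,
    private readonly gaugeDelayMs = 3_000,
  ) {}

  // Call every frame with the id of the comment at the gaze point (or null).
  update(commentId: string | null, now: number): void {
    if (commentId !== this.currentId) {  // gaze moved to a different target
      this.currentId = commentId;
      this.start = now;
    }
    if (this.currentId === null) return;
    const elapsed = now - this.start;
    if (elapsed >= this.gaugeDelayMs) {
      // e.g. show progress gauge 113 with fraction (elapsed / effectiveMs)
    }
    if (elapsed >= this.effectiveMs) {
      this.sendLike(this.currentId);  // accept the "Like" input once
      this.currentId = null;          // then reset until the gaze moves again
    }
  }
}
```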
  • In one embodiment, a 360-degree moving image distributed in a streaming format in real time is stored in the information storage unit 51 of the server 10, and the stored moving image can be configured to be playable later in response to a request from the terminal device 30. In this case, the comments input at the time of the live streaming distribution are placed in the virtual space according to the information managed in the comment management table 51b. Specifically, each corresponding comment is placed in the virtual space in accordance with its arrangement time and arrangement position in the comment management table 51b, and is then deleted in accordance with its deletion time.
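  • This reproduction amounts to filtering the comment management table by playback time, as in the sketch below; treating placedAt and eraseAt as offsets from the start of the stream is an assumed convention.

```typescript
// Reproducing live comments during later playback: at playback time t, the
// comments whose [placedAt, eraseAt) interval covers t are the ones placed
// in the virtual space. Treating both times as offsets from the start of
// the stream is an assumed convention.
function activeComments(all: CommentRecord[], t: number): CommentRecord[] {
  return all.filter((c) => c.placedAt <= t && t < c.eraseAt);
}
```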
  • In the embodiment described above, comments are input via the second video playback screen 70; however, it can also be configured so that comments can be input via the first video playback screen 60.
  • For example, the first moving image playback screen 160 in another embodiment, illustrated in FIG. 17, has a display area 61 similar to that of the first moving image playback screen 60 described above, and a comment input area 72 and a comment transmission button 74 similar to those of the second moving image playback screen 70 are arranged at the lower end of the display area 61.
  • comment input can also be realized by applying a voice input technique.
  • In the embodiments described above, a comment is exemplified as the input information that is input by a user and placed in the virtual space; however, the input information is not limited to comments. Various other information that can be input by the user, such as stamps and icons, can be included in the input information.
  • In the embodiments described above, the sound effect output according to the placement of a comment is configured so that its volume increases as the user's field of view and the placement position come closer; instead of, or in addition to, the volume, the timbre, pitch, and the like of the sound effect may vary.
  • As described above, in the embodiments of the present invention, a specific moving image that is configured as a moving image having a wide-angle field of view and whose entire field of view is associated with a virtual space is displayed, in the respective field of view of each user, on each terminal device 30 of a plurality of users. When a comment is received from a user, a position in the virtual space included in that user's field of view is specified and the comment is placed there, and, according to the placement of the comment, the comment is displayed on the terminal devices 30 of the users whose fields of view include the placement position. A user on whose terminal device 30 the comment is displayed therefore has a field of view close to that of the user who input the comment, and is likely to be looking at the same target, which makes it easier to sympathize with the content of the comment. In this way, the embodiments of the present invention can appropriately display information such as comments input in a moving image whose field of view may differ among users.
  • The processes and procedures described in this specification can be realized by software, hardware, or any combination thereof, in addition to what is explicitly described for the embodiments. Specifically, the processes and procedures described in this specification can be realized by implementing logic corresponding to the processes in a medium such as an integrated circuit, a volatile memory, a nonvolatile memory, a magnetic disk, or an optical storage. Further, the processes and procedures described in this specification can be implemented as computer programs and executed by various computers.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The purpose of the present invention is to appropriately display information such as comments input in a moving image whose field of view may differ between users. A system according to one embodiment displays, on the terminal device of each of a plurality of users and in the respective field of view of each user, a moving image that is constructed as a moving image having a wide-angle field of view and whose total field of view is associated with a virtual space. When accepting a comment from a user, the system specifies a position in the virtual space included in that user's field of view, places the comment at the position, and, according to the placement of the comment, displays the comment on the terminal device of any user whose field of view includes the position at which the comment is placed.

Description

System, method, and program for displaying a moving image with a specific field of view
Cross-reference
This application claims priority based on Japanese Patent Application No. 2015-162720 (filed August 20, 2015), the contents of which are incorporated herein by reference in their entirety.
The present invention relates to a system, a method, and a program for displaying a moving image with a specific field of view.
Conventionally, in order to activate communication between users viewing the same video, systems have been proposed that display comments given by users together with the video (see, for example, Patent Document 1). In such a system, for example, a comment is displayed together with the moving image at the playback time at which the comment was given.
In recent years, moving images called "360-degree videos", shot with cameras that can capture substantially all directions at once, have come to be distributed. In such a video, the displayed direction (field of view) changes continuously in response to, for example, an operation that changes the orientation of the user terminal (turning or tilting it) or a flick/drag operation on the screen, so the user can view the video while changing the field of view. In such videos, too, it is desirable that information such as comments given by users can be shared in order to activate communication between users viewing the same video. It is therefore conceivable to display a comment at the playback time at which it was given, as in the conventional system described above.
Patent Document 1: Japanese Patent Laid-Open No. 2015-57896
However, in a video whose field of view can differ between users, such as a 360-degree video, even users viewing the same video may be looking at different targets depending on their fields of view. Therefore, for example, it is difficult for another user who is viewing the video with a field of view different from that of the user who gave a comment to sympathize with the content of the comment, because the two users are looking at different targets. And if users cannot sympathize with the content of a comment, the activation of communication is also limited. It is therefore desirable to appropriately display information such as comments in a moving image whose field of view may differ among users.
One object of the embodiments of the present invention is to appropriately display information such as comments input in a moving image whose field of view may differ among users. Other objects of the embodiments of the present invention will become apparent by referring to the specification as a whole.
A system according to an embodiment of the present invention is a system that displays a moving image in a specific field of view. The system includes one or more computer processors which, in response to executing readable instructions, execute: a step of displaying, on each terminal device of a plurality of users including a first user and a second user, a specific moving image that is configured as a moving image having a wide-angle field of view and whose entire field of view is associated with a virtual space, in the respective field of view of each of the plurality of users; a step of, when first input information is received from the first user, identifying a first position in the virtual space included in the field of view of the first user and arranging the first input information at the first position; and a step of displaying, in accordance with the arrangement of the first input information, the first input information on the terminal device of the second user whose field of view includes the first position.
In the system according to the embodiment described above, the specific moving image may be configured as a moving image having a field of view of 360 degrees in at least the horizontal direction, and the virtual space may be configured as the inner surface of a virtual sphere. Such a moving image may be referred to as a "360-degree moving image", and its field of view in the vertical direction is, for example, in the range of 180 to 360 degrees.
A method according to an embodiment of the present invention is a method, executed by one or more computers, for displaying a moving image in a specific field of view. The method comprises: a step of displaying, on each terminal device of a plurality of users including a first user and a second user, a specific moving image that is configured as a moving image having a wide-angle field of view and whose entire field of view is associated with a virtual space, in the respective field of view of each of the plurality of users; a step of, when first input information is received from the first user, identifying a first position in the virtual space included in the field of view of the first user and arranging the first input information at the first position; and a step of displaying, in accordance with the arrangement of the first input information, the first input information on the terminal device of the second user whose field of view includes the first position.
A program according to an embodiment of the present invention is a program for displaying a moving image in a specific field of view. In response to being executed on one or more computers, the program causes the one or more computers to execute: a step of displaying, on each terminal device of a plurality of users including a first user and a second user, a specific moving image that is configured as a moving image having a wide-angle field of view and whose entire field of view is associated with a virtual space, in the respective field of view of each of the plurality of users; a step of, when first input information is received from the first user, identifying a first position in the virtual space included in the field of view of the first user and arranging the first input information at the first position; and a step of displaying, in accordance with the arrangement of the first input information, the first input information on the terminal device of the second user whose field of view includes the first position.
Various embodiments of the present invention make it possible to appropriately display information such as comments input in a moving image whose field of view may differ among users.
FIG. 1 is a configuration diagram schematically showing the configuration of a network including a system 1 according to an embodiment of the present invention.
FIG. 2 is a block diagram schematically showing the functions of the system 1 (the server 10 and the terminal device 30) in one embodiment.
FIG. 3 is a diagram showing an example of the information managed in the user management table 51a in one embodiment.
FIG. 4 is a diagram showing an example of the information managed in the comment management table 51b in one embodiment.
FIG. 5 is a diagram for explaining the virtual space and a user's field of view in one embodiment.
FIG. 6 is a diagram for explaining a user's field of view in one embodiment.
FIG. 7 is a diagram showing an example of the first moving image playback screen 60 in one embodiment.
FIG. 8 is a diagram showing an example of the second moving image playback screen 70 in one embodiment.
FIG. 9 is a flowchart showing an example of the comment placement process in one embodiment.
FIG. 10 is a diagram for explaining a user's gaze point FP and gaze area FR in one embodiment.
FIG. 11 is a diagram showing an example of the first moving image playback screen 60 and the comment object 114 in one embodiment.
FIG. 12 is a diagram showing an example of the first moving image playback screen 60 in one embodiment.
FIG. 13 is a diagram showing an example of the second moving image playback screen 70 in one embodiment.
FIG. 14 is a diagram for explaining the sound output according to the placement of a comment in one embodiment.
FIG. 15 is a diagram showing an example of the first moving image playback screen 60 in one embodiment.
FIG. 16 is a diagram showing an example of the display of the comment object 114 in one embodiment.
FIG. 17 is a diagram showing an example of the first moving image playback screen 160 in another embodiment.
Hereinafter, various embodiments of the present invention will be described with reference to the drawings as appropriate. In the drawings, common components are given the same reference numerals.
FIG. 1 is a configuration diagram schematically showing the configuration of a network including a system 1 according to an embodiment of the present invention. As illustrated, the system 1 in one embodiment includes a server 10 and a plurality of terminal devices 30 communicably connected to the server 10 via a communication network 40 such as the Internet. The server 10 provides a moving image distribution service that distributes various moving images to the terminal devices 30. The moving images distributed in the moving image distribution service in one embodiment include real-time moving images (live videos) provided by a moving image providing apparatus 20.
The server 10 in one embodiment is configured as a general computer and, as illustrated, includes a CPU (computer processor) 11, a main memory 12, a user I/F 13, a communication I/F 14, and a storage (storage device) 15, and these components are electrically connected to one another via a bus. The CPU 11 loads the operating system and various other programs from the storage 15 into the main memory 12 and executes the instructions included in the loaded programs. The main memory 12 is used to store the programs executed by the CPU 11 and is configured by, for example, a DRAM. The server 10 in one embodiment may be configured using a plurality of computers each having the hardware configuration described above.
The user I/F 13 includes, for example, an information input device such as a keyboard and a mouse that accepts an operator's input, and an information output device such as a liquid crystal display that outputs the computation results of the CPU 11. The communication I/F 14 is implemented as hardware, firmware, communication software such as a TCP/IP driver or a PPP driver, or a combination thereof, and is configured to be able to communicate with the moving image providing apparatus 20 and the terminal devices 30 via the communication network 40.
The storage 15 is implemented by, for example, a magnetic disk drive and stores various programs, such as a control program for providing the video distribution service. The storage 15 may also store various data for providing the video distribution service. The various data that may be stored in the storage 15 may instead be stored in a database server or the like that is physically separate from the server 10 and communicably connected to it.
In one embodiment, the server 10 also functions as a web server that manages a website composed of a plurality of hierarchically structured web pages, and may provide the video distribution service to users of the terminal devices 30 through such a website. The storage 15 may also store the HTML data corresponding to these web pages. The HTML data is associated with various image data, and various programs written in a scripting language such as JavaScript (registered trademark) may be embedded in it.
Also, in one embodiment, the server 10 may provide the video distribution service via an application executed on the terminal device 30 in an execution environment other than a web browser. Such applications may also be stored in the storage 15. These applications are created using, for example, a programming language such as Objective-C or Java (registered trademark). An application stored in the storage 15 is delivered to a terminal device 30 in response to a delivery request. Note that the terminal device 30 may also download such an application from a server other than the server 10 (such as a server providing an application market).
In this way, the server 10 manages a website for providing the video distribution service and can deliver the web pages (HTML data) constituting the website in response to requests from the terminal devices 30. As described above, instead of, or in addition to, providing the video distribution service through such web pages (a web browser), the server 10 can provide the video distribution service based on communication with an application executed on the terminal device 30. Whichever form the service takes, the server 10 can exchange with the terminal devices 30 the various data needed to provide the video distribution service (including the data needed for screen display). The server 10 can also store various data for each piece of identification information (for example, a user ID) identifying a user, and manage the provision status of the video distribution service for each user. Although a detailed description is omitted, the server 10 may also have functions for user authentication, billing, and the like.
The video providing device 20 in one embodiment is configured as a general-purpose computer and, as shown in FIG. 1, includes a CPU (computer processor) 21, a main memory 22, a user I/F 23, a communication I/F 24, a storage (storage device) 25, and an ultra-wide-angle camera 26, each of which is electrically connected to the others via a bus.
The CPU 21 loads an operating system and various other programs from the storage 25 into the main memory 22 and executes the instructions contained in the loaded programs. The main memory 22 is used to store the programs executed by the CPU 21 and is implemented by, for example, DRAM.
The user I/F 23 includes, for example, an information input device that accepts operator input and an information output device that outputs the computation results of the CPU 21. The communication I/F 24 is implemented as hardware, firmware, communication software such as a TCP/IP driver or PPP driver, or a combination thereof, and is configured to communicate with the server 10 and the terminal devices 30 via the communication network 40.
The ultra-wide-angle camera 26 has a built-in microphone and is configured to capture ultra-wide-angle video through an ultra-wide-angle lens or a plurality of lenses. In one embodiment, the ultra-wide-angle camera 26 is configured as a 360-degree camera having a 360-degree field of view in the horizontal direction and a field of view in the range of 180 to 360 degrees in the vertical direction, and the video providing device 20 in one embodiment is configured to transmit to the server 10, in real time, 360-degree video having a substantially omnidirectional field of view captured through the ultra-wide-angle camera 26.
The terminal device 30 in one embodiment is any information processing device that displays web pages of the website provided by the server 10 in a web browser and implements an execution environment for running applications; examples include smartphones, tablet terminals, wearable devices (for example, head-mounted displays), personal computers, and dedicated game terminals.
The terminal device 30 is configured as a general-purpose computer and, as shown in FIG. 1, includes a CPU (computer processor) 31, a main memory 32, a user I/F 33, a communication I/F 34, a storage (storage device) 35, and various sensors 36, each of which is electrically connected to the others via a bus.
The CPU 31 loads an operating system and various other programs from the storage 35 into the main memory 32 and executes the instructions contained in the loaded programs. The main memory 32 is used to store the programs executed by the CPU 31 and is implemented by, for example, DRAM.
The user I/F 33 includes, for example, an information input device such as a touch panel, keyboard, buttons, or mouse that accepts user input, and an information output device such as a liquid crystal display that outputs the computation results of the CPU 31. The communication I/F 34 is implemented as hardware, firmware, communication software such as a TCP/IP driver or PPP driver, or a combination thereof, and is configured to communicate with the server 10 and the video providing device 20 via the communication network 40.
The storage 35 is implemented by, for example, a magnetic disk drive or flash memory, and stores various programs such as an operating system. The storage 35 may also store various applications received from the server 10 or elsewhere.
The various sensors 36 include, for example, an acceleration sensor, a gyro sensor (angular velocity sensor), and a geomagnetic sensor. Based on the information detected by these sensors, the terminal device 30 can determine its own attitude, tilt, orientation, and the like.
The terminal device 30 includes, for example, a web browser for interpreting HTML-format files (HTML data) and displaying them on screen; using this web browser function, it can interpret HTML data acquired from the server 10 and display the web page corresponding to the received HTML data. Plug-in software capable of executing files in various formats associated with the HTML data may also be incorporated into the web browser of the terminal device 30.
When a user of the terminal device 30 uses the video distribution service provided by the server 10, for example, animations and operation icons specified by the HTML data or the application are displayed on the screen of the terminal device 30. The user can enter various instructions using the touch panel or the like of the terminal device 30. Instructions entered by the user are conveyed to the server 10 through the functions of an application execution environment, such as the web browser of the terminal device 30 or NgCore (trademark).
Next, the functions of the system 1 (the server 10 and the terminal devices 30) in one embodiment configured as described above will be described. In the video distribution service in one embodiment, videos in various formats may be delivered to the terminal devices 30, but the description here focuses mainly on the function of delivering real-time 360-degree video, provided by the video providing device 20 or the like, to the terminal devices 30.
FIG. 2 is a block diagram schematically showing the functions of the server 10 and the terminal device 30 in one embodiment. As illustrated, the server 10 in one embodiment includes an information storage unit 51 that stores information, a video delivery control unit 52 that controls the delivery of videos, and a virtual space management unit 53 that manages the virtual space associated with the entire field of view of a video. These functions are realized by hardware such as the CPU 11 and the main memory 12 operating in cooperation with the various programs, tables, and the like stored in the storage 15; for example, they are realized by the CPU 11 executing the instructions contained in a loaded program. Some or all of the functions of the server 10 illustrated in FIG. 2 may be realized by the server 10 and the terminal device 30 cooperating, or by the terminal device 30 alone.
The information storage unit 51 in one embodiment is realized by the storage 15 or the like and, as shown in FIG. 2, has a user management table 51a that manages information about the users of the video distribution service, and a comment management table 51b that manages information about the comments (input information) entered by users.
FIG. 3 shows an example of the information managed in the user management table 51a in one embodiment. As illustrated, the user management table 51a manages information such as the user's "nickname" and "avatar information" (information about the user's avatar) in association with a "user ID" identifying the individual user. This information is provided by the user at, for example, the time of new user registration for the video distribution service, and may be updated as appropriate thereafter.
FIG. 4 shows an example of the information managed in the comment management table 51b in one embodiment. As illustrated, the comment management table 51b manages, in association with a combination of a "video ID" identifying an individual video and a "comment ID" identifying an individual comment, information such as the "input user ID" identifying the user who entered the comment, the "comment content" indicating the content of the comment, the "placement position" indicating the position in the virtual space where the comment is placed, the "placement time" indicating the time at which the comment was placed, the "erase time" indicating the time at which the comment is to be erased (its placement released), and the "Like count" indicating the number of "Likes" entered for the comment. In this way, the comment management table 51b in one embodiment manages comment information for each delivered video.
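By way of illustration only, a record in the comment management table 51b could be modeled roughly as follows (a minimal sketch in Python; the field names and types are assumptions for illustration, not the actual schema of the table):

from dataclasses import dataclass

@dataclass
class CommentRecord:
    video_id: str        # "video ID" identifying the delivered video
    comment_id: str      # "comment ID" identifying the individual comment
    input_user_id: str   # "input user ID" of the user who entered the comment
    content: str         # "comment content"
    position: tuple      # "placement position" (coordinates) in the virtual space
    placed_at: float     # "placement time" (epoch seconds)
    erase_at: float      # "erase time" at which the placement is released
    like_count: int = 0  # "Like count" entered for this comment

Records would then be looked up by the (video_id, comment_id) combination, mirroring the table's composite key.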
Here, the entire field of view of the 360-degree video in one embodiment is associated with a virtual space configured as the inner surface of a virtual sphere, and the placement position in the comment management table 51b is set to a value specifying a position (coordinates) in this virtual space.
Also, in the video distribution service in one embodiment, a user can enter a "Like" to express support (a favorable reaction) for a comment entered by another user who is viewing (playing) the same video, and the Like count in the comment management table 51b is set to the number of "Like" inputs received for the comment. The information managed in the comment management table 51b is updated as appropriate in response to users entering comments, entering "Likes" for comments, and so on.
The video delivery control unit 52 in one embodiment performs various controls related to video delivery. For example, the video delivery control unit 52 converts real-time 360-degree video received from the video providing device 20 or the like into a streaming format and delivers it to the terminal devices 30, or delivers real-time streaming-format 360-degree video received from the video providing device 20 or the like to the terminal devices 30. In one embodiment, the videos delivered by the video delivery control unit 52 may include three-dimensional videos configured to appear stereoscopic to the user.
The virtual space management unit 53 in one embodiment performs various processes related to managing the virtual space associated with the entire field of view of a video. For example, when a user enters a comment, the virtual space management unit 53 identifies a position in the virtual space included in that user's field of view and places the comment at the identified position. The virtual space associated with the entire field of view of the 360-degree video in one embodiment, and the user's field of view, are described here with reference to FIG. 5.
As shown in FIG. 5, the 360-degree video is configured as a video whose entire field of view lies on (all or part of) the inner surface of a virtual sphere S; once the gaze direction A of a user located at the center C of the sphere S is determined, that user's field of view V is determined based on a predetermined viewing angle θ. That is, in the 360-degree video, once the gaze direction of the user located at the center C of the virtual sphere S is determined, the portion of the video included in the field of view based on that gaze direction is displayed. Although the field of view V is drawn as a curve in FIG. 5, the field of view V is actually a region of the inner surface of the sphere S, as shown in FIG. 6. The virtual space in one embodiment is configured as the inner surface of this sphere S; that is, it is associated with the entire field of view of the 360-degree video.
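By way of illustration, determining whether a given direction on the sphere S falls within the field of view V could be sketched as follows (a minimal sketch in Python under the simplifying assumption that the field of view is treated as a circular cap of angular radius θ/2 around the gaze direction; actual viewports are rectangular, and the function name is hypothetical):

import math

def in_field_of_view(gaze_dir, point_dir, viewing_angle_deg):
    # Angle between the gaze direction A and the direction of the point,
    # both given as 3D vectors from the center C of the sphere S.
    dot = sum(g * p for g, p in zip(gaze_dir, point_dir))
    norm = (math.sqrt(sum(g * g for g in gaze_dir))
            * math.sqrt(sum(p * p for p in point_dir)))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    # Inside the field of view V if within half the viewing angle theta.
    return angle <= viewing_angle_deg / 2.0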
The virtual space management unit 53 in one embodiment transmits various information about the virtual space to the terminal devices 30. For example, the virtual space management unit 53 may transmit to the terminal devices 30 information about the comment objects based on the comments placed in the virtual space.
The virtual space management unit 53 also accepts "Like" inputs for comments. For example, when the state in which a comment object is displayed at the user's gaze point has continued for a predetermined effective time (for example, 10 seconds), the virtual space management unit 53 accepts, via the terminal device 30, a "Like" input for the comment corresponding to that comment object.
As shown in FIG. 2, the terminal device 30 in one embodiment includes a playback control unit 55 that controls video playback and an input management unit 56 that manages user input. These functions are realized by hardware such as the CPU 31 and the main memory 32 operating in cooperation with the various programs, tables, and the like stored in the storage 35; for example, they are realized by the CPU 31 executing the instructions contained in a loaded program. Some or all of the functions of the terminal device 30 illustrated in FIG. 2 may be realized by the server 10 and the terminal device 30 cooperating, or by the server 10 alone.
The playback control unit 55 in one embodiment performs various controls related to video playback. For example, the playback control unit 55 displays the 360-degree video received from the server 10 on the terminal device 30 with the field of view specified by the user. For example, the playback control unit 55 determines the direction of the user's gaze in response to the user's operations to change the attitude, tilt, orientation, and the like of the terminal device 30, or in response to flick/drag operations on the screen, and displays, out of the entire field of view of the 360-degree video, the portion of the video included in the field of view determined from the direction of the user's gaze.
The playback control unit 55 also displays information about the virtual space on the terminal device 30 based on the various information about the virtual space received from the server 10. For example, based on the information about a comment placed in the virtual space, the playback control unit 55 displays information about that comment on the terminal device 30. The playback control unit 55 also outputs sound based on the various information about the virtual space; for example, it outputs a sound in response to a comment being placed in the virtual space.
The input management unit 56 in one embodiment performs various processes related to managing user input. For example, when the input management unit 56 detects that the state in which a comment is displayed at the user's gaze point has continued for a predetermined effective time, it recognizes a "Like" input for that comment and transmits information indicating the "Like" input to the server 10.
Next, the operation of the system 1 in one embodiment having these functions will be described. A user of the video distribution service in one embodiment can, via the terminal device 30, select a desired video from among the plurality of videos offered by the video distribution service and play it on the terminal device 30. When the user selects a real-time 360-degree video, the server 10, having received the delivery request for that video from the terminal device 30, delivers the 360-degree video it receives in real time from the video providing device 20 or the like to the terminal device 30 in a streaming format. In one embodiment, two video playback screens having different screen layouts are provided as screens for playing 360-degree video, and the user can select either screen to play the video.
FIG. 7 is an example of the first video playback screen 60, one of the video playback screens. As illustrated, the entire first video playback screen 60 in one embodiment is configured as a display area 61 that displays the 360-degree video with a specific field of view. As described above, this display area 61 displays, out of the entire field of view of the 360-degree video, the portion of the video included in the field of view specified by the user. For example, when the user's field of view (gaze direction) changes in response to the user's operations to change the attitude, tilt, orientation, and the like of the terminal device 30, or in response to flick/drag operations on the display area 61, the portion of the video included in the changed field of view is displayed in the display area 61.
The display area 61 of the first video playback screen 60 is also configured to display the comments (comment objects) placed in the portion of the virtual space included in the user's field of view, superimposed on the portion of the video included in that field of view. Details are described later.
In one embodiment, the first video playback screen 60 can be configured as a screen for users who view the video included in a specific field of view of the 360-degree video stereoscopically, using, for example, VR (Virtual Reality) glasses or a VR headset in which a smartphone or the like is mounted; in this case, the first video playback screen 60 may be configured as separate screens for the right eye and the left eye. Since a user using VR glasses, a VR headset, or the like presumably cannot perform flick/drag operations on the screen, such a user changes the field of view by changing the attitude, tilt, orientation, and the like of the terminal device 30 (the VR glasses, smartphone, or the like).
FIG. 8 is an example of the second video playback screen 70, another of the video playback screens. As illustrated, the entire second video playback screen 70 in one embodiment is configured as a space containing a virtual stage, and a display area 71 that displays the 360-degree video with a specific field of view is placed on the virtual stage. Like the display area 61 of the first video playback screen 60, this display area 71 displays, out of the entire field of view of the 360-degree video, the portion of the video included in the field of view specified by the user; when the user's field of view changes in response to the user's operations to change the attitude, tilt, orientation, and the like of the terminal device 30, or in response to flick/drag operations on the display area 71, the portion of the video included in the changed field of view is displayed in the display area 71.
Also, on the second video playback screen 70, as illustrated, the avatars 110 of the users viewing (playing) the same video are placed in an avatar display area 76 corresponding to the space in front of the virtual stage.
Furthermore, as illustrated, the second video playback screen 70 has a comment input area 72 and a comment send button 74 labeled "Send" arranged at the bottom of the screen. When the user enters a desired character string or the like as a comment in the comment input area 72 and then selects the comment send button 74, the entered comment is transmitted to the server 10. The comment placement process executed by the server 10 in response to receiving a comment is described next.
FIG. 9 is a flowchart showing an example of the comment placement process in one embodiment. In the comment placement process, first, as illustrated, the position in the virtual space at which to place the entered comment is identified (step S110). In one embodiment, the position at which the comment is placed is identified so that it is a position in the virtual space included in the field of view that the user who entered the comment had at the time of entering it. Information about the field of view of the user who entered the comment (for example, the gaze direction) may be received from the terminal device 30 together with the entered comment, for example.
In various embodiments of the present invention, various criteria may be applied so that any position included in the user's field of view can be identified as the position at which to place a comment. In one embodiment, the position at which the comment is placed is identified so that it falls outside a gaze region containing the user's gaze point. FIG. 10A is a diagram illustrating a user's gaze point FP and gaze region FR in one embodiment. In one embodiment, as illustrated, the center of the user's field of view V (the intersection of the direction of the user's gaze and the virtual space) is defined as the gaze point FP at which the user is gazing, and a circular region centered on this gaze point FP with a radius of a predetermined length is defined as the gaze region FR. The position at which to place the comment is identified by, for example, determining a direction in which to move away from the gaze point FP and moving in that direction by a distance corresponding to the radius of the gaze region FR. The direction of movement away from the gaze point FP is determined, for example, at random. As shown in FIG. 10B, for example, when a user who is looking at a building appearing in the video displayed in the display area 71 (field of view V) of the second video playback screen 70 (that is, the building is displayed at the gaze point FP) enters a comment, a position outside the gaze region FR is a position moved in a direction away from the building and is therefore unlikely to overlap the building.
Once the position at which to place the comment has been identified in this way, the comment is then placed at the identified position in the virtual space (step S120), and the comment placement process ends. Specifically, information about the comment is registered in the comment management table 51b. Of the information registered in the comment management table 51b, the erase time is set to the current time plus a predetermined placement duration (for example, 30 seconds).
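By way of illustration, steps S110 and S120 could be sketched as follows (a minimal sketch in Python under simplifying assumptions: positions are expressed as (yaw, pitch) angles on the virtual sphere rather than the patent's actual coordinate representation, records are stored as plain dicts matching the fields of the CommentRecord sketch above, and the function and constant names are hypothetical):

import math
import random
import time

GAZE_REGION_RADIUS_DEG = 15.0  # assumed angular radius of the gaze region FR
PLACEMENT_DURATION_SEC = 30.0  # predetermined placement duration before erasure

def place_comment(comment_table, video_id, comment_id, user_id, content,
                  gaze_yaw_deg, gaze_pitch_deg):
    # Step S110: identify a position outside the gaze region FR by picking
    # a random direction away from the gaze point FP and moving by a
    # distance corresponding to the radius of the gaze region.
    direction = random.uniform(0.0, 2.0 * math.pi)
    yaw = gaze_yaw_deg + GAZE_REGION_RADIUS_DEG * math.cos(direction)
    pitch = gaze_pitch_deg + GAZE_REGION_RADIUS_DEG * math.sin(direction)
    # Step S120: register the comment in the comment management table,
    # setting the erase time to the current time plus the placement duration.
    now = time.time()
    comment_table[(video_id, comment_id)] = {
        "input_user_id": user_id,
        "content": content,
        "position": (yaw, pitch),
        "placed_at": now,
        "erase_at": now + PLACEMENT_DURATION_SEC,
        "like_count": 0,
    }

Randomizing the direction per comment is also what keeps comments entered by different users at around the same time from stacking at a single offset, as described below in connection with FIG. 12.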
The operation of the terminal device 30 when a comment is placed in the virtual space is described next, beginning with the display on the first video playback screen 60. As described above, the display area 61 of the first video playback screen 60 in one embodiment displays the 360-degree video with a specific field of view and is configured to display the comments (comment objects) placed in the portion of the virtual space included in that field of view, superimposed on the video. FIG. 11A illustrates the first video playback screen 60 displayed on the terminal device 30 of a user whose field of view includes the position in the virtual space at which a comment has been placed. As illustrated, in the display area 61 of the first video playback screen 60, a comment object 114 is displayed superimposed on the video at the position in the virtual space at which the comment has been placed.
FIG. 11B illustrates the details of the comment object 114. The comment object 114 in one embodiment is composed of the avatar 110 of the user who entered the comment and a speech balloon object 112. The balloon object 112 displays the content of the comment ("Excellent!" in the example of FIG. 11B), the nickname of the user who entered the comment ("by XXX" in the example of FIG. 11B), and the Like count entered for the comment.
As illustrated in FIG. 11A, when a user who was looking at a building appearing in the video (the building was displayed at the gaze point FP) enters a comment, the comment is placed at a position away from the gaze point FP, so the corresponding comment object 114 is displayed at a position away from the building.
FIG. 12 illustrates the first video playback screen 60 in which a plurality of comment objects 114, based on comments entered by each of a plurality of users, are displayed in the display area 61. For example, when a plurality of users who were looking at the same building enter comments at around the same time, the direction of movement away from the gaze point FP used when identifying each comment's placement position is determined for each comment (for example, at random), which prevents the plurality of comment objects 114 from being displayed on top of one another at positions moved in the same direction from the gaze point FP.
FIG. 13 illustrates the second video playback screen 70 displayed on the terminal device 30 of a user whose field of view includes the position at which a comment has been placed. On the second video playback screen 70, as described above, the avatars 110 of the users viewing the same video are displayed in the avatar display area 76, and among the avatars 110 displayed in the avatar display area 76, the avatar 110 of a user who entered a comment placed in the virtual space is displayed with the balloon object 112 described above attached.
In this way, in one embodiment, a comment placed in the virtual space is displayed on the terminal device 30 of a user whose field of view includes the position at which the comment was placed. Specifically, on the first video playback screen 60, the comment object 114 is displayed in the display area 61, superimposed on the video at the position in the virtual space at which the comment was placed; on the second video playback screen 70, the balloon object 112 is displayed in the avatar display area 76, attached to the avatar 110 of the user who entered the comment.
Also, in one embodiment, when a comment is placed in the virtual space, a sound effect based on the positional relationship between the position at which the comment was placed and the user's field of view (gaze point) is output on the terminal device 30 of each of the plurality of users viewing the same video. The sound effect output on the terminal device 30 may be configured, for example, so that the volume increases the closer the user's field of view (gaze point) is to the position at which the comment was placed.
FIG. 14 illustrates the sound effect volumes set based on the positional relationship between the position at which a comment was placed and the user's field of view. For example, as illustrated, the sound effect S1 output on a user's terminal device 30 when a comment C1 is placed at a position included in that user's field of view V is louder than the sound effects S2 and S3 output when comments C2 and C3 are placed at positions not included in the user's field of view V. Also, for example, the sound effect S2 output when the comment C2 is placed is louder than the sound effect S3 output when the comment C3 (whose placement position is farther from the user's field of view V than that of the comment C2) is placed. Here, the distance between the position at which a comment is placed and the user's field of view can be determined from the angle, measured from the center of the sphere S, between the direction in which the comment is placed and the direction of the user's gaze (θ1 and θ2 in FIG. 14). By configuring the volume to increase as the user's field of view (gaze point) gets closer to the position at which a comment is placed in this way, the user can feel a sense of presence.
Also, for example, the sound effect is output as sound from the direction (for example, from the right or the left) of the position at which the comment was placed, relative to the user's field of view (gaze point). In the example of FIG. 14, the sound effect S2, corresponding to the placement of the comment C2 at a position to the left of the user's field of view, is output as sound heard from the left, and the sound effect S3, corresponding to the placement of the comment C3 at a position to the right of the user's field of view, is output as sound heard from the right.
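By way of illustration, the volume and left/right direction of such a sound effect could be derived roughly as follows (a minimal sketch in Python; the linear falloff with angle, the [-1, 1] pan convention, and the function name are assumptions, and only the horizontal angle is considered):

def comment_sound(gaze_yaw_deg, comment_yaw_deg, max_volume=1.0):
    # Signed horizontal angle from the gaze direction to the comment,
    # normalized into (-180, 180]; negative means the comment is to the left.
    diff = (comment_yaw_deg - gaze_yaw_deg + 180.0) % 360.0 - 180.0
    # Volume increases as the comment position approaches the field of view
    # (the angles theta1, theta2 in FIG. 14 shrink toward zero).
    volume = max_volume * (1.0 - abs(diff) / 180.0)
    # Pan the sound toward the side on which the comment was placed,
    # saturating to fully left/right beyond 90 degrees.
    pan = max(-1.0, min(1.0, diff / 90.0))
    return volume, pan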
In this way, in one embodiment, when a comment is placed in the virtual space, a sound effect is output on the terminal device 30 of each of the plurality of users viewing the same video, so a user can know that a comment has been entered, including comments placed at positions not included in that user's field of view.
In one embodiment, a comment placed in the virtual space is erased (its placement released) when the erase time set for that comment is reached. When the comment is erased, the comment object 114 that had been displayed superimposed on the video in the display area 61 of the first video playback screen 60 is also erased (no longer displayed).
In one embodiment, the balloon object 112 displayed in the avatar display area 76 of the second video playback screen 70 may be configured to be erased in response to the erasure of the comment, or may be erased at a timing different from the erasure of the comment. That is, in one embodiment, the erasure of the comment object 114 on the first video playback screen 60 and the erasure of the balloon object 112 on the second video playback screen 70 may be controlled independently.
Here, in one embodiment, the more "Likes" a comment has received, the longer the time until the comment is erased. Specifically, for example, each time the Like count reaches another multiple of a predetermined number (for example, 10), a predetermined additional time (for example, 10 seconds) is added to the erase time. The operations related to entering a "Like" for a comment in one embodiment are described next.
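By way of illustration, the extension of the erase time could be sketched as follows (a minimal sketch in Python operating on the comment record shape from the placement sketch above; the constants mirror the examples in the text and the function name is hypothetical):

LIKES_PER_EXTENSION = 10  # predetermined number of Likes (example: 10)
EXTENSION_SEC = 10.0      # predetermined additional time (example: 10 seconds)

def register_like(record):
    # Increment the Like count and, each time it reaches another multiple
    # of the predetermined number, add the additional time to the erase time.
    record["like_count"] += 1
    if record["like_count"] % LIKES_PER_EXTENSION == 0:
        record["erase_at"] += EXTENSION_SEC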
First, the operations for entering a "Like" via the first video playback screen 60 are described. In one embodiment, when the state in which a comment object 114 is located at the user's gaze point has continued for a predetermined effective time (for example, 10 seconds), a "Like" input for the comment corresponding to that comment object 114 is accepted.
For example, as illustrated in FIG. 15, when the state in which a comment object 114a is located at the gaze point (the center of the display area 61) has continued for the effective time, a "Like" for the comment corresponding to that comment object 114a is accepted. FIG. 16 illustrates how the display of the comment object 114 changes until the state in which the comment object 114 is located at the gaze point has continued for the effective time. In one embodiment, when a fixed time shorter than the effective time (for example, 3 seconds) has elapsed, the background color of the balloon object 112 changes and a progress gauge 113 is added (i). Thereafter, the display of the progress gauge 113 changes as time passes, and when the effective time is reached (ii), the progress gauge 113 is erased, the background color of the balloon object 112 returns to its original color, the "Like" input is accepted, and the displayed Like count is updated (iii).
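By way of illustration, the dwell-based acceptance of a "Like" could be sketched as follows (a minimal sketch in Python; the per-frame update loop and how the comment at the gaze point is determined are assumed rather than specified, and the class name is hypothetical):

GAUGE_DELAY_SEC = 3.0      # fixed time before the gauge appears (stage (i))
EFFECTIVE_TIME_SEC = 10.0  # effective time required to accept a "Like"

class DwellLikeDetector:
    def __init__(self):
        self.target = None    # comment object currently at the gaze point
        self.elapsed = 0.0

    def update(self, comment_at_gaze, dt):
        """Call once per frame with the comment at the gaze point (or None)
        and the frame duration dt; returns the comment whose "Like" input
        is accepted, or None."""
        if comment_at_gaze != self.target:
            # Gaze moved to a different object: restart the timer.
            self.target = comment_at_gaze
            self.elapsed = 0.0
            return None
        if self.target is None:
            return None
        self.elapsed += dt
        # After GAUGE_DELAY_SEC, the UI would change the balloon's background
        # color and display the progress gauge 113 (stage (i) in FIG. 16).
        if self.elapsed >= EFFECTIVE_TIME_SEC:
            accepted = self.target
            self.target, self.elapsed = None, 0.0
            return accepted  # stages (ii)-(iii): accept the "Like"
        return None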
When a "Like" is accepted via the terminal device 30, the server 10 updates the Like count in the comment management table 51b. Also, as described above, the erase time may be updated (extended) as the Like count increases.
Next, the operations for entering a "Like" via the second video playback screen 70 are described. In one embodiment, when the user selects a balloon object 112 displayed in the avatar display area 76 of the second video playback screen 70, a "Like" input for the comment corresponding to that balloon object 112 is accepted. The operations for accepting the "Like" input and updating the displayed Like count are the same as those for the first video playback screen 60 described above.
In the embodiment described above, the 360-degree video delivered in streaming format in real time (live streaming delivery) may be stored in the information storage unit 51 or the like of the server 10, and the stored video may be made playable later in response to a request from a terminal device 30. In this case, the comments entered during the live streaming delivery are placed in the virtual space according to the information managed in the comment management table 51b. Specifically, each corresponding comment is placed in the virtual space and subsequently erased according to the placement time, placement position, and erase time in the comment management table 51b.
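By way of illustration, re-placing stored comments during later playback could be sketched as follows (a minimal sketch in Python using the comment record shape from the placement sketch above, and assuming placement and erase times are stored as absolute times and compared as offsets from an assumed stream start time; the function name is hypothetical):

def comments_visible_at(comment_table, video_id, playback_offset_sec, stream_start):
    # Return the comments whose placement interval (placement time to erase
    # time, as managed in the comment management table) covers the current
    # playback position of the stored stream.
    visible = []
    for (vid, _cid), record in comment_table.items():
        if vid != video_id:
            continue
        placed = record["placed_at"] - stream_start
        erased = record["erase_at"] - stream_start
        if placed <= playback_offset_sec < erased:
            visible.append(record)
    return visible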
In the embodiment described above, comments are entered via the second video playback screen 70, but a screen corresponding to the first video playback screen 60 may also be configured to allow comment entry. The first video playback screen 160 in another embodiment, illustrated in FIG. 17, has a display area 61 similar to that of the first video playback screen 60 described above, with a comment input area 72 and a comment send button 74 similar to those of the second video playback screen 70 arranged at the bottom of the display area 61. Also, in various embodiments of the present invention, comment entry may be realized by applying voice input technology.
In the embodiment described above, comments are given as an example of the input information entered by users and placed in the virtual space, but in embodiments of the present invention the input information is not limited to comments. For example, the input information may include various other information that can be entered by users, such as stamps and icons.
In the embodiment described above, the sound effect output in response to the placement of a comment is configured so that the volume increases the closer the user's field of view is to the position at which the comment was placed. However, the various embodiments of the present invention are not limited to ones in which the volume changes based on the positional relationship between the comment's placement position and the user's field of view; instead of, or in addition to, the volume, the timbre, pitch, or the like of the sound effect may change.
In the various embodiments of the present invention described above, a specific video, configured as a video having a wide-angle field of view and having a virtual space associated with its entire field of view, is displayed on the terminal devices 30 of a plurality of users, each with that user's own field of view; when a comment is received from a user, a position in the virtual space included in that user's field of view is identified and the comment is placed there; and in response to the placement of the comment, the comment is displayed on the terminal devices 30 of users whose fields of view include the placement position. Accordingly, a user of a terminal device 30 on which the comment is displayed has a field of view close to that of the user who entered the comment, and can therefore easily understand the content of the displayed comment. In this way, the embodiments of the present invention can appropriately display information, such as comments, entered in a video in which the field of view may differ between users.
The processes and procedures described in this specification may be realized by software, hardware, or any combination thereof, in addition to what is explicitly described in the embodiments. More specifically, the processes and procedures described in this specification may be realized by implementing logic corresponding to those processes on a medium such as an integrated circuit, volatile memory, non-volatile memory, magnetic disk, or optical storage. The processes and procedures described in this specification may also be implemented as computer programs and executed by various kinds of computers.
Even where the processes and procedures described in this specification are described as being executed by a single device, piece of software, component, or module, such processes or procedures may be executed by a plurality of devices, pieces of software, components, and/or modules. Also, even where the data, tables, or databases described in this specification are described as being stored in a single memory, such data, tables, or databases may be stored distributed across a plurality of memories provided in a single device, or across a plurality of memories distributed among a plurality of devices. Furthermore, the software and hardware elements described in this specification may be realized by integrating them into fewer components or by decomposing them into more components.
In this specification, whether a constituent element of the invention is described as singular or plural, or is described without limitation to either singular or plural, that element may be either singular or plural, except where the context requires otherwise.
DESCRIPTION OF REFERENCE NUMERALS
1 system
10 server
20 video providing device
30 terminal device
40 communication network
51 information storage unit
52 video delivery control unit
53 virtual space management unit
55 playback control unit
56 input management unit
60, 160 first video playback screen
70 second video playback screen

Claims (16)

  1.  A system for displaying a video with a specific field of view, comprising:
      one or more computer processors,
      wherein the one or more computer processors, in response to executing readable instructions, execute:
      a step of displaying, on each of the terminal devices of a plurality of users including a first user and a second user, a specific video that is configured as a video having a wide-angle field of view and whose entire field of view is associated with a virtual space, with the respective field of view of each of the plurality of users;
      a step of, when first input information is received from the first user, identifying a first position in the virtual space included in the field of view of the first user and placing the first input information at the first position; and
      a step of, in response to the placement of the first input information, displaying the first input information on the terminal device of the second user whose field of view includes the first position.
  2.  The system according to claim 1, wherein
      the specific video is configured as a video having a field of view of 360 degrees at least in the horizontal direction, and
      the virtual space is configured as the inner surface of a virtual sphere.
  3.  The system according to claim 1 or 2, wherein the input information is a comment.
  4.  The system according to any one of claims 1 to 3, wherein the step of displaying the input information displays the first input information at the first position in the virtual space, superimposed on the specific video.
  5.  The system according to any one of claims 1 to 4, wherein the placing step includes identifying the first position so that it is a position outside a first range containing the gaze point of the first user.
  6.  The system according to any one of claims 1 to 5, wherein the placing step includes identifying a direction of movement away from the gaze point of the first user, and identifying the first position so that it is a position moved in the identified direction and outside the first range.
  7.  The system according to any one of claims 4 to 6, wherein the one or more computer processors further execute a step of accepting support for the first input information when the state in which the first input information is displayed at the gaze point of the second user has continued for a first time.
  8.  The system according to any one of claims 5 to 7, wherein the gazing point is approximately the center of the user's field of view.
  9.  The system according to any one of claims 1 to 8, wherein the one or more computer processors further perform a step of releasing the placement of the first input information when a second time has elapsed since it was placed at the first position.
  10.  The system according to claim 9, wherein:
      the one or more computer processors further perform a step of accepting support for the first input information; and
      the releasing step includes releasing the placement of the first input information such that the second time becomes longer as the number of accepted supports increases.
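Claims 9 and 10 together define a lifetime rule: a placed comment is released after the "second time" elapses, and that time grows with the number of supports received. A sketch, with both constants assumed:

```python
BASE_LIFETIME = 10.0          # the "second time" in seconds (assumed value)
EXTENSION_PER_SUPPORT = 2.0   # extra lifetime per support (assumed value)

def placement_expired(placed_at: float, supports: int, now: float) -> bool:
    """Release the placement once its support-extended lifetime has elapsed."""
    lifetime = BASE_LIFETIME + EXTENSION_PER_SUPPORT * supports
    return now - placed_at >= lifetime

# A comment placed at t=0 with 3 supports survives until t=16.
print(placement_expired(placed_at=0.0, supports=3, now=15.0))  # False
print(placement_expired(placed_at=0.0, supports=3, now=16.0))  # True
```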
  11.  The system according to any one of claims 1 to 10, wherein the one or more computer processors further perform a step of causing, in response to the placement of the first input information, the terminal device of each of the plurality of users to output a sound based on the positional relationship between that user's field of view and the first position.
  12.  The system according to claim 11, wherein the step of causing output includes causing the sound to be output at a greater volume the closer the user's field of view is to the first position.
  13.  The system according to claim 11 or 12, wherein the step of causing output includes, in response to the placement of the first input information, causing the terminal device of the second user, whose field of view includes the first position, to output a first sound, and causing the terminal device of a third user among the plurality of users, whose field of view does not include the first position, to output a second sound different from the first sound.
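Claims 11 to 13 describe audio feedback on placement: a user whose field of view contains the first position hears a first sound, other users hear a different second sound, and claim 12 scales volume with how close a user's view is to the placed position. One way to combine the three, with invented file names and a linear falloff chosen purely for illustration:

```python
def placement_sound(angle_to_comment: float, h_fov: float):
    """Choose a sound and a volume from the angle (in degrees) between a
    user's view center and the placed comment."""
    if angle_to_comment <= h_fov / 2:
        return ("in_view.wav", 1.0)          # the "first sound" (claim 13)
    # The "second sound", fading linearly the further the view is away.
    falloff = max(0.0, 1.0 - (angle_to_comment - h_fov / 2) / 180.0)
    return ("out_of_view.wav", falloff)

print(placement_sound(20.0, 90.0))   # ('in_view.wav', 1.0)
print(placement_sound(110.0, 90.0))  # ('out_of_view.wav', ~0.64)
```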
  14.  The system according to any one of claims 1 to 13, wherein the step of displaying the specific moving image includes distributing the specific moving image to the terminal device of each of the plurality of users by live streaming.
  15.  A method, executed by one or more computers, for displaying a moving image with a specific field of view, the method comprising the steps of:
      displaying, on a terminal device of each of a plurality of users including a first user and a second user, a specific moving image that is configured as a moving image having a wide-angle field of view and whose entire field of view is associated with a virtual space, in the respective field of view of each of the plurality of users;
      specifying, upon receiving first input information from the first user, a first position in the virtual space included in the field of view of the first user, and placing the first input information at the first position; and
      displaying, in response to the placement of the first input information, the first input information on the terminal device of the second user whose field of view includes the first position.
  16.  A program for displaying a moving image with a specific field of view, the program, in response to being executed on one or more computers, causing the one or more computers to perform the steps of:
      displaying, on a terminal device of each of a plurality of users including a first user and a second user, a specific moving image that is configured as a moving image having a wide-angle field of view and whose entire field of view is associated with a virtual space, in the respective field of view of each of the plurality of users;
      specifying, upon receiving first input information from the first user, a first position in the virtual space included in the field of view of the first user, and placing the first input information at the first position; and
      displaying, in response to the placement of the first input information, the first input information on the terminal device of the second user whose field of view includes the first position.
PCT/JP2016/071040 2015-08-20 2016-07-15 System, method and program for displaying moving image with specific field of view WO2017029918A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015162720A JP2017041780A (en) 2015-08-20 2015-08-20 System for displaying moving image in specific visual field, method and program
JP2015-162720 2015-08-20

Publications (1)

Publication Number Publication Date
WO2017029918A1 (en)

Family

ID=58050792

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/071040 WO2017029918A1 (en) 2015-08-20 2016-07-15 System, method and program for displaying moving image with specific field of view

Country Status (2)

Country Link
JP (1) JP2017041780A (en)
WO (1) WO2017029918A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7139681B2 (en) * 2018-05-14 2022-09-21 富士通株式会社 Control program, control method, control device and control server
JP7356827B2 (en) * 2019-06-26 2023-10-05 株式会社コロプラ Program, information processing method, and information processing device
US11228737B2 (en) 2019-07-31 2022-01-18 Ricoh Company, Ltd. Output control apparatus, display terminal, remote control system, control method, and non-transitory computer-readable medium
JP7346983B2 (en) * 2019-07-31 2023-09-20 株式会社リコー Display terminal, remote control system, display control method and program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014183380A (en) * 2013-03-18 2014-09-29 Nintendo Co Ltd Information processing program, information processing device, information processing system, panoramic moving image display method, and data structure of control data
JP2015018013A (en) * 2013-07-08 2015-01-29 株式会社リコー Display controller, program, and recording medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Dark Souls 2' kara ''Yurui Tsunagari'' no Enshutsu Yoso o Matomete Shokai! Gen'ei ya Kekkon ga Egaku Dokutoku no Online Play ga Sarani Kyoka", DENGEKI ONLINE, 18 November 2013 (2013-11-18), XP055365531, Retrieved from the Internet <URL:http://dengekionline.com/elem/000/000/753/753375> [retrieved on 20151112] *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6343779B1 (en) * 2017-04-28 2018-06-20 株式会社コナミデジタルエンタテインメント Server apparatus and computer program used therefor
WO2018198946A1 (en) * 2017-04-28 2018-11-01 株式会社コナミデジタルエンタテインメント Server device and computer program for use therewith
JP2018191064A (en) * 2017-04-28 2018-11-29 株式会社コナミデジタルエンタテインメント Server device and computer program used for the same
CN110574382A (en) * 2017-04-28 2019-12-13 科乐美数码娱乐株式会社 Server device and computer program used in the server device
US11273372B2 (en) 2017-04-28 2022-03-15 Konami Digital Entertainment Co., Ltd. Server device and storage medium for use therewith
WO2020129115A1 (en) * 2018-12-17 2020-06-25 株式会社ソニー・インタラクティブエンタテインメント Information processing system, information processing method and computer program
US11831854B2 (en) 2018-12-17 2023-11-28 Sony Interactive Entertainment Inc. Information processing system, information processing method, and computer program

Also Published As

Publication number Publication date
JP2017041780A (en) 2017-02-23

Similar Documents

Publication Publication Date Title
WO2017029918A1 (en) System, method and program for displaying moving image with specific field of view
EP3396511B1 (en) Information processing device and operation reception method
JP6321150B2 (en) 3D gameplay sharing
TWI571130B (en) Volumetric video presentation
US20210103449A1 (en) Management framework for mixed reality devices
KR20190088545A (en) Systems, methods and media for displaying interactive augmented reality presentations
JP6470356B2 (en) Program and method executed by computer for providing virtual space, and information processing apparatus for executing the program
JP6932206B2 (en) Equipment and related methods for the presentation of spatial audio
JP6392945B1 (en) Program and method executed by computer for providing virtual space, and information processing apparatus for executing the program
JP6277329B2 (en) 3D advertisement space determination system, user terminal, and 3D advertisement space determination computer
US9294670B2 (en) Lenticular image capture
JP7249975B2 (en) Method and system for directing user attention to location-based gameplay companion applications
US20200175748A1 (en) Information processing device and image generation method
US20150213784A1 (en) Motion-based lenticular image display
JP2019191690A (en) Program, system, and method for providing virtual space
JP2020520576A5 (en)
JP2022537861A (en) AR scene content generation method, display method, device and storage medium
US20220005281A1 (en) Augmented reality (ar) imprinting methods and systems
JP6921789B2 (en) Programs and methods that are executed on the computer that provides the virtual space, and information processing devices that execute the programs.
JP2021089382A (en) Electronic apparatus, method for controlling electronic apparatus, program, and storage medium
JP2017041872A (en) System for displaying moving image in specific visual field, method and program
JP6974253B2 (en) A method for providing virtual space, a program for causing a computer to execute the method, and an information processing device for executing the program.
JP6952065B2 (en) Programs and methods that are executed on the computer that provides the virtual space, and information processing devices that execute the programs.
WO2021014974A1 (en) Program, system, information processing method, and information processing device
US10878244B2 (en) Visual indicator

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16836909; Country of ref document: EP; Kind code of ref document: A1)

NENP Non-entry into the national phase (Ref country code: DE)

122 Ep: pct application non-entry in european phase (Ref document number: 16836909; Country of ref document: EP; Kind code of ref document: A1)