WO2018190766A1 - Methods and nodes for providing multi perspective video of match events of interest


Info

Publication number: WO2018190766A1
Authority: WO (WIPO, PCT)
Prior art keywords: match, interest, event, relevant, video
Application number: PCT/SE2018/050369
Other languages: French (fr)
Inventor: Erik Åkerfeldt
Original Assignee: Znipe Esports AB
Application filed by Znipe Esports AB
Priority applications: KR1020197033055A (published as KR20200006531A), US16/604,478 (published as US20200169793A1), EP18784981.5A (published as EP3610650A4), CN201880038453.XA (published as CN110741646A)
Publication of WO2018190766A1


Classifications

    • H04N21/21805: Source of audio or video content enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N21/23418: Analysing video elementary streams, e.g. detecting features or characteristics
    • A63F13/355: Performing operations on behalf of clients with restricted processing capabilities
    • A63F13/497: Partially or entirely replaying previous game actions
    • A63F13/86: Watching games played by other players
    • G06F16/735: Querying video data, filtering based on additional data, e.g. user or group profiles
    • G06F16/7867: Retrieval of video data using manually generated metadata, e.g. tags, keywords, comments
    • G06F16/9535: Search customisation based on user profiles and personalisation
    • G06V20/42: Semantic understanding of video scenes of sport video content
    • G06V20/46: Extracting features or characteristics from the video content
    • G06V20/49: Segmenting video sequences
    • H04L65/60: Network streaming of media packets
    • H04L65/70: Media network packetisation
    • H04L65/75: Media network packet handling
    • H04L65/762: Media network packet handling at the source
    • H04N21/2187: Live feed
    • H04N21/232: Content retrieval operation locally within server
    • H04N21/2343: Reformatting operations of video signals for distribution or compliance with end-user requests
    • H04N21/235: Processing of additional data, e.g. content descriptors
    • H04N21/26603: Automatically generating descriptors from content using content analysis techniques
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content
    • H04N21/43072: Synchronising the rendering of multiple content streams on the same device
    • H04N21/4781: Games (supplemental end-user applications)
    • H04N21/60: Network structure or processes for video distribution between server and client
    • H04N21/80: Generation or processing of content by the content creator independently of the distribution process
    • H04N21/84: Generation or processing of descriptive data, e.g. content descriptors
    • H04N21/8456: Structuring of content by decomposing it into time segments
    • H04N21/8547: Content authoring involving timestamps for synchronizing content
    • H04N21/8549: Creating video summaries, e.g. movie trailer
    • A63F13/335: Interconnection arrangements between game servers and game devices using Internet
    • A63F2300/538: Game server data processing for performing operations on behalf of the game client, e.g. rendering
    • A63F2300/577: Game services for watching a game played by other players
    • A63F2300/634: Replaying partially or entirely the game actions since the beginning of the game
    • G06V20/44: Event detection

Definitions

  • The present invention relates to methods and nodes for providing highlights from events, especially events such as esports tournaments.
  • An object of the present invention is to solve at least some of the problems outlined above. It is possible to achieve this object and others by using devices and systems as defined in the attached independent claims.
  • The method comprises connecting to a game server, the game server comprising match metadata and a plurality of video streams of different perspectives of the esports game, and rendering the plurality of video streams from the game server, wherein each software streaming client renders a different video stream.
  • The method further comprises analyzing the match metadata in order to find match events of interest in the esports match, compiling a list of match events of interest, and assigning an interest value to each match event of interest in the compiled list.
  • The method further comprises determining relevant video segments of the video streams for each match event of interest and providing the relevant video segments to the communications device, wherein the providing comprises providing at least two relevant video segments for at least one match event of interest.
  • The determining step is based on the match metadata.
  • By having the determining step based on match metadata, a faster method may be achieved which quickly identifies the relevant video segments.
  • Each event of interest comprises timestamp data.
  • Each relevant video segment comprises timestamp data.
  • The determining step is based on the timestamp data.
  • The determining step may be based on both timestamp data and on match metadata.
  • The plurality of video streams and the corresponding rendered video streams also comprise timestamp data.
  • The determining and/or the providing and/or the bundling steps may further be based on such timestamp data.
  • The method further comprises bundling the relevant video segments before providing them to the communications device.
  • Such a step may typically be based on timestamp data, such that the relevant video segments are provided in a bundle synchronized relative to a match time.
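The bundling described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the segment fields (`event_id`, `perspective`, `start`, `end`, in seconds of match time) are hypothetical names chosen for the example:

```python
from collections import defaultdict

def bundle_segments(segments):
    """Group relevant video segments by the match event they belong to,
    so that all perspectives of one event travel together in a bundle
    synchronized relative to match time."""
    bundles = defaultdict(list)
    for seg in segments:
        bundles[seg["event_id"]].append(seg)
    # Order each bundle's perspectives deterministically by match-time start
    for segs in bundles.values():
        segs.sort(key=lambda s: (s["start"], s["perspective"]))
    return dict(bundles)

segments = [
    {"event_id": 7, "perspective": "player_2", "start": 402, "end": 432},
    {"event_id": 7, "perspective": "player_1", "start": 402, "end": 432},
    {"event_id": 3, "perspective": "event_stream", "start": 120, "end": 150},
]
bundles = bundle_segments(segments)
# bundles[7] now holds two perspectives of the same match event
```

Because the segments share match-time offsets, a client receiving a bundle can render all of its perspectives in sync without further alignment.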
  • A streaming server is also provided for providing highlights of an esports match to a communications device, the server comprising a plurality of software streaming clients, an encoder for connecting to a game server comprising a plurality of video streams for a match and for rendering the plurality of video streams on the plurality of software streaming clients, and a processor (1010).
  • The processor is adapted for analyzing the match metadata in order to find match events of interest in the esports match, compiling a list of match events of interest and assigning an interest value to each match event of interest in the compiled list.
  • The processor is further adapted for determining relevant video segments of the video streams for each match event of interest and providing the relevant video segments to the communications device, wherein the providing comprises providing at least two relevant video segments for at least one match event of interest.
  • The streaming server further comprises a bundling unit adapted for bundling video segments related to a same match event of interest together.
  • Fig. 1 shows a schematic diagram of providing a highlight stream to a client device in a traditional architecture.
  • Fig. 2 shows a schematic diagram of providing a highlight stream to a client device in a multi perspective architecture.
  • Fig. 3 shows a flow chart of a method according to the present disclosure.
  • Fig. 4 shows a server according to the present disclosure.

Description of embodiments
  • The present disclosure relates to a solution for providing a multi perspective highlight stream to a client device, particularly a multi perspective highlight stream related to an esports match.
  • Examples of games which are suitable for such streaming include any game with more than one relevant perspective, such as Counter Strike, League of Legends, DotA 2 and Overwatch.
  • Match metadata from a multi perspective stream of an esports match is analyzed in order to detect events of interest, such as important and/or spectacular kills and deaths, decisive rounds and similar.
  • A list is compiled of all events of interest, and the events are ranked according to how important and/or interesting each event is deemed to be.
  • Relevant video segments are determined for each event of interest, based on a plurality of available perspectives. The relevant video segments relating to a same event of interest are then bundled together and the bundle is provided to a client device.
  • A multi perspective stream of an esports game may contain one perspective from each player in the game and an event stream, wherein the event stream may be based on the different player perspectives or may have a unique perspective of its own.
  • A viewer may typically choose which ones and/or how many of these perspectives should be displayed simultaneously.
  • Streaming solutions comprising multiple different perspectives of a particular match are subject to the problem of how to present a stream of highlights, i.e. match events of interest, to a user, in an appealing and efficient way. For any particular event of interest, there are likely to be multiple perspectives that may be relevant and/or interesting for a viewer to see.
  • In Fig. 1, a traditional solution for providing highlights to a user is shown.
  • A match for which highlights are to be provided is being played on a game server.
  • A streaming server connects to the game server in order to render the game on a different platform, from which a streaming client streams the video and audio of the game to a user.
  • The user is shown one perspective, which is typically the only available perspective.
  • Events of interest that are to be presented to a user on a client device are determined. Since there is only one stream to choose from, the relevant video segments for each determined event of interest are easily located, and the highlights may be provided to a client device for viewing.
  • In the multi perspective architecture of Fig. 2, the streaming server instead comprises a plurality of streaming clients, each of which presents one perspective out of a plurality of available perspectives, such that each of these perspectives of a match being played is rendered on the server and may be presented to the user.
  • This entails that, after a game has finished and highlights are to be presented to a viewer, this solution has vastly more available video streams, i.e. available perspectives, to choose from when determining which highlights to present to a user, compared to traditional solutions.
  • With multiple perspectives relating to a same event, several questions arise. Should highlights from each perspective view be presented sequentially, or should they be displayed simultaneously for each event of interest? Should all available views be presented for each event of interest, such that the user has to keep track of which perspectives are relevant for each event, or should only certain views be presented for each event? Should the video from each available perspective be analyzed in order to determine whether the stream is relevant for each match event of interest, or should the events of interest be determined based on other factors? Should all the video be provided to the user, who can then choose to play only the relevant segments, or should only the relevant segments be provided to the user?
  • The method in Fig. 3 is performed by a server, denoted streaming server, typically comprising a plurality of software streaming clients.
  • The server is in some embodiments a cloud server hosted in a cloud environment.
  • In a first step, the streaming server connects to a game server.
  • The game server is a server hosting an ongoing match of a game, and/or a server hosting a game that has already been played.
  • An ongoing game hosted on a game server has a plurality of perspective views available, each typically corresponding to a player in the match.
  • The streaming server gets access to the video for each available perspective view.
  • The game server comprises at least one perspective view for each player in the match.
  • In a second step S110, the video from the game server is rendered onto the streaming server.
  • The game server will host the match in a format only accessible via a game platform, and not accessible from e.g. web browsers.
  • The streaming server renders the same video as is being shown on the game server, typically without modification.
  • The rendering step may comprise adding information such as a watermark in order to identify the company performing the rendering, and/or in order to make it more difficult for others to reproduce the stream.
  • The rendering step S110 is performed by use of a plurality of software streaming clients comprised in the streaming server. Each software streaming client connects to one of the plurality of available perspective views, such that one available view is rendered by one software streaming client.
  • The same perspective view is not rendered by two different software streaming clients, i.e. the streaming server comprises only one video per perspective view available.
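The one-client-per-perspective rule can be sketched as below. The factory callback and the view names are illustrative assumptions, not identifiers from the patent:

```python
def assign_perspectives(perspective_views, make_client):
    """Attach exactly one software streaming client to each available
    perspective view; a view that already has a client is skipped, so
    the server holds only one video per perspective view."""
    clients = {}
    for view in perspective_views:
        if view not in clients:
            clients[view] = make_client(view)
    return clients

# The duplicate "player_1" entry is ignored: that view is already rendered
clients = assign_perspectives(
    ["player_1", "player_2", "event_stream", "player_1"],
    make_client=lambda view: f"streaming-client:{view}",
)
```

In a real deployment `make_client` would launch or reuse an actual software streaming client process rather than return a string.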
  • match metadata is analyzed in order to determine match events of interest.
  • match events of interest entails events such as kills, winning rounds, and similar events that are relevant for the outcome of a match.
  • the method starts by analyzing the metadata of a match, which is a more efficient process than analyzing each individual video stream being hosted by the software streaming clients in order to look for match events of interest.
  • the match metadata is typically also located on the game server, and comprises measurable, typically quantifiable, information such as kills, deaths, round wins and other information relevant for the game being played.
  • the match metadata may be automatically generated, and in some embodiments the match metadata may be automatically generated by the game server.
  • the match metadata further comprises timestamp data, detailing when the events happen in relation to a match time.
  • each video stream of the plurality of video streams rendered in step S110 may also comprise timestamp data, detailing how the video stream relates to a match time.
  • an interest value is assigned to each identified match event of interest in a step S140.
  • the interest value is also based on match metadata. For example, a match event of interest comprising five kills is likely to be considered more interesting than a match event of interest comprising one kill. A match event of interest comprising one kill that immediately precedes the end of a round is likely to be more interesting than a match event of interest happening at a start of a round.
  • match metadata may be used to determine the interest value, and the types of match metadata used varies depending on which game is being played.
  • the interest value is assigned based on the amount of kills present in the match event of interest, such that an event of interest with a higher amount of kills has a higher interest value than an event of interest with a lower amount of kills. In some embodiments, the interest value is assigned such that an event of interest occurring earlier in a match is assigned a higher interest value than an event of interest occurring later in the match. In some embodiments, the interest value is assigned such that an event of interest that occurs later in a match is assigned a higher interest value than an event of interest occurring earlier in the match.
  • the method comprises a step S150 of determining relevant video segments for each match event of interest.
  • the step S150 is based on only the match metadata, in some embodiments including the timestamp data that may be included in the match metadata.
  • the determining step S150 may comprise choosing video segments starting a predetermined time before a match event of interest, and ending a predetermined time after said match event of interest. For example, in case of a kill being a match event of interest, and the kill occurs at 7:02 in match time, the determined relevant video segments could be from 6:42 to 7:12 in match time, for all relevant perspectives.
  • the players involved in an event are determined to represent the relevant perspectives.
  • the player performing a kill, the player being killed, and the players assisting in the kill may be determined as the relevant perspectives for that kill, wherein the kill represents a match event of interest.
  • the step S150 may comprise performing visual analysis on each of the plurality of video streams for each identified event of interest, in order to determine if the perspective view of a video stream is relevant to display for a specific identified match event of interest.
  • the analysis may also be based on timestamp data, which makes it possible to only analyze parts of the videos being rendered by the plurality of streaming clients, for each match event of interest. For example, if an event is identified as occurring at 13:43 in match time, it would be unnecessary to analyze the entire video for determining whether a certain perspective view is relevant for the determined match event of interest. Instead, it would be more beneficial to analyze a time interval close to that of the determined time, such as for example between 12:43 and 14:43.
  • the benefits from only analyzing relevant parts of the video are greater than in traditional solutions, considering that there are a plurality of available video streams to analyze, each comprising a different perspective view.
  • the visual analysis that in some embodiments is performed in step S150 may comprise visual analysis of each video stream for each determined match event of interest, in order to determine if the video stream comprises a relevant video segment for that match event of interest.
  • the perspective view of a player being involved in a kill is likely to be determined as a relevant perspective view.
  • the perspective view of one of that player's teammates may also provide a relevant perspective, even though this player was not directly involved in the kill.
  • the way this is determined is by visual analysis, typically by looking for certain pre-defined criteria, such as if the video representing a perspective view shows any of the persons being directly involved in the match event of interest.
  • the streaming server has information about all relevant video segments for each match event of interest, comprising data regarding which parts of each of the plurality of available video streams that are relevant for a particular match event of interest.
  • the method further comprises a step S160 of providing relevant video segments to a communications device, wherein the step S160 comprises providing at least two relevant video segments for at least one match event of interest.
  • the step S160 comprises providing at least two relevant video segments for at least one match event of interest.
  • the user is provided with video of at least two relevant perspective views, such that the user may view the perspective views simultaneously.
  • the maximum amount of perspective views being provided to a user for any particular match event of interest is the amount of perspective views available for the particular match being analyzed.
  • the relevant video segments are provided such that the video segments may be manipulated independently of each other, wherein manipulated denotes that they may be played, fast-forwarded, backed up, stopped, paused, etc.
  • the video segments may be provided such that they are synchronized relative to an in-game timer, such that if the video segments are played and started simultaneously they will reflect the same in-game time.
  • An in-game timer may be a timer which indicates for example a time elapsed from the start of a match or the start of a round, or a time remaining until the end of a round or until the end of a match.
  • the method may then also comprise a preceding step of analyzing visual time data in each of the relevant video segments, in order to determine a relationship between an in-game time of a video segment and time metadata related to the same video segment, and providing the video segments such that they are synchronized in relation to the in-game timer, wherein visual time data is visually identifiable data in a video stream or video segment indicative of a game time or in-game timer.
  • the communications device to which the video segments is provided may be any device capable of streaming video and audio, and is typically a device such as a smartphone, a tablet or a computer.
  • the providing step S160 is based on the assigned interest value, such that video segments related to a match event of interest with a high interest value are provided before video segments related to a match event of interest with a lower interest value.
  • all of the relevant video segments may be provided simultaneously, and in some embodiments, it is possible to provide the video segments with a lower interest value before the video segments with a higher interest value.
  • the method may also comprise an intermediate step of bundling the relevant video segments for each match event of interest.
  • the streaming server may package the relevant video segments into a bundle for each match event of interest.
  • this comprises creating a multi perspective video for each match event of interest, each multi perspective video comprising at least two relevant video segments.
  • a particular perspective view is relevant during a longer time period for a certain match event of interest than another perspective view. For example, consider that a kill takes place at 1:00 in match time. The perspective of the person being killed may be relevant to watch from 00:45 up until 1:00, whereas the perspective of the person performing the kill may be relevant to watch from 00:50 to 1:10.
  • the server may then create a multi perspective video where the video from one perspective view starts and/or ends before the video from another perspective view.
  • such a multi perspective video comprises multiple different perspectives, wherein each video of a perspective view has a different duration, start time and stop time.
  • the bundling is typically based on timestamp data of the match events of interest and of the relevant video segments related to the match event of interest.
  • the bundled video provided to the communications device is typically such that the viewer only needs to choose to play the entire bundle, and then the relevant video segments included in the bundle will start and stop depending on their individual start and stop times, in a synchronized manner such that the video segments streamed to the user are shown with the same relation to a match time.
  • the relevant video segments may be provided to the communications device such that one bundle is provided for each match event of interest, each bundle comprising a multi perspective video as described above.
  • the streaming server 1000 typically comprises a plurality of software streaming clients, and is typically hosted in a cloud environment.
  • the software streaming clients are adapted for connecting to a game server comprising a plurality of video streams of different perspectives of the esports match.
  • the streaming server 1000 comprises an encoder 1005 for rendering a plurality of video streams on the software streaming clients of the server.
  • the rendering comprises connecting to a game server comprising video streams for a match being played, or that has been played, and reproducing these video streams on the streaming server 1000.
  • the server further comprises a processor 1010, adapted for analyzing match metadata, compiling a list of match events of interest, assigning an interest value to each match event of interest, and for determining relevant video segments for each match event of interest.
  • the server 1000 further comprises a bundling unit 1015, adapted for bundling video segments related to a same match event of interest together.
  • the bundling unit 1015 is adapted for creating a multi perspective video based on a plurality of single perspective videos.
  • Such multi perspective videos may comprise video segments starting and stopping at different times. This entails that for some time durations the multi perspective video may only comprise a single video being displayed, however a multi perspective video according to the present disclosure will always comprise at least one time period wherein at least two video segments are being displayed simultaneously.
  • when the display of each relevant video segment starts and stops is typically based on the timestamp data related to the events of interest and/or to the relevant video segments.
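The bundling described in this list can be illustrated by the following sketch in Python. All class and field names are hypothetical illustrations, not part of the disclosure; the sketch only shows how a bundle of segments with individual start and stop times could yield the perspectives to display at any given match time.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    perspective: str   # identifier of the perspective view, e.g. a player id
    start: float       # match time (seconds) at which this view becomes relevant
    stop: float        # match time (seconds) at which this view stops being relevant

@dataclass
class Bundle:
    event_time: float        # match time of the match event of interest
    segments: List[Segment]  # at least two relevant video segments

    def active_at(self, match_time: float) -> List[str]:
        """Perspective views that should be on screen at the given match time."""
        return [s.perspective for s in self.segments
                if s.start <= match_time < s.stop]

# Kill at 1:00 match time: victim view relevant 0:45-1:00, killer view 0:50-1:10.
bundle = Bundle(event_time=60.0, segments=[
    Segment("victim", 45.0, 60.0),
    Segment("killer", 50.0, 70.0),
])
print(bundle.active_at(55.0))  # ['victim', 'killer']
print(bundle.active_at(65.0))  # ['killer']
```

A viewer who plays the whole bundle thus sees segments appear and disappear in a synchronized manner, driven by each segment's own start and stop times.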


Abstract

Disclosed is a method performed by a server for providing highlights of an esports game from a server to a communications device. The method comprises connecting (S100) to a game server, rendering (S110) a plurality of video streams from the game server, and analyzing (S120) match metadata in order to find match events of interest. The method further comprises compiling (S130) a list of the events of interest, assigning (S140) an interest value to each event, determining (S150) relevant video segments of the video streams for each event of interest, and providing (S160) the relevant video segments to a communications device.

Description

METHODS AND NODES FOR PROVIDING MULTI PERSPECTIVE VIDEO OF
MATCH EVENTS OF INTEREST
Technical field
[0001] The present invention relates to methods and nodes for providing highlights from events, especially from events such as esports tournaments.
Background art
[0002] Live streaming of events, especially sport events, has been happening for decades, and is especially common in connection with large happenings, such as the World Cup in Football or the Olympic Games.
[0003] Live streaming of gaming events, such as esports tournaments, is relatively speaking much younger, since computer games and esports have only started to gain major traction the last decade or two.
[0004] As is often the case, a new phenomenon brings its own set of challenges, and such is the case also with streaming and providing coverage of esports and gaming tournaments. For example, in computer games there are multiple interesting viewpoints, as opposed to only one or a few, which is common in regular sports.
[0005] Technologies are becoming available where it is possible for a user to view multiple viewpoints from an esports match simultaneously. However, this entails new problems to be solved.
[0006] Traditionally in sports, and in esports (even though it is a much younger phenomenon), it is relevant to present highlights/events of interest from a match after it has been played, or even during the time it is being played. Providing such highlights to a user in an efficient way has traditionally been fairly straightforward, since there has only been one source to choose highlights from. However, with new streaming technologies, particularly ones comprising multiple available viewing perspectives, how such a highlight stream should be provided to a user is not obvious.

Summary of invention
[0008] An object of the present invention is to solve at least some of the problems outlined above. It is possible to achieve these objects and others by using devices and systems as defined in the attached independent claims.
[0009] According to a first aspect, there is provided a method performed by a streaming server of a communications network for providing highlights of an esports game from a server to a communications device, the server comprising a plurality of software streaming clients. The method comprises connecting to a game server, the game server comprising match metadata and a plurality of video streams of different perspectives of the esports game, and rendering the plurality of video streams from the game server, wherein each software streaming client renders a different video stream. The method further comprises analyzing the match metadata in order to find match events of interest in the esports match, compiling a list of match events of interest, and assigning an interest value to each match event of interest in the compiled list. The method further comprises determining relevant video segments of the video streams for each match event of interest and providing the relevant video segments to the communications device, wherein the providing comprises providing at least two relevant video segments for at least one match event of interest.
[0010] By using a method according to the present disclosure, it becomes possible to provide video of match events of interest from a stream comprising multiple perspective views in an efficient and synchronized manner.
[0011] According to an optional embodiment, the determining step is based on the match metadata. By having the determining step based on match metadata, a faster method may be achieved which quickly identifies the relevant video segments.

[0012] According to an optional embodiment, each event of interest comprises timestamp data, each relevant video segment comprises timestamp data, and the determining step is based on the timestamp data. The determining step may be based on both timestamp data and on match metadata.
[0013] Further, in an optional embodiment the plurality of video streams and the corresponding rendered video streams also comprise timestamp data. The determining and/or the providing and/or the bundling steps may further be based on such timestamp data.
[0014] According to an optional embodiment, the method further comprises bundling the relevant video segments before providing them to the communications device. Such a step may typically be based on timestamp data, such that the relevant video segments are provided in a bundle synchronized relative to a match time.
[0015] According to a second aspect, there is provided a streaming server for providing highlights of an esports match to a communications device, the server comprising a plurality of software streaming clients, an encoder for connecting to a game server comprising a plurality of video streams for a match and for rendering the plurality of video streams on the plurality of software streaming clients, and a processor (1010). The processor is adapted for analyzing the match metadata in order to find match events of interest in the esports match, compiling a list of match events of interest and assigning an interest value to each match event of interest in the compiled list. The processor is further adapted for determining relevant video segments of the video streams for each match event of interest and providing the relevant video segments to the communications device, wherein the providing comprises providing at least two relevant video segments for at least one match event of interest.

[0016] According to an optional embodiment, the streaming server further comprises a bundling unit adapted for bundling video segments related to a same match event of interest together.
[0017] There are further optional embodiments of the second aspect corresponding to the optional embodiments of the first aspect.
[0018] The aspects and embodiments described above are freely combinable with each other. There are optional embodiments of the second, third, fourth, fifth and sixth aspects that correspond to the optional embodiments of the first aspect.
Brief description of drawings
[0019] The solution will now be described more in detail, by way of example, with reference to the accompanying drawings, in which:
[0020] Fig. 1 shows a schematic diagram of providing a highlight stream to a client device in a traditional architecture.
[0021 ] Fig. 2 shows a schematic diagram of providing a highlight stream to a client device in a multi perspective architecture.
[0022] Fig. 3 shows a flow chart of a method according to the present disclosure.
[0023] Fig. 4 shows a server according to the present disclosure.

Description of embodiments
[0024] In the following, a detailed description of the different embodiments of the solution is disclosed with reference to the accompanying drawings. All examples herein should be seen as part of the general description and are therefore possible to combine in any way in general terms.

[0025] Briefly described, the present solution relates to providing a multiple perspective highlight stream to a client device, particularly a multi perspective highlight stream related to an esports match. Examples of games which are suitable for such streaming include any game with more than one relevant perspective, such as e.g. Counter Strike, League of Legends, DotA 2 and Overwatch.
[0026] In a solution according to the present disclosure, match metadata from a multiple perspective stream of an esports match is analyzed in order to detect events of interest, such as important and/or spectacular kills and deaths, decisive rounds and similar. A list is compiled of all events of interest, and the events are ranked according to how important and/or interesting an event is deemed to be. Relevant video segments are determined for each event of interest, based on a plurality of available perspectives. The relevant video segments relating to a same event of interest are then bundled together and the bundle is provided to a client device.
[0027] Today, solutions are becoming available for providing streams of esports matches where multiple perspectives are being displayed to a viewer simultaneously. For example, a multiple perspective stream of an esports game may contain one perspective from each player in the game and an event stream, wherein the event stream may be based on the different player perspectives or it may have a unique perspective of its own. In such solutions with multiple perspectives, a viewer may typically choose which ones and/or how many of these perspectives that should be displayed simultaneously.
[0028] These new solutions with multiple perspectives make it possible to provide media to users in new ways, which entails new possibilities as well as new problems. Traditional solutions for streaming typically comprise one stream for a particular match or event, and consequently events of interest, i.e. highlights, from a match only have one relevant source to consider. Thus, the problem of providing relevant video segments for a particular event of interest in such traditional streaming is straightforward, but it lacks variety and options to choose from.
[0029] Streaming solutions comprising multiple different perspectives of a particular match are subject to the problem of how to present a stream of highlights, i.e. match events of interest, to a user, in an appealing and efficient way. For any particular event of interest, there are likely to be multiple perspectives that may be relevant and/or interesting for a viewer to see.
[0030] For example, consider the case of an ongoing match of Counter Strike: Global Offensive. Suppose that there is an event in which one player in a team kills all five players of the other team. The perspective of the player doing all of the kills will definitely be considered a relevant viewpoint. The perspective of some, or all, of the players getting killed may also be relevant for a viewer. Furthermore, perspectives of some of the teammates of the player doing the kills may also be relevant, even if they didn't engage in the fight by themselves, since they may have been watching from an interesting viewpoint or similar.
[0031 ] As illustrated by this example, there may be multiple relevant views to present to a viewer from any particular event of interest. As will be understood, this may be applicable to any solution for streaming of a match that comprises several different perspectives that may be relevant to present to a viewer.
[0032] Looking now at Fig. 1 , a traditional solution for providing highlights to a user is shown. A match for which highlights are to be provided is being played on a game server. A streaming server connects to the game server in order to render the game on a different platform, from which a streaming client streams the video and audio of the game to a user. The user is shown one perspective, which is typically the only available perspective. After the game has been finished, or during the game, events of interest are determined that are to be presented to a user on a client device. Since there is only one stream to choose from, the relevant video segments for each determined event of interest are easily located, and the highlights may be provided to a client device for viewing.
[0033] Looking now at Fig. 2, a solution according to the present disclosure is presented schematically. As can be seen, key differences are that the streaming server now comprises a plurality of streaming clients, each of which will present a perspective out of a plurality of available perspectives, and that each of these perspectives of a match being played are rendered on the server and may be presented to the user. This entails that, after a game has been finished and highlights are to be presented to a viewer, this solution has vastly more available video streams, i.e. available perspectives, to choose from when determining which highlights to present to a user, compared to traditional solutions.
[0034] How to determine which highlights are relevant, and how to present these, is not a straightforward problem in the same way it is for the traditional solutions comprising only one stream of a given match. For example, some questions that arise include how to provide highlights from different
perspectives relating to a same event. Should highlights from each perspective view be presented sequentially, or should they be displayed simultaneously for each event of interest? Should all available views be presented for each event of interest such that the user has to keep track of which perspectives are relevant for each event of interest, or should only certain views be presented for each event? Should the video from each available perspective be analyzed in order to determine if the stream is relevant for each match event of interest, or should the events of interest be determined based on other factors? Should all the video be provided to the user who can then choose to play only the relevant segments, or should only the relevant segments be provided to the user?
[0035] As can be seen, there are several questions that arise when trying to solve the problem of how to provide a highlight stream from a multiple perspective stream to a user. The present disclosure provides a solution for achieving this.
[0036] Looking now at Fig. 3, the steps of a method according to the present disclosure will now be described. The method in Fig. 3 is performed by a server, denoted streaming server, typically comprising a plurality of software streaming clients. The server is in some embodiments a cloud server hosted in a cloud environment.
[0037] In a first step S100, the streaming server connects to a game server. The game server is a server hosting an ongoing match of a game, and/or a server hosting a game that has already been played. An ongoing game hosted on a game server has a plurality of perspective views available, each
perspective view represented by a video stream. By connecting to the game server, the streaming server gets access to the video for each perspective view available. Typically, the game server comprises at least one perspective view for each player in the match.
[0038] In a second step S110, the video from the game server is rendered onto the streaming server. Typically the game server will host the match in a format only accessible via a game platform, and not accessible from e.g. web browsers. By rendering the video onto the streaming server, it becomes more accessible to users. The streaming server renders the same video as is being shown in the game server, typically without modification. In some embodiments, the rendering step may comprise adding information such as a watermark in order to identify the company performing the rendering, and/or in order to make it more difficult for others to reproduce the stream.
[0039] In most embodiments, the rendering step S110 is performed by use of a plurality of software streaming clients comprised in the streaming server. Each software streaming client connects to one of the plurality of available
perspective views, such that one available view is rendered by one software streaming client. In most embodiments, the same perspective view is not rendered by two different software streaming clients, i.e. the streaming server comprises only one video per perspective view available.
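The one-to-one assignment between software streaming clients and perspective views can be sketched as follows. This is a hypothetical Python illustration under the assumption that views and clients are identified by simple string ids; it is not part of the disclosed implementation.

```python
def assign_clients(perspective_views, client_ids):
    """Map each available perspective view to exactly one streaming client."""
    if len(client_ids) < len(perspective_views):
        raise ValueError("need at least one streaming client per perspective view")
    # zip pairs each view with one client, so no view is rendered by two
    # different clients and the server holds only one video per view.
    return dict(zip(perspective_views, client_ids))

mapping = assign_clients(["player1", "player2", "observer"], ["c1", "c2", "c3", "c4"])
print(mapping)  # {'player1': 'c1', 'player2': 'c2', 'observer': 'c3'}
```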
[0040] In a third step S120, match metadata is analyzed in order to determine match events of interest. Typically, match events of interest entail events such as kills, winning rounds, and similar events that are relevant for the outcome of a match. The method starts by analyzing the metadata of a match, which is a more efficient process than analyzing each individual video stream being hosted by the software streaming clients in order to look for match events of interest. The match metadata is typically also located on the game server, and comprises measurable, typically quantifiable, information such as kills, deaths, round wins and other information relevant for the game being played. In some embodiments, the match metadata may be automatically generated, and in some embodiments the match metadata may be automatically generated by the game server. As will be understood, the metadata available will differ depending on the game that is being played. In some embodiments, the match metadata further comprises timestamp data, detailing when the events happen in relation to a match time. Further, each video stream of the plurality of video streams rendered in step S110 may also comprise timestamp data, detailing how the video stream relates to a match time.
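The metadata scan of step S120 can be sketched as a simple filter. The metadata record layout (a list of dicts with "type" and "time" fields) and the set of interesting event types are assumptions for illustration only; the actual fields differ per game.

```python
# Assumed taxonomy of events relevant for the outcome of a match.
EVENT_TYPES_OF_INTEREST = {"kill", "round_win"}

def find_events_of_interest(match_metadata):
    """Keep only the metadata records describing match events of interest."""
    return [e for e in match_metadata if e["type"] in EVENT_TYPES_OF_INTEREST]

metadata = [
    {"type": "kill", "time": 422, "actor": "p1", "victim": "p7"},
    {"type": "item_pickup", "time": 430, "actor": "p2"},
    {"type": "round_win", "time": 505, "team": "A"},
]
events = find_events_of_interest(metadata)
print([e["type"] for e in events])  # ['kill', 'round_win']
```

Because only compact metadata records are scanned, no video stream needs to be decoded at this stage.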
[0041] After the match metadata has been analyzed in order to identify match events of interest, a list of all relevant match events of interest is compiled in a step S130.
[0042] After the list of all relevant match events of interest has been compiled, an interest value is assigned to each identified match event of interest in a step S140. In some embodiments, the interest value is also based on match metadata. For example, a match event of interest comprising five kills is likely to be considered more interesting than a match event of interest comprising one kill. A match event of interest comprising one kill that immediately precedes the end of a round is likely to be more interesting than a match event of interest happening at a start of a round. Other types of match metadata may be used to determine the interest value, and the types of match metadata used varies depending on which game is being played. In some embodiments, the interest value is assigned based on the amount of kills present in the match event of interest, such that an event of interest with a higher amount of kills has a higher interest value than an event of interest with a lower amount of kills. In some embodiments, the interest value is assigned such that an event of interest occurring earlier in a match is assigned a higher interest value than an event of interest occurring later in the match. In some embodiments, the interest value is assigned such that an event of interest that occurs later in a match is assigned a higher interest value than an event of interest occurring earlier in the match.
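A minimal sketch of the interest-value assignment of step S140 follows. The weights and the 5-second "end of round" threshold are illustrative assumptions, not values from the disclosure; they merely show kill count and round timing feeding into a score.

```python
def interest_value(event, round_end_time=None):
    """Score an event of interest from assumed metadata fields."""
    score = event.get("kills", 0) * 10          # more kills -> higher value
    if round_end_time is not None:
        # A kill immediately preceding the end of a round is scored higher.
        if 0 <= round_end_time - event["time"] <= 5:
            score += 15
    return score

five_kill = {"time": 100, "kills": 5}
single_kill_late = {"time": 118, "kills": 1}
print(interest_value(five_kill, round_end_time=120))         # 50
print(interest_value(single_kill_late, round_end_time=120))  # 25
```

Rewarding earlier or later events, as in the embodiments above, would simply add a term that increases or decreases with the event time.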
[0043] After an interest value has been determined and assigned to each match event of interest, the method comprises a step S150 of determining relevant video segments for each match event of interest. In some
embodiments, the step S150 is based on only the match metadata, in some embodiments including the timestamp data that may be included in the match metadata. In such embodiments, the determining step S150 may comprise choosing video segments starting a predetermined time before a match event of interest, and ending a predetermined time after said match event of interest. For example, in case of a kill being a match event of interest, and the kill occurs at 7:02 in match time, the determined relevant video segments could be from 6:42 to 7:12 in match time, for all relevant perspectives.
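The fixed-window variant of step S150 can be sketched as below. The 20-second lead and 10-second tail are assumptions chosen to match the 6:42-7:12 example; any predetermined times could be substituted.

```python
PRE_SECONDS = 20   # assumed predetermined time before the event
POST_SECONDS = 10  # assumed predetermined time after the event

def segment_window(event_time, match_start=0):
    """Segment boundaries (in match-time seconds) around an event of interest."""
    start = max(match_start, event_time - PRE_SECONDS)  # never before match start
    stop = event_time + POST_SECONDS
    return start, stop

# Kill at 7:02 (422 s) -> relevant segments run 6:42 (402 s) to 7:12 (432 s).
print(segment_window(422))  # (402, 432)
```

The same window would be applied to every perspective view deemed relevant for the event.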
[0044] Which perspectives are considered relevant may in some embodiments also be based on only the match metadata, such that the players involved in an event are determined to represent the relevant perspectives. For example, the player performing a kill, the player being killed, and the players assisting in the kill may be determined as the relevant perspectives for that kill, wherein the kill represents a match event of interest.
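Deriving the relevant perspectives purely from metadata, as described above, can be sketched as follows; the field names ("killer", "victim", "assists") are hypothetical and stand in for whatever the game's metadata actually records.

```python
def relevant_perspectives(kill_event):
    """Players whose perspective views are relevant for a kill event."""
    perspectives = {kill_event["killer"], kill_event["victim"]}
    perspectives.update(kill_event.get("assists", []))  # assisting players, if any
    return sorted(perspectives)

event = {"type": "kill", "time": 422,
         "killer": "p1", "victim": "p7", "assists": ["p3", "p4"]}
print(relevant_perspectives(event))  # ['p1', 'p3', 'p4', 'p7']
```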
[0045] In some embodiments, the step S150 may comprise performing visual analysis on each of the plurality of video streams for each identified event of interest, in order to determine if the perspective view of a video stream is relevant to display for a specific identified match event of interest. The analysis may also be based on timestamp data, which makes it possible to only analyze parts of the videos being rendered by the plurality of streaming clients, for each match event of interest. For example, if an event is identified as occurring at 13:43 in match time, it would be unnecessary to analyze the entire video for determining whether a certain perspective view is relevant for the determined match event of interest. Instead, it would be more beneficial to analyze a time interval close to that of the determined time, such as for example between 12:43 and 14:43. The benefits from only analyzing relevant parts of the video are greater than in traditional solutions, considering that there are a plurality of available video streams to analyze, each comprising a different perspective view.
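The windowed analysis in paragraph [0045] means only a bounded range of frames per perspective stream has to pass through visual analysis. An illustrative Python sketch (the frame rate and one-minute margin are assumptions; the patent's example uses a ±1 minute interval around 13:43):

```python
def frames_to_analyze(event_time_s: int, fps: int = 30, margin_s: int = 60):
    """Frame-index range around an event, so only about two minutes of
    each perspective stream needs visual analysis instead of the whole video."""
    start_s = max(event_time_s - margin_s, 0)
    end_s = event_time_s + margin_s
    return range(start_s * fps, end_s * fps)
```

For an event at 13:43 (823 s), this selects frames covering 12:43 to 14:43 only; the saving is multiplied by the number of available perspective streams, which is the benefit the paragraph points out.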
[0046] The visual analysis that in some embodiments is performed in step S150 may comprise visual analysis of each video stream for each determined match event of interest, in order to determine if the video stream comprises a relevant video segment for that match event of interest. For example, the perspective view of a player being involved in a kill is likely to be determined as a relevant perspective view. The perspective view of one of that player's teammates may also provide a relevant perspective, even though this player was not directly involved in the kill. This is determined by visual analysis, typically by looking for certain pre-defined criteria, such as whether the video representing a perspective view shows any of the persons directly involved in the match event of interest.

[0047] By the combination of analyzing match metadata and then determining relevant video segments based on the match events of interest obtained from analyzing the metadata, a more efficient method is achieved that is more likely to include all perspectives that may be of relevance to a viewer. As described above, certain perspective views, such as player perspective views, may provide relevant and interesting viewpoints for events wherein the player is not involved per se, which would be impossible to locate based only on the match metadata, since the match metadata typically contains no information about things such as whether an event is visible from a specific player's point of view.
[0048] After step S150 has been performed, the streaming server has information about all relevant video segments for each match event of interest, comprising data regarding which parts of each of the plurality of available video streams are relevant for a particular match event of interest.
[0049] The method further comprises a step S160 of providing relevant video segments to a communications device, wherein the step S160 comprises providing at least two relevant video segments for at least one match event of interest. This entails that for each determined match event of interest, the user is provided with video of at least two relevant perspective views, such that the user may view the perspective views simultaneously. As will be understood, the maximum number of perspective views provided to a user for any particular match event of interest is the number of perspective views available for the particular match being analyzed. In some embodiments, the relevant video segments are provided such that the video segments may be manipulated independently of each other, wherein manipulated denotes that they may be played, fast-forwarded, rewound, stopped, paused, etc.
[0050] In some embodiments, the video segments may be provided such that they are synchronized relative to an in-game timer, such that if the video segments are played and started simultaneously they will reflect the same in-game time. An in-game timer may be a timer which indicates for example a time elapsed from the start of a match or the start of a round, or a time remaining until the end of a round or until the end of a match. The method may then also comprise a preceding step of analyzing visual time data in each of the relevant video segments, in order to determine a relationship between an in-game time of a video segment and time metadata related to the same video segment, and providing the video segments such that they are synchronized in relation to the in-game timer, wherein visual time data is visually identifiable data in a video stream or video segment indicative of a game time or in-game timer.
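The relationship between a segment's own timeline and the in-game timer described in paragraph [0050] amounts to computing a per-stream offset once and reusing it. A hedged sketch, assuming the in-game time has already been read by visual analysis (function names and the linear-offset model are illustrative assumptions, not the patent's stated method):

```python
def stream_offset(visual_game_time_s: int, stream_time_s: float) -> float:
    """Offset between a stream's own timeline and the in-game timer.

    If visual analysis reads 1:30 (90 s) on the in-game timer at 95.0 s
    into the stream, the stream runs 5.0 s ahead of the game clock."""
    return stream_time_s - visual_game_time_s


def synchronized_start(target_game_time_s: int, offset_s: float) -> float:
    """Position within a stream that corresponds to a given in-game time."""
    return target_game_time_s + offset_s
```

Applying each stream's own offset to the same target in-game time yields per-stream start positions at which simultaneously started segments reflect the same in-game time.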
[0051] The communications device to which the video segments are provided may be any device capable of streaming video and audio, and is typically a device such as a smartphone, a tablet or a computer.
[0052] In some embodiments, the providing step S160 is based on the assigned interest value, such that video segments related to a match event of interest with a high interest value are provided before video segments related to a match event of interest with a lower interest value. In some embodiments, all of the relevant video segments may be provided simultaneously, and in some embodiments, it is possible to provide the video segments with a lower interest value before the video segments with a higher interest value.
[0053] In some embodiments, the method may also comprise an intermediate step of bundling the relevant video segments for each match event of interest. After the relevant video segments for each match event of interest have been determined in step S150, the streaming server may package the relevant video segments into a bundle for each match event of interest. In some embodiments, this comprises creating a multi perspective video for each match event of interest, each multi perspective video comprising at least two relevant video segments. However, a particular perspective view may be relevant during a longer time period for a certain match event of interest than another perspective view. For example, consider a kill that takes place at 1:00 in match time. The perspective of the person being killed may be relevant to watch from 00:45 up until 1:00, whereas the perspective of the person performing the kill may be relevant to watch from 00:50 to 1:10. The server may then create a multi perspective video where the video from one perspective view starts and/or ends before the video from another perspective view. As will be understood, such a multi perspective video comprises multiple different perspectives, wherein each video of a perspective view may have a different duration, start time and stop time. In embodiments comprising the bundling step, the bundling is typically based on timestamp data of the match events of interest and of the relevant video segments related to the match event of interest. The bundles provided to the communications device are typically such that the viewer only needs to choose to play the entire bundle, whereupon the relevant video segments included in the bundle start and stop according to their individual start and stop times, in a synchronized manner such that the video segments streamed to the user are shown with the same relation to the match time.
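The bundle structure in paragraph [0053], where each perspective has its own start and stop time within one multi perspective video, can be sketched as a small manifest. This is an illustrative Python sketch using the 00:45-1:00 victim and 00:50-1:10 killer example; the `Segment` type and its fields are assumptions, not a format defined by the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Segment:
    perspective: str   # e.g. the player whose view this segment shows
    start_s: int       # match time at which this perspective becomes relevant
    end_s: int         # match time at which it stops being relevant


def bundle_span(segments: List[Segment]) -> Tuple[int, int]:
    """Overall (start, end) match-time span of a multi perspective bundle,
    covering segments that may start and stop at different times."""
    return min(s.start_s for s in segments), max(s.end_s for s in segments)
```

For the example bundle the span is 00:45 to 1:10; a player need only start the bundle, and each segment's own `start_s`/`end_s` determines when it appears and disappears, which matches the synchronized playback the paragraph describes.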
[0054] In embodiments comprising the bundling step, the relevant video segments may be provided to the communications device such that one bundle is provided for each match event of interest, each bundle comprising a multi perspective video as described above.
[0055] Looking now at Fig. 4, the functional architecture of a streaming server according to the present disclosure will now be described. As described previously, the streaming server 1000 typically comprises a plurality of software streaming clients, and is typically hosted in a cloud environment. The software streaming clients are adapted for connecting to a game server comprising a plurality of video streams of different perspectives of the esports match.
[0056] The streaming server 1000 comprises an encoder 1005 for rendering a plurality of video streams on the software streaming clients of the server. Typically, the rendering comprises connecting to a game server comprising video streams for a match being played, or that has been played, and reproducing these video streams on the streaming server 1000.
[0057] The server further comprises a processor 1010, adapted for analyzing match metadata, compiling a list of match events of interest, assigning an interest value to each match event of interest, and for determining relevant video segments for each match event of interest.
[0058] The server 1000 further comprises a bundling unit 1015, adapted for bundling video segments related to a same match event of interest together. The bundling unit 1015 is adapted for creating a multi perspective video based on a plurality of single perspective videos. Such multi perspective videos may comprise video segments starting and stopping at different times. This entails that for some time durations the multi perspective video may comprise only a single video being displayed; however, a multi perspective video according to the present disclosure will always comprise at least one time period wherein at least two video segments are displayed simultaneously. When the display of the relevant video segments starts and stops is typically based on the timestamp data related to the events of interest and/or to the determined relevant video segments and/or to the plurality of video streams.
[0059] Although the description above contains a plurality of specificities, these should not be construed as limiting the scope of the concept described herein, but as merely providing illustrations of some exemplifying embodiments of the described concept. It will be appreciated that the scope of the presently described concept fully encompasses other embodiments which may become obvious to those skilled in the art, and that the scope of the presently described concept is accordingly not to be limited. Reference to an element in the singular is not intended to mean "one and only one" unless explicitly so stated, but rather "one or more". Moreover, it is not necessary for an apparatus or method to address each and every problem sought to be solved by the presently described concept, for it to be encompassed hereby.

Claims

1. A method performed by a streaming server of a communications network for providing highlights of an esports match from a server to a communications device, the server comprising a plurality of software streaming clients, wherein the method comprises:
connecting (S100) to a game server, the game server comprising match metadata and a plurality of video streams of different perspectives of the esports match;
rendering (S110) the plurality of video streams from the game server, wherein each software streaming client renders a different video stream;
analyzing (S120) the match metadata in order to find match events of interest in the esports match;
compiling (S130) a list of match events of interest;
assigning (S140) an interest value to each match event of interest in the compiled list;
determining (S150) relevant video segments of the video streams for each match event of interest;
providing (S160) the relevant video segments to the communications device, wherein the providing comprises providing at least two relevant video segments for at least one match event of interest.
2. The method according to claim 1, wherein the determining step (S150) is based on the match metadata.
3. The method according to claim 1 or 2, wherein each event of interest comprises timestamp data, each relevant video segment comprises timestamp data, and wherein the determining step (S150) is based on the timestamp data.
4. The method according to any one of claims 1-3, further comprising a step of bundling the relevant video segments before providing them to the communications device.
5. A streaming server for providing highlights of an esports match to a communications device, the server comprising:
a plurality of software streaming clients for connecting to a game server comprising a plurality of video streams for the esports match and match metadata;
an encoder (1005) for rendering the plurality of video streams on the plurality of software streaming clients;
a processor (1010), adapted for:
analyzing the match metadata in order to find match events of interest in the esports match;
compiling a list of match events of interest;
assigning an interest value to each match event of interest in the compiled list;
determining relevant video segments of the video streams for each match event of interest;
providing the relevant video segments to the communications device, wherein the providing comprises providing at least two relevant video segments for at least one match event of interest.
6. The streaming server according to claim 5, further comprising: a bundling unit (1015) adapted for bundling video segments related to a same match event of interest together.
7. The streaming server according to claim 5 or 6, wherein the determining is based on the match metadata.
8. The streaming server according to any one of claims 5-7, wherein each event of interest comprises timestamp data, each relevant video segment comprises timestamp data, and wherein the determining step is based on the timestamp data.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE1750436A SE540666C2 (en) 2017-04-11 2017-04-11 Methods and nodes for providing multi perspective video of match events of interest
SE1750436-6 2017-04-11
