GB2579659A - Player interaction capturing system and method

Info

Publication number
GB2579659A
GB2579659A
Authority
GB
United Kingdom
Prior art keywords
event
interest
player
video
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1820134.3A
Other versions
GB201820134D0 (en)
Inventor
Szekely Adam
Ashton Max
Michailidis Lazaros
Weaver Tom
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment Inc
Original Assignee
Sony Interactive Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Interactive Entertainment Inc filed Critical Sony Interactive Entertainment Inc
Priority to GB1820134.3A
Publication of GB201820134D0
Publication of GB2579659A
Status: Withdrawn

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/45 Controlling the progress of the video game
    • A63F13/49 Saving the game status; Pausing or ending the game
    • A63F13/497 Partially or entirely replaying previous game actions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213 Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/215 Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/537 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F13/5375 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/67 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/70 Game security or game management aspects
    • A63F13/79 Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
    • A63F13/798 Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85 Providing additional services to players
    • A63F13/86 Watching games played by other players
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/27 Server based end-user applications
    • H04N21/274 Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/27 Server based end-user applications
    • H04N21/274 Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N21/2743 Video hosting of uploaded data from client
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4781 Games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Optics & Photonics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A system and method for capturing player interactions with a video game so that they can be replayed. The occurrence of an event of interest is detected in a video game 100. Video of the event of interest is recorded 110, and player inputs, such as controller interactions, are also recorded 120. Content is generated comprising both the recorded video and the recorded player inputs 150, and both are presented to the viewer during playback of the content. A corresponding system is also provided. The content may be stored and distributed to a number of users in response to a request from those users. Game state information may be recorded 130 and analysed by a machine learning program. The event is preferably detected using a machine learning process, wherein the machine learning program detects events of interest based on detected indicators in the player's gameplay (such as combinations of player inputs), on the player's or spectators' reactions to gameplay, or on game state information.

Description

PLAYER INTERACTION CAPTURING SYSTEM AND METHOD
This disclosure relates to a player interaction capturing system and method.
Gaming has become an increasingly widespread hobby, and in some cases a part- or full-time profession, for a number of players around the world. In the same period, it has become increasingly desirable and possible to create and share content relating to games with a wide audience. For instance, players may livestream their gameplay or upload videos of their gaming highlights to a server - this enables a user's gameplay to be seen by potentially millions of other players.
One result of this is that players are able to share highlights of their gameplay - impressive feats performed by a player may be viewed by any number of other players, each of whom may wish to emulate the successes of the first player. While the first player may provide a voiceover with the highlights to explain their actions, or the actions may be derivable upon viewing the content, this places a significant burden upon the viewer to interpret and reproduce the actions. In addition, the content creator is tasked with revisiting their actions to attempt to determine the order of events and identify which buttons they actually pressed - a task that is not always simple as the complexity of games, or the speed at which they are played, increases.
In addition to the problems of identifying the sequence of actions in a video highlight, there also exists a problem of identifying events of interest within the gameplay; that is, it may be difficult to identify which events within the gameplay would constitute a highlight that would be of interest to viewers.
The term 'events of interest' refers to any type of in-game occurrence that may be of interest to one or more players. For example, executing an impressive combo or posting a new record time may be sufficiently noteworthy achievements so as to qualify. While such a problem may be alleviated for users that record all of their gameplay, there is still a burden upon a player in seeking out that content. Further to this, a player may not actually be aware of what qualifies as an event of interest - a player that is new to the game may not fully appreciate how difficult particular manoeuvres are to execute, for example.
By identifying these events automatically, it becomes possible to store only the video clips relating to these events, rather than a video of a whole game session, reducing the burden on processors and storage media in generating and storing this content. Existing methods for detecting events of interest within a game rely largely upon predefined thresholds for particular measures of success. For example, a user may receive a positive score if they beat an opponent while sustaining a below-threshold amount of damage, or if they are able to put together a combo of an above-threshold number of hits. Such measures are lacking in many respects, however, as they are burdensome for a game creator to define and often lack the context to provide any truly reliable indication of events of interest.
It is in the context of the above problems that the present invention arises.
This disclosure is defined by claim 1.
Further respective aspects and features of the disclosure are defined in the appended claims.
Embodiments of the present invention will now be described by way of example with reference to the accompanying drawings, in which: Figure 1 schematically illustrates a method for capturing one or more player interactions with a video game; Figure 2 schematically illustrates the display of generated content relating to one or more player interactions with the video game; Figure 3 schematically illustrates a machine learning based process; Figure 4 schematically illustrates a system for capturing and distributing one or more player interactions within a video game; Figure 5 schematically illustrates a recording unit; Figure 6 schematically illustrates a generation unit; and Figure 7 schematically illustrates a content management unit.
The method for capturing one or more player interactions with a video game, as shown in Figure 1, comprises a number of steps which will be briefly outlined here, before being discussed in more detail below. Of course, the order of the steps shown may be varied as appropriate for a given implementation, and some steps may be omitted entirely in some embodiments. For example, the analysis of the event of interest and/or recording of game state information may not be performed in some embodiments.
A step 100 comprises detecting the occurrence of an event of interest in the video game.
This may be performed using a machine learning process, for example, and may be performed in real-time (or approximately real-time) or upon captured video of gameplay.
A step 110 comprises recording video of the event of interest, or information that enables a video to be reconstructed by a viewer of the content.
A step 120 comprises recording player inputs relating to the event of interest.
A step 130 comprises recording game state information corresponding to at least the event of interest.
A step 140 comprises analysing the event of interest, for example analysing video game state and/or one or more users associated with the video game session. This may be performed using a machine learning process, for example.
A step 150 comprises generating content comprising the recorded video and the recorded player inputs, such that during playback a viewer is presented with both the recorded video and the recorded player inputs. An example of the display of this generated content is shown in Figure 2.
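Expressed as code, steps 100-150 form a small capture pipeline. The following Python sketch is illustrative only; the names (ReplayContent, detect_event, capture_pipeline) and the combo-based trigger are assumptions made for the example, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ReplayContent:
    """Generated content: recorded video plus the inputs that produced it."""
    video_frames: List[bytes]
    player_inputs: List[str]
    game_state: Optional[dict] = None
    category: Optional[str] = None

def detect_event(state: dict) -> bool:
    """Step 100 stand-in: any detector (thresholds, ML) can sit behind this."""
    return state.get("combo_multiplier", 0) >= 10

def capture_pipeline(frames, inputs, states) -> List[ReplayContent]:
    """Walk a session; when an event of interest is detected, bundle the
    corresponding video, inputs and game state into a content item."""
    captured = []
    for frame, pressed, state in zip(frames, inputs, states):
        if detect_event(state):                          # step 100
            captured.append(ReplayContent(
                video_frames=[frame],                    # step 110
                player_inputs=list(pressed),             # step 120
                game_state=dict(state),                  # step 130
                category="high combo",                   # step 140 (analysis)
            ))                                           # step 150 (generation)
    return captured

# A toy three-frame session in which only the last frame qualifies.
frames = [b"f0", b"f1", b"f2"]
inputs = [["X"], ["X"], ["X", "triangle"]]
states = [{"combo_multiplier": 1}, {"combo_multiplier": 4}, {"combo_multiplier": 12}]
print(len(capture_pipeline(frames, inputs, states)))     # 1
```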
Figure 2 schematically illustrates the display of generated content relating to one or more player interactions with the video game, the display comprising video content and input information relating to the displayed video content. This content may be generated in accordance with the method of Figure 1.
The screen 200 (for example, a television) is configured to display the generated content. The generated content comprises gameplay video 210 and player inputs 220 that are associated with the event of interest that is shown in the video 210. The gameplay video 210 may comprise a player-controlled character 230 and one or more opponents 240, but of course this may vary dependent upon the game.
The player inputs 220 may be presented in any number of suitable formats for indicating to a user a sequence of inputs that should be provided to replicate the event shown in the video 210. The example shown in Figure 2 is a sequence of labelled button presses (X, X, △, ○) that could be used to perform the actions of the player-controlled character 230 as shown in the video 210. The displayed player inputs 220 may differ from those used in the original event of interest if players use different controller mappings (such that different buttons have different functions); in such cases, inputs that are relevant to the viewer should be provided. Alternatively, or in addition, semantic labels for the inputs may be provided, such as 'jump', 'jump', 'quick attack', 'dodge' being displayed with or in place of the X, X, △, ○ sequence shown in Figure 2.
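As a hedged illustration, translating recorded buttons into semantic labels and then into a viewer's own controller mapping can amount to two dictionary lookups. All bindings below are hypothetical examples, not actual game bindings.

```python
# Button presses as recorded from the original player.
recorded = ["X", "X", "triangle", "circle"]

# Hypothetical semantic meaning of each button under the original player's binding.
original_binding = {"X": "jump", "triangle": "quick attack", "circle": "dodge"}

# The viewer's (different) binding, mapping each semantic action to their button.
viewer_binding = {"jump": "circle", "quick attack": "square", "dodge": "L1"}

semantic = [original_binding[b] for b in recorded]
remapped = [viewer_binding[a] for a in semantic]

print(semantic)   # ['jump', 'jump', 'quick attack', 'dodge']
print(remapped)   # ['circle', 'circle', 'square', 'L1'] - what the viewer is shown
```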
Returning to the discussion of Figure 1, the detection of the one or more events of interest during the playing of a videogame, as performed in step 100, may be implemented in a number of ways. For example, real-time detection methods may be implemented using the output video data or actual game state data. Alternatively, or in addition, analysis may be performed later (such as post-match, or after a user has finished playing altogether) using either captured video data or game state data.
In some embodiments, user reactions may also be considered when identifying events of interest during gameplay. For example, it may be possible to identify one or more events of interest in dependence upon detected indicators in the player's, or one or more spectators', reactions to the gameplay. These reactions may be monitored using captured audio or video of the players/spectators, for example, or from detected motion of a controller that could be indicative of a particular reaction. In the case that a user is livestreaming their gameplay to one or more online viewers, reactions (such as text-based messages) in a chat room may also be analysed.
Such analysis may be performed on a per-user basis or per-game basis, rather than a general approach that is applied to all users uniformly, due to different players reacting differently to similar events or different reactions being associated with different games/genres.
For example, a game played at a high tempo or with a reliance on fast reflexes will likely draw more animated reactions than a slower, more strategic game, even if the magnitude of the feat is similar.
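Purely as an assumed sketch of one such reaction-based indicator: chat activity during a livestream could be compared against that stream's own baseline, which gives the per-user calibration described above for free. The threshold and data are illustrative.

```python
from statistics import mean, stdev

def reaction_spikes(msgs_per_second, z_threshold=1.5):
    """Return the seconds at which chat activity rises well above this
    particular stream's own baseline (a crude per-stream calibration)."""
    mu, sigma = mean(msgs_per_second), stdev(msgs_per_second)
    return [t for t, n in enumerate(msgs_per_second)
            if sigma > 0 and (n - mu) / sigma > z_threshold]

# A quiet stream with a burst of chat messages around t=6 and t=7.
print(reaction_spikes([2, 3, 2, 2, 3, 2, 40, 38, 3, 2]))   # [6, 7]
```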
The recording of video of the detected one or more events of interest, as performed in step 110 of Figure 1, may comprise the storing of images, video, audio, and/or game data. It is not necessary that video content is directly recorded in the step 110, as it may be beneficial (for example, in reducing a burden upon storage means) to instead record a number of parameters or other information that enables a video of the gameplay to be generated at a later time.
For example, information may be stored that describes an initial game state at the start of an event recording period, the user's inputs, the results of random events (such as critical hits in a fighting game, where the result is necessary to reproduce the outcome), and information about the movement of in-game opponents. The latter may be represented by a set of inputs describing the motion of the computer-controlled opponents, even though no real inputs were actually provided. From this information, it is possible to reconstruct a sequence of events within a game.
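Such a record might be structured as follows; the field and value names are hypothetical, chosen only to mirror the items listed above (initial state, user inputs, random event results, opponent motion).

```python
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class ReplayRecord:
    """Enough information to reconstruct the event without storing video."""
    initial_state: dict          # game state at the start of the recording period
    player_inputs: List[dict]    # timestamped inputs from the user
    rng_results: List[dict]      # outcomes of random events (e.g. critical hits)
    opponent_inputs: List[dict]  # synthetic 'inputs' describing CPU opponent motion

record = ReplayRecord(
    initial_state={"level": "arena_2", "player_hp": 100},
    player_inputs=[{"t": 0.00, "button": "X"}, {"t": 0.15, "button": "triangle"}],
    rng_results=[{"t": 0.20, "event": "critical_hit", "value": True}],
    opponent_inputs=[{"t": 0.05, "move": "lunge_left"}],
)
print(json.dumps(asdict(record), indent=2))   # serialisable for later playback
```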
The use of game data may also be advantageous in that it enables the user (or the recording system, as the process may be automated) to recreate the game environment and to record their own video of the action - for example, it may be more suitable to use viewpoints alternative to the player's (such as viewing the content in a third-person view rather than a first-person view) in order to fully appreciate the action shown in the video. A similar effect could be provided by recording video content from multiple viewpoints within the content and enabling a viewpoint to be selected (or a new viewpoint to be generated, if sufficient information has been captured) during playback.
The recording of player inputs relating to the detected one or more events of interest, as performed in step 120 of Figure 1, may comprise the storing of button press information, semantic input information, and/or other inputs (such as gestures or voice inputs). In some cases, it may be possible to identify button presses based upon the video that is captured, and as such information identifying the game and/or the actions performed in the video may be sufficient. The requirement when recording player inputs is that information is generated that enables a viewer to recreate the action seen in the video.
The recording of game state information corresponding to the detected one or more events of interest, as performed in step 130 of Figure 1, may comprise a recording of one or more variables that can be used to define the game state at that time. For example, this may include one or both of large-scale variables such as in-game location and/or time of day, and smaller-scale variables such as the number of enemies or the equipment selected by a player's avatar.
In some embodiments, it may be possible to identify superfluous game state information that need not be recorded. For example, cosmetic factors (such as the eye colour or hairstyle selected for an avatar) may be omitted from recording, as can any information that may be obtained elsewhere (such as from game files). Alternatively, or in addition, it may be unnecessary to record distinguishing features of enemies (such as equipped items or appearance) if these are at least partially standardised within the game - each enemy may simply be identified by one or a small number of identifiers (such as 'enemy #001') rather than by detailed information. In those embodiments, the omitted information may still be needed for playback. For example, a random or default value could be applied in place of the omitted values, or information could be supplied by a user that fills in the gaps. Alternatively, or in addition, cosmetic features of the original player's avatar in the video could be replaced with those of the viewer's own avatar, as this information may be locally available.
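A sketch of this kind of state filtering is given below, with an assumed set of 'cosmetic' keys and the 'enemy #001'-style identifiers mentioned above; the field names are illustrative.

```python
COSMETIC_KEYS = {"eye_colour", "hairstyle", "outfit"}   # recoverable or replaceable locally

def compress_state(state: dict) -> dict:
    """Drop cosmetic avatar fields and collapse enemies to short identifiers."""
    kept = {k: v for k, v in state.items()
            if k not in COSMETIC_KEYS and k != "enemies"}
    # Standardised enemies need only an identifier, not full appearance data.
    kept["enemies"] = [{"id": e["id"]} for e in state.get("enemies", [])]
    return kept

full_state = {
    "location": "castle_roof", "time_of_day": "dusk",
    "eye_colour": "green", "hairstyle": "mohawk",
    "enemies": [{"id": "enemy#001", "armour": "iron", "hat": "pointy"}],
}
print(compress_state(full_state))
# {'location': 'castle_roof', 'time_of_day': 'dusk', 'enemies': [{'id': 'enemy#001'}]}
```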
The analysing of the event of interest, as performed in step 140 of Figure 1, may include the analysis of the video game state and/or one or more users associated with the video game session. For example, indicators that have been identified as relating to events of interest may also be used to categorise or otherwise evaluate an event of interest. Alternatively, or in addition, player or spectator reactions may be analysed for a similar purpose. One manner in which such an analysis may be used is in identifying which elements should be included in the generation of content as performed in step 150 of Figure 1.
For example, an analysis of the event of interest may be able to better identify the start and/or end point of the event, as well as any superfluous button presses by the player (that is, those button presses that did not contribute to the event of interest in a meaningful way - an example of this is a user's pausing of the game during the event) that need not be included in the generated content.
The analysis of the event of interest (and/or the generated content) may further comprise a categorisation or rating of the content. For example, the type of event may be identified (such as 'high damage combo', 'perfect driving', 'great dodging', or 'fast win') and recorded as a part of the content to enable the content to be found and/or reviewed more easily. A rating may also be assigned that indicates either the impressiveness or the rarity of the event within the gaming community, for example.
For instance, a five-star rating could be applied when the event has been performed by only 1% of players, through to a one-star rating if the event has been performed by 50% of players. Alternatively, the rating could be based upon a deviation from the average values for the content - for example, a playthrough that takes only half the average time could be considered to have a high rating, while a playthrough that only marginally beats the average time could be considered to have a lower rating.
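Either rating rule reduces to a small function. A minimal sketch follows, assuming intermediate percentile bands that the description does not itself specify.

```python
def stars_from_rarity(pct_of_players_achieved: float) -> int:
    """Five stars for events ~1% of players have performed, down to one
    star at ~50%. The intermediate bands are assumed for illustration."""
    bands = [(1, 5), (5, 4), (15, 3), (30, 2)]   # (percentile ceiling, stars)
    for ceiling, stars in bands:
        if pct_of_players_achieved <= ceiling:
            return stars
    return 1

def rating_from_average(value: float, average: float) -> float:
    """Rate by deviation from the community average; e.g. half the average
    completion time scores far higher than a marginal improvement."""
    return max(0.0, (average - value) / average)

print(stars_from_rarity(1))               # 5 - performed by only 1% of players
print(stars_from_rarity(50))              # 1 - performed by 50% of players
print(rating_from_average(30.0, 60.0))    # 0.5  - half the average time
print(rating_from_average(59.0, 60.0))    # ~0.017 - marginally beats the average
```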
The generating of content comprising the recorded video and the recorded player inputs (such that during playback of the content a viewer is presented with both the recorded video and the recorded player inputs), as performed in step 150 of Figure 1, may comprise the generation of video content (and/or information enabling the playback of the recorded event, such as initial game state data and inputs/in-game actions describing the event).
Once generated, the content may be saved to a storage medium and/or uploaded to a server for distribution to viewers. In some examples, the content may be uploaded to a video hosting website that specialises in gameplay highlights, a user's profile on a gaming network, and/or a user's personal website.
In some embodiments, the content is provided using a standard video format with button presses overlaid (or otherwise provided) in a hard-coded fashion to form a single file that may be easy to distribute. Alternatively, or in addition, the video and inputs could be recorded as separate files (or as a video and associated metadata, for example) and a playback device may be operable to generate a suitable display using the information provided in the content - for instance, by providing video playback in a frame while the inputs are provided elsewhere on the display. This may aid the flexibility of the content in terms of the playback formats and/or customisation by the receiving device (for example, in translating the inputs to the user's own controller mapping).
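The separate-files option could be as simple as writing a small metadata file alongside the video for the playback device to interpret. The JSON layout, file names, and layout hint below are assumptions made for illustration.

```python
import json

def package_content(video_path: str, inputs: list, out_path: str) -> None:
    """Write input metadata alongside the video rather than burning it in,
    so the receiving device can lay out (and remap) the inputs itself."""
    metadata = {
        "video": video_path,                  # the companion video file
        "inputs": inputs,                     # timestamped, semantically labelled
        "layout_hint": "inputs_below_frame",  # assumed hint for the player UI
    }
    with open(out_path, "w") as f:
        json.dump(metadata, f, indent=2)

package_content(
    "highlight_0042.mp4",
    [{"t": 0.00, "button": "X", "action": "jump"},
     {"t": 0.15, "button": "triangle", "action": "quick attack"}],
    "highlight_0042.json",
)
```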
A number of the above steps may be implemented using machine learning or artificial intelligence based methods. For example, the steps of event of interest detection and/or analysis may be implemented using such methods. Such methods may be more appropriate than traditional methods for identifying events (such as relying on predefined thresholds being met, or the occurrence of trigger events), as they may be used to generate a deeper analysis of in-game events that is better able to identify when notable events have taken place. For example, these methods may be able to better understand the context in which a player's interaction occurs and it is often the context that separates an event of interest from an event of non-interest.
A number of examples of how an AI system may be implemented to achieve the above effects are now provided. Of course, these should not be taken as limiting; any suitable method may be used.
For example, Generative Adversarial Networks (GANs) may be used to train a system that is able to identify events of interest. The target in such a network may be the identification of events of interest, and this may be achieved by selecting (or generating) videos of in-game events (the generated input). In-game events that would be of interest may be identified from a training data set (as discussed above), and a discriminator may be operable to distinguish between training data and generated inputs based upon whether events of interest are encountered. Examples of useful training data include videos of gameplay comprising events of interest (such as speedruns of games, which may include impressive feats in progressing quickly, or highlight videos posted by top users) or saved game data (such as a replay file that may be opened in game) that includes events of interest; in either case, the GAN should be directed to identify the indicators of an event being of interest, as shown in the training data.
In another example, the GAN may be trained using a training set that shows only 'normal' gameplay; that is, gameplay in which nothing particularly noteworthy happens. By analysing the patterns found in normal gameplay, the GAN may be configured to identify events of interest by their lack of similarity to the training set. Such a system may also be used to identify bugs or glitches in the gameplay, which may also be considered to be events of interest when exploited by a player.
In this manner, it is possible to train a GAN to recognise identifiers of events of interest and therefore identify events of interest either from recorded videos of gameplay or from live gameplay.
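A full GAN is beyond a short example, but the 'train on normal gameplay only' idea can be sketched with a much simpler stand-in: model the feature distribution of ordinary play and flag strong deviations as candidate events of interest (or possible glitches). The features, data, and threshold below are assumptions, not a description of the disclosed system.

```python
from statistics import mean, stdev

def fit_normal_profile(normal_samples):
    """Learn per-feature mean/stdev from gameplay in which nothing notable happens."""
    keys = normal_samples[0].keys()
    return {k: (mean(s[k] for s in normal_samples),
                stdev(s[k] for s in normal_samples)) for k in keys}

def is_anomalous(sample, profile, z_threshold=4.0):
    """Flag samples that deviate strongly from the 'normal' profile on any feature."""
    for k, (mu, sigma) in profile.items():
        if sigma > 0 and abs(sample[k] - mu) / sigma > z_threshold:
            return True
    return False

normal = [{"speed": 5.0, "dps": 10.0}, {"speed": 5.5, "dps": 11.0},
          {"speed": 4.8, "dps": 9.5}, {"speed": 5.2, "dps": 10.5}]
profile = fit_normal_profile(normal)
print(is_anomalous({"speed": 5.1, "dps": 10.2}, profile))   # False - ordinary play
print(is_anomalous({"speed": 50.0, "dps": 10.0}, profile))  # True - outlier (feat or glitch)
```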
Supervised learning techniques could also be utilised for this purpose. For example, a pair of neural networks could be used in combination to identify events of interest and identify behaviour or contextual elements that indicate potential events of interest. For example, a first neural network may be trained to identify events of interest -the training may take any form described above; in some examples video content may be pre-labelled as comprising one or more events of interest. A second neural network may then be used to identify the behaviours and context that give rise to the events of interest.
Deep reinforcement learning provides a further option for developing a system that is effective at identifying events of interest in content. Such methods rely on providing a 'reward' to a system that is analysing content to identify events of interest; of course, this need not be a reward in the traditional sense; rather, it is an acknowledgement that the actions of the system have led to a positive outcome (that is, the identification of an event of interest).
Figure 3 schematically illustrates an example of a machine learning based process that may be implemented in embodiments of the present disclosure.
In a step 300, a set of training data is provided to the machine learning system. This data may comprise any suitable information for identifying events of interest, such as examples of typical and/or exceptional gameplay, parameters describing normal gameplay, or expected user reactions to events of interest.
In a step 310, the training data is analysed by the machine learning system to identify patterns or correlations that can be used to identify events of interest in other gameplay examples. For example, this may comprise the identification of elements that combine to make an event interesting (such as a high amount of damage dealt to enemies alongside a low amount of damage taken from those enemies) or a correlation between a particular parameter and a level of interest in an event (for example, a measure of how the interest in an event grows with an increasing combo multiplier).
Any suitable patterns or correlations may be identified; as noted above, machine learning algorithms or artificial neural networks may be particularly suitable for identifying such patterns or correlations as they may be capable of identifying patterns that a user is not able to easily identify. These patterns and correlations may form the basis of an event detection algorithm to be utilised by the machine learning system.
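As an assumed concrete instance of such a pattern, an interest score might weight damage dealt against damage taken and scale with the combo multiplier. The weights and threshold below are illustrative placeholders, not learned values.

```python
def interest_score(damage_dealt, damage_taken, combo_multiplier,
                   w_dealt=1.0, w_taken=2.0, w_combo=0.5):
    """Combine indicators: high damage out, low damage in, long combos.
    The weights are illustrative, not learned values."""
    return w_dealt * damage_dealt - w_taken * damage_taken + w_combo * combo_multiplier

def is_event_of_interest(score, threshold=100.0):
    return score > threshold

fight = interest_score(damage_dealt=150, damage_taken=10, combo_multiplier=12)
print(fight, is_event_of_interest(fight))   # 136.0 True
```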
In a step 320, the machine learning system attempts to identify events of interest from a second set of data. For example, content such as gameplay videos or other records of events may be provided to the system, which then applies an event detection algorithm to the content.
The event detection here may comprise a binary 'yes' or 'no' determination for the provided content, or it may be more sophisticated in that a time of the event is determined. Alternatively, or in addition, a description of the event may be generated to assist with characterising the event.
In a step 330, feedback is provided that is used to refine the machine learning process, for example by identifying false positives/negatives or by providing further training data. For example, a human operator may be used to confirm whether the event detection was correct, although any other suitable method may be used (such as marking up the content before providing it to the system). This feedback may be used to correct or otherwise modify the identified correlations or patterns that are used by the machine learning process to identify events.
For example, if a false positive is identified then it is implied that a pattern that has been identified as indicating an event of interest has not been correctly characterised. This may be because there is an additional indicator that must be considered in conjunction with the identifiers for the pattern (such as the relative strengths of the player and the enemies, in addition to the damage dealt/damage taken indicators of the earlier example). The pattern may then be revised to include these additional factors. Alternatively, or in addition, the strength of the correlation between indicators and a level of interest may need to be adjusted - for example, if a combination of indicators is expected to be of great interest but in fact is only of mild interest, the strength of that correlation should be adjusted accordingly.
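This kind of correction can be sketched as a perceptron-style nudge to indicator weights whenever the detector's decision disagrees with a human label. The feature names, weights, and learning rate are assumptions made for the example.

```python
def update_weights(weights, features, predicted, actual, lr=0.05):
    """On a false positive, weaken the indicators that fired;
    on a false negative, strengthen them (perceptron-style update)."""
    if predicted == actual:
        return weights                      # correct detection: leave weights alone
    direction = 1.0 if actual else -1.0     # false negative: +, false positive: -
    return {k: w + direction * lr * features.get(k, 0.0)
            for k, w in weights.items()}

weights = {"damage_dealt": 1.0, "damage_taken": -2.0, "combo": 0.5}
features = {"damage_dealt": 0.9, "damage_taken": 0.1, "combo": 0.4}

# A human reviewer says this flagged clip was NOT interesting: a false positive.
weights = update_weights(weights, features, predicted=True, actual=False)
print(weights)   # each weight reduced in proportion to how strongly its indicator fired
```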
While the above description relates to an arrangement in which a single event recording/content generation application is used to identify events of interest, in some embodiments it is envisaged that such an application may be personalised or specialised in one or more ways. For example, a new application may be generated for each game, game genre, or any other grouping of games; this may assist with providing more accurate or reliable event detection in some embodiments. Alternatively, or in addition, an application may be provided that is operated on a per-user or per-group basis (for example, a player may identify themselves as belonging to a particular category of gamer such as 'hardcore', 'casual', 'stealthy', or 'show-off'); this may be advantageous in that more relevant events may be detected for each user based upon their own interests or benchmarks for events to qualify as notable.
Figure 4 schematically illustrates a system for capturing and distributing one or more player interactions within a video game. The system comprises a recording unit 400, a generation unit 410, a content management unit 420 and a server 430.
The recording unit 400 is operable to detect the occurrence of events of interest and to record information, such as video and player inputs, related to detected events of interest.
The generation unit 410 is operable to generate content, and in some embodiments to categorise or otherwise analyse the content and/or event.
The content management unit 420 is operable to store and/or distribute the generated content.
The server 430 is operable to store and/or distribute the generated content. In some embodiments, the server 430 is operated by a video hosting website, for example.
Of course, the locations and functions of these functional blocks are entirely exemplary; the distribution of functions may be split between any number of local and/or remote processing and storage devices. For example, if a game is being played using a cloud-based gaming service, the entire processing may be performed at the server 430 rather than at a local processing device.
Figure 5 schematically illustrates a recording unit 400. The recording unit 400 comprises an event detection unit 500, a video recording unit 510, a player input recording unit 520, and a game state capture unit 530.
The event detection unit 500 is operable to detect the occurrence of an event of interest in the video game. In some embodiments the event detection unit 500 may be operable to implement a machine learning process for detecting the occurrence of an event of interest, as discussed above.
In such embodiments, the event detection unit 500 may be operable to identify one or more events of interest in dependence upon detected indicators in the player's gameplay, wherein indicators relating to events of interest are determined based upon the machine learning process' analysis of gameplay.
Alternatively, or in addition, the event detection unit 500 may be operable to identify one or more events of interest in dependence upon detected indicators in the player's, or one or more spectators', reactions to the gameplay, wherein indicators relating to events of interest are determined based upon the machine learning process' analysis of a particular player's or spectator's reactions to gameplay. Examples of indicators that may be detected include indicators relating to inputs, captured audio, or captured video.
The video recording unit 510 is operable to record video of the event of interest.
The player input recording unit 520 is operable to record player inputs relating to the event of interest. In some embodiments the player input recording unit 520 is operable to record inputs semantically or with reference to a default key binding for the game.
The game state capture unit 530 is operable to record game state information corresponding to at least the event of interest. This game state information may be sufficient to enable the reconstruction of a game state by a device that receives the game state information, for example.
Figure 6 schematically illustrates a generation unit 410. The generation unit 410 comprises an event analysis unit 600 and a content generation unit 610.
The event analysis unit 600 is operable to analyse the video game state and/or one or more users associated with the video game session, in some embodiments using a machine learning process. In some embodiments, the event analysis unit 600 is operable to identify the beginning of the event of interest in dependence upon game state information and/or inputs provided by the player during the event of interest that did not influence the event. In some embodiments the event analysis unit 600 is operable to apply a rating and/or categorise an event of interest.
The content generation unit 610 is operable to generate content comprising the recorded video and the recorded player inputs, such that during playback of the content a viewer is presented with both the recorded video and the recorded player inputs. In some embodiments, the content generation unit 610 is operable to generate content that also comprises the recorded game state information.
Figure 7 schematically illustrates a content management unit 420. The content management unit 420 comprises a content storage unit 700 and a content distribution unit 710. The content storage unit 700 is operable to store the generated content that is output by the content generation unit 610.
The content distribution unit 710 is operable to distribute the stored content to one or more users in response to a request from the one or more users. Alternatively, or in addition, the content distribution unit 710 may be operable to upload the content to the server 430.
The techniques described above may be implemented in hardware, software or combinations of the two. In the case that a software-controlled data processing apparatus is employed to implement one or more features of the embodiments, it will be appreciated that such software, and a storage or transmission medium such as a non-transitory machine-readable storage medium by which such software is provided, are also considered as embodiments of the disclosure.

Claims (15)

  1. A system for capturing one or more player interactions with a video game, the system comprising: an event detection unit operable to detect the occurrence of an event of interest in the video game; a video recording unit operable to record video of the event of interest; a player input recording unit operable to record player inputs relating to the event of interest; and a content generation unit operable to generate content comprising the recorded video and the recorded player inputs, such that during playback of the content a viewer is presented with both the recorded video and the recorded player inputs.
  2. A system according to claim 1, comprising: a content storage unit operable to store the generated content; and a content distribution unit operable to distribute the stored content to one or more users in response to a request from the one or more users.
  3. A system according to claim 1, comprising a game state capture unit operable to record game state information corresponding to at least the event of interest, wherein the content generation unit is operable to generate content that also comprises the recorded game state information.
  4. A system according to claim 3, wherein the game state information is sufficient to enable the reconstruction of a game state by a device that receives the game state information.
  5. A system according to claim 1, wherein the player input recording unit is operable to record inputs semantically or with reference to a default key binding for the game.
  6. A system according to claim 1, comprising an event analysis unit operable to analyse the video game state and/or one or more users associated with the video game session using a machine learning process, wherein the event detection unit is also operable to implement a machine learning process for detecting the occurrence of an event of interest.
  7. A system according to claim 6, wherein the event detection unit is operable to identify one or more events of interest in dependence upon detected indicators in the player's gameplay, wherein indicators relating to events of interest are determined based upon the machine learning process' analysis of gameplay.
  8. A system according to claim 6, wherein the event detection unit is operable to identify one or more events of interest in dependence upon detected indicators in the player's, or one or more spectators', reactions to the gameplay, wherein indicators relating to events of interest are determined based upon the machine learning process' analysis of a particular player's or spectator's reactions to gameplay.
  9. A system according to claim 8, wherein indicators relating to inputs, captured audio, or captured video may be detected.
  10. A system according to claim 6, wherein the event analysis unit is operable to identify the beginning of the event of interest in dependence upon game state information.
  11. A system according to claim 6, wherein the event analysis unit is operable to identify inputs provided by the player during the event of interest that did not influence the event.
  12. A system according to claim 6, wherein the event analysis unit is operable to apply a rating and/or categorise an event of interest.
  13. A method for capturing one or more player interactions with a video game, the method comprising: detecting the occurrence of an event of interest in the video game; recording video of the event of interest; recording player inputs relating to the event of interest; and generating content comprising the recorded video and the recorded player inputs, such that during playback of the content a viewer is presented with both the recorded video and the recorded player inputs.
  14. Computer software which, when executed by a computer, causes the computer to carry out the method of claim 13.
  15. A non-transitory machine-readable storage medium which stores computer software according to claim 14.
GB1820134.3A 2018-12-11 2018-12-11 Player interaction capturing system and method Withdrawn GB2579659A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1820134.3A GB2579659A (en) 2018-12-11 2018-12-11 Player interaction capturing system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1820134.3A GB2579659A (en) 2018-12-11 2018-12-11 Player interaction capturing system and method

Publications (2)

Publication Number Publication Date
GB201820134D0 GB201820134D0 (en) 2019-01-23
GB2579659A 2020-07-01

Family

ID=65030137

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1820134.3A Withdrawn GB2579659A (en) 2018-12-11 2018-12-11 Player interaction capturing system and method

Country Status (1)

Country Link
GB (1) GB2579659A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113457130B (en) * 2021-07-07 2024-02-02 网易(杭州)网络有限公司 Game content playback method and device, readable storage medium and electronic equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070294089A1 (en) * 2006-06-02 2007-12-20 Garbow Zachary A Gameplay Recording and Marketplace
US8485899B1 (en) * 2012-03-06 2013-07-16 Steelseries Aps Method and apparatus for presenting performances of gamers
US20150231510A1 (en) * 2014-02-17 2015-08-20 Robert Hain System and method for providing enhanced walkthroughs
US20150251093A1 (en) * 2014-03-04 2015-09-10 Microsoft Technology Licensing, Llc Recording companion
WO2017066029A1 (en) * 2015-10-16 2017-04-20 Microsoft Technology Licensing, Llc Automated generation of game event recordings

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022146709A1 (en) * 2020-12-30 2022-07-07 Sony Interactive Entertainment Inc. Recommending game streams for spectating based on recognized or predicted gaming activity

Also Published As

Publication number Publication date
GB201820134D0 (en) 2019-01-23

Similar Documents

Publication Publication Date Title
US11975261B2 (en) Online software video capture and replay system
WO2022033198A1 (en) Virtual role interaction method and apparatus, computer device, and storage medium
US11351466B2 (en) System and method for customizing a replay of one or more game events in a video game
US8636589B2 (en) Systems and methods that enable a spectator's experience for online active games
US20170106283A1 (en) Automated generation of game event recordings
KR101629378B1 (en) Apparatus and method of providing replay movie in massively multiplayer online role playing game
US11704703B2 (en) Systems and methods for dynamically modifying video game content based on non-video gaming content being concurrently experienced by a user
JP2021072965A5 (en)
CN114225402A (en) Method and device for editing virtual object video in game
GB2579659A (en) Player interaction capturing system and method
CN110465074B (en) Information prompting method and device
CN111773702A (en) Control method and device for live game
Sabet The influence of delay on cloud gaming quality of experience
JP2023552744A (en) Dynamic camera angle adjustment in-game
JP2021074561A5 (en)
US20140106837A1 (en) Crowdsourcing to identify guaranteed solvable scenarios
CN116943204A (en) Virtual object control method and device, storage medium and electronic equipment
JP2022130494A (en) computer system
GB2557976A (en) Gameplay sharing method and apparatus
US11806630B1 (en) Profile-based detection of unintended controller errors
JP7100277B2 (en) Data processing system and data processing method
US20230121618A1 (en) Reactions of failed attempts during points of gameplay
US11443512B1 (en) Systems and methods for querying video information of electronic games
KR101870256B1 (en) Apparatus and method of authoring multimedia contents using play data of online game
US20230127685A1 (en) Gameplay roulette

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)