CN114630701A - Machine learning trust scoring based on sensor data - Google Patents

Machine learning trust scoring based on sensor data

Info

Publication number
CN114630701A
Authority
CN
China
Prior art keywords
data
user account
video game
machine learning
sensor data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080073805.2A
Other languages
Chinese (zh)
Inventor
I. Bleasdale-Shepherd
S. Dalton
R. Kettler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Valve Corp
Original Assignee
Valve Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US patent application 16/663,041 (granted as US 11,052,311 B2)
Application filed by Valve Corp filed Critical Valve Corp
Publication of CN114630701A

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/235: Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between game controller and console, using a wireless connection, e.g. infrared or piconet
    • A63F 13/335: Interconnection arrangements between game servers and game devices, between game devices, or between game servers, using wide area network [WAN] connections using the Internet
    • A63F 13/35: Details of game servers
    • A63F 13/45: Controlling the progress of the video game
    • A63F 13/48: Starting a game, e.g. activating a game device or waiting for other players to join a multiplayer session
    • A63F 13/67: Generating or modifying game content before or while executing the game program, adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
    • A63F 13/71: Game security or game management aspects using secure communication between game devices and game servers, e.g. by encrypting game data or authenticating players
    • A63F 13/73: Authorising game programs or game devices, e.g. checking authenticity
    • A63F 13/75: Enforcing rules, e.g. detecting foul play or generating lists of cheating players
    • A63F 13/795: Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories, for finding other players; for building a team; for providing a buddy list

Abstract

The invention provides a trained machine learning model that is used to determine scores for user accounts registered with a video game service and to use those scores to match players together in a multiplayer video game setting. For example, sensor data received from client machines may be input to the trained machine learning model, and the model generates as output scores that are related to the probabilities that game control data received from those client machines was generated by a handheld device, rather than having been synthesized and/or modified using software. In this manner, subsets of logged-in user accounts executing the video game may be assigned to different matches based at least in part on the scores determined for those logged-in user accounts (e.g., by isolating non-human players from human players), and the video game is executed in the assigned match for each logged-in user account.

Description

Machine learning trust scoring based on sensor data
Cross Reference to Related Applications
This PCT application claims priority to pending U.S. patent application Serial No. 16/663,041, entitled "MACHINE-LEARNED TRUST SCORING BASED ON SENSOR DATA," filed October 24, 2019, which is a continuation-in-part, under 35 U.S.C. § 120, of pending U.S. patent application Serial No. 16/125,224, entitled "MACHINE-LEARNED TRUST SCORING FOR PLAYER MATCHMAKING," filed September 7, 2018. Application Serial Nos. 16/663,041 and 16/125,224 are each incorporated herein by reference in their entirety.
Background
The constant, or nearly constant, availability of wide area network communications, coupled with increases in client machine capabilities, has made online multiplayer video games increasingly popular. In these multiplayer video games, the video game service may use a matching system to group players together so that the grouped players can play the video game together in a multiplayer mode. One popular genre that players often play in a multiplayer mode is the first-person shooter. In this exemplary genre, two or more teams of players may compete in rounds, with the goal of winning enough rounds to win a match. Players on the same team may work together to achieve the objective while competing with players on the opposing team.
Although most video game players do not engage in cheating, there is typically a small number of players who cheat in order to gain an unfair advantage over other players. Often, cheating players employ third-party software that provides them with some informational or mechanical advantage over other players. For example, third-party software may be configured to extract location data about the positions of other players and may present those positions to the cheating player. This informational advantage allows the cheating player to get the jump on other players or otherwise exploit the location information revealed by the third-party software. A more blatant approach to cheating is to use third-party software that can detect the position of another player and automatically operate the cheater's avatar (e.g., by programmatically moving a mouse cursor to the target player and firing a weapon in an automated fashion). In other words, some players cheat by having third-party software play the video game on their behalf, using computerized algorithms that mechanically enhance the cheating player's performance.
Poor player behavior (such as cheating) often disrupts the gaming experience of players who want to play a game legitimately. Thus, if a player intending to engage in bad behavior is matched with players exhibiting good behavior, the multiplayer video game experience may be disrupted for the well-behaved players. Systems currently in use that attempt to identify players who are likely to engage in bad behavior in the future are somewhat inaccurate in their predictions. These systems are also largely static, in the sense that they are based on statically defined rules and inputs, which means they must be manually adjusted in order to change the output of the system. The disclosure made herein is set forth with respect to these and other considerations.
Drawings
Specific embodiments are described with reference to the accompanying drawings. In these figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference symbols in different drawings indicates similar or identical items or features.
FIG. 1 is a schematic diagram illustrating an exemplary environment including a remote computing system configured to train and use a machine learning model to determine trust scores related to likely player behavior and to match players together based on the machine-learned trust scores.
FIG. 2 shows a block diagram illustrating exemplary components of the remote computing system of FIG. 1, and a diagram illustrating how machine learned trust scores are used for player matching.
FIG. 3 is a flow diagram of an exemplary process for training a machine learning model to predict a probability of a player behaving or not behaving in a particular manner.
FIG. 4 is a flow diagram of an exemplary process for utilizing a trained machine learning model to determine trust scores for user accounts that are related to a probability of a player behaving or not behaving as specified.
FIG. 5 is a flow diagram of an exemplary process for assigning user accounts to different matches of a multiplayer video game based on machine learned trust scores that are related to possible player behavior.
FIG. 6 is a schematic diagram illustrating an exemplary environment that includes a block diagram of exemplary components of a handheld device, and illustrates how sensor data received by a remote computing system is used for machine-learned trust scoring.
FIG. 7 is a flow diagram of an exemplary process for training a machine learning model using data (including sensor data) to predict the probability that game control data received from a client machine has been generated by a handheld device.
FIG. 8 is a flow diagram of an exemplary process for determining trust scores for user accounts using a trained machine learning model, the trust scores being related to a probability that game control data has been generated by a handheld device.
FIG. 9 is a flow diagram of an exemplary process for assigning user accounts to different matches of a multiplayer video game based on machine learned trust scores that are related to the probabilities that game control data has been generated by a handheld device.
Detailed Description
Described herein are, among other things, techniques, devices, and systems for generating trust scores using machine learning methods and then matching players together in a multiplayer video game setting using machine learned trust scores. The disclosed technology may be implemented, at least in part, by a remote computing system that distributes video games (and their content) to client machines of a community of users as part of a video game service. These client machines may individually install a client application configured to execute a video game received (e.g., downloaded, streamed, etc.) from a remote computing system. The video game platform enables registered users of the community to play video games as "players". For example, a user may load a client application, log in with a registered user account, select a desired video game, and execute the video game on his/her client machine via the client application.
Data may be collected by the remote computing system each time a user accesses and uses the video game platform, and this data may be maintained by the remote computing system in association with registered user accounts. Over time, a large set of historical data associated with registered user accounts becomes available to the remote computing system. The remote computing system may then train one or more machine learning models using a portion of the historical data as training data. For example, a portion of the historical data associated with a sampled set of user accounts may be represented by a set of features and labeled to indicate players who behaved in a particular manner in the past while playing a video game. A machine learning model trained on this data is able to predict player behavior by outputting machine-learned scores (e.g., trust scores) for user accounts registered with the video game service. These machine-learned scores may be used for player matching so that players who are likely, or unlikely, to behave in a particular manner can be grouped together in a multiplayer video game setting.
In an exemplary process, a computing system may determine scores (e.g., trust scores) for a plurality of user accounts registered with a video game service. An individual score may be determined by accessing data associated with an individual user account, providing the data as input to the trained machine learning model, and generating the score associated with the individual user account as output of the trained machine learning model. The score is related to a probability that a player associated with the individual user account will or will not behave in a particular manner while playing one or more video games in a multiplayer mode. The computing system may then receive information from a plurality of client machines indicating logged-in user accounts that are logged into client applications executing the video game, and the computing system may define matches into which a plurality of players are to be grouped for playing the video game in the multiplayer mode. The plurality of matches may include at least a first match and a second match. The computing system may assign a first subset of the logged-in user accounts to the first match and a second subset of the logged-in user accounts to the second match based at least in part on the scores determined for the logged-in user accounts, and may cause a first subset of client machines associated with the first subset of logged-in user accounts to execute the video game in the first match while causing a second subset of client machines associated with the second subset of logged-in user accounts to execute the video game in the second match.
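What follows is a minimal sketch of this scoring-and-assignment flow in Python. It is illustrative only: the model interface, feature layout, account identifiers, and the single score threshold are assumptions for the sketch, not details specified by this disclosure.

```python
# Hedged sketch: score logged-in accounts with a trained trust model, then
# split them into two matches by a threshold. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, Iterable, List, Sequence

@dataclass
class Match:
    match_id: int
    user_accounts: List[str]

def assign_matches(
    logged_in_accounts: Iterable[str],
    account_features: Dict[str, Sequence[float]],
    trust_model: Callable[[Sequence[float]], float],  # returns a score in [0, 1]
    threshold: float = 0.5,
) -> List[Match]:
    """Score each logged-in account, then split the accounts into two matches."""
    scores = {acct: trust_model(account_features[acct]) for acct in logged_in_accounts}
    low = [a for a, s in scores.items() if s < threshold]    # less-trusted subset
    high = [a for a, s in scores.items() if s >= threshold]  # trusted subset
    return [Match(match_id=1, user_accounts=low), Match(match_id=2, user_accounts=high)]

# Usage with a stand-in linear "model" in place of a trained one:
features = {"acct_a": [0.1, 0.9], "acct_b": [0.8, 0.2], "acct_c": [0.9, 0.1]}
model = lambda f: max(0.0, min(1.0, 0.3 * f[0] + 0.7 * f[1]))
for match in assign_matches(features, features, model):
    print(match.match_id, match.user_accounts)
```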
Because human players who want to play video games legitimately typically use handheld devices (e.g., handheld game controllers) to play those video games, and because software configured to synthesize and/or modify game control data for cheating purposes does not use a handheld device to generate the synthesized and/or modified game control data, sensor data associated with one or more sensors of a handheld device can be used with machine-learned scoring to distinguish legitimate human players from software programmed to cheat, such that players who cheat using software can be isolated from legitimate human players. Accordingly, described herein are techniques, devices, and systems for receiving game control data and sensor data from one or more client machines interacting with a video game platform, and providing at least the sensor data as input to a trained machine learning model to generate a trust score associated with a user account. In this case, the trust score may be related to the probability that game control data received from a given client machine was generated by a handheld device associated with the given client machine. These trust scores may therefore be used for player matching to mitigate situations in which a non-human player (e.g., software designed for cheating) is matched with one or more human players who want to play the video game legitimately.
To illustrate, whenever users play a video game via the video game platform, they may use a handheld device (such as a handheld game controller) to control aspects of the video game. A given handheld device may include one or more sensors such as, but not limited to, a gyroscope, an accelerometer, and/or a touch sensor, among other possible sensors. Touch sensors (e.g., capacitive pads) may be disposed, for example, below or on a surface of the device and/or within or on a finger-operated control. A touch sensor may be configured to detect the proximity of a finger to the surface or finger-operated control and, in response, generate sensor data indicative of the proximity of the finger to the touch sensor. As another example, a gyroscope and/or accelerometer mounted in a housing of the handheld device may detect movement of the handheld device in different directions (e.g., translation, rotation, and/or tilt) and, in response, generate sensor data indicative of characteristics of the movement. In general, this sensor data may be used for various purposes, such as to control aspects of a video game (e.g., to control a player-controlled character, to rotate a virtual camera that determines the content visible on a display, etc.). When sensor data is used as game control data in this manner, the sensor data may in some cases be altered (e.g., filtered/attenuated, amplified, etc.) before being used to control aspects of the video game. However, the techniques and systems described herein relate to a handheld device configured to transmit the raw, unfiltered sensor data generated by one or more sensors of the handheld device to a remote computing system (e.g., by transmitting the sensor data via an associated client machine). The raw sensor data may be transmitted along with the game control data (e.g., data generated by button presses, joystick deflections, etc., as well as altered sensor data) that is to be processed for controlling aspects of the video game.
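As a concrete, hedged illustration of the kind of data just described, the following sketch shows a payload that a client machine might forward to the remote computing system, pairing the processed game control data with the raw, unfiltered sensor samples. The disclosure does not specify a wire format; every field name here is hypothetical.

```python
# Hypothetical client-to-server payload: processed game control data plus
# raw sensor samples from the handheld device. Field names are made up.
import json
import time

payload = {
    "user_account": "acct_123",            # logged-in account (illustrative ID)
    "timestamp_ms": int(time.time() * 1000),
    "game_control_data": {                 # inputs the game actually consumes
        "buttons": {"a": 1, "b": 0},
        "left_stick": {"x": -0.42, "y": 0.13},
    },
    "raw_sensor_data": {                   # unfiltered samples, per sensor
        "gyroscope": [{"x": 0.0012, "y": -0.0031, "z": 0.0007}],
        "accelerometer": [{"x": 0.01, "y": 9.81, "z": 0.03}],
        "touch": [{"pad": "right", "pressure": 0.27, "x": 0.55, "y": 0.61}],
    },
}
print(json.dumps(payload, indent=2))
```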
Over time, as users interact with the video game platform using handheld devices having one or more sensors, as described above and elsewhere herein, a large set of historical sensor data and game control data may be associated with registered user accounts and used to train a machine learning model. For example, the remote computing system may use a portion of the historical sensor data as training data to train one or more machine learning models. A portion of the historical sensor data associated with a sampled set of user accounts may be represented by a set of features, and each user account in the sampled set may be labeled to indicate whether the historical game control data associated with that user account was generated by a real handheld device held and operated by a human player. A machine learning model trained on this sensor data is able to determine whether game control data associated with a particular user account was generated by a physical handheld device, rather than having been synthesized and/or modified by software. In this manner, the machine learning model may detect cheating behavior based on the sensor data provided to it as input. For example, when a human operates a handheld device such as a handheld game controller, there are subtle, unintentional movements of the handheld device that nonetheless manifest in the sensor data generated by one or more physical sensors (e.g., touch sensors, gyroscopes, and/or accelerometers, etc.) of the handheld device. The machine learning model may be trained using at least this sensor data as training data to distinguish between human players and non-human players. For example, by identifying such nuances in new sensor data received from a client machine, the machine learning model may output a trust score indicating that the corresponding game control data was generated by physical sensors of a real handheld device mechanically operated by a human user. Conversely, upon failing to receive sensor data from another client machine, or failing to identify in the sensor data received from another client machine the nuances expected to manifest in such data, the machine learning model may output a trust score indicating that the corresponding game control data has been synthesized and/or modified using software, such as by a so-called "aimbot" that programmatically generates game control data to aim and fire a player-controlled character's weapon in an automated manner and without human intervention. Thus, using machine-learned scores for player matching may help avoid matching human players (who wish to play video games legitimately) with non-human players, such as software designed to cheat in a multiplayer mode of a video game.
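A hedged sketch of this training step follows, assuming scikit-learn is available; the disclosure lists tree-based models among many candidate model types without prescribing one. The micro-movement features used here (per-axis gyroscope variance and mean jitter) are illustrative stand-ins for the "subtle movement" signals described above, and the training data is synthetic.

```python
# Sketch: train a human-vs-software classifier on micro-movement features.
# Assumes scikit-learn; features and synthetic data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def featurize(gyro_samples: np.ndarray) -> np.ndarray:
    """Summarize micro-movement: per-axis variance and mean absolute jitter."""
    variance = gyro_samples.var(axis=0)
    jitter = np.abs(np.diff(gyro_samples, axis=0)).mean(axis=0)
    return np.concatenate([variance, jitter])

rng = np.random.default_rng(0)
# "Human" sessions: small but nonzero hand tremor shows up in the gyro signal.
humans = [featurize(rng.normal(0.0, 0.02, size=(500, 3))) for _ in range(200)]
# Synthesized input: no physical sensor, so the movement signal is flat.
bots = [featurize(np.zeros((500, 3))) for _ in range(200)]

X = np.vstack(humans + bots)
y = np.array([1] * len(humans) + [0] * len(bots))  # 1 = generated by a handheld device

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# Probability that a fresh human-like session came from a physical device:
print(clf.predict_proba(featurize(rng.normal(0.0, 0.02, (500, 3))).reshape(1, -1)))
```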
In an example process, a computing system may receive game control data and sensor data from a client machine and may determine a score (e.g., a trust score) for a user account that is associated with the received data and registered with the video game service. For example, the computing system may provide the received sensor data as input to the trained machine learning model and generate the score associated with the user account as output of the trained machine learning model. The score is related to a probability that the received game control data was generated by a handheld device associated with the client machine. Based on the score, the computing system may assign the user account to an assigned match of a plurality of matches, the plurality of matches including at least one of a first match designated for human players or a second match designated for non-human players. The video game may then be executed for the user account in the assigned match.
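Continuing under the same assumptions as the training sketch above (a scikit-learn-style classifier exposing predict_proba, and illustrative movement features), the inference-and-routing step of this example process might look like the following:

```python
# Sketch: turn raw gyro samples into a trust score, then route the account
# to a match designated for humans or for non-humans. Names are illustrative.
import numpy as np

HUMAN_MATCH = "match_designated_for_humans"
NON_HUMAN_MATCH = "match_designated_for_non_humans"

def route_account(trained_clf, gyro_samples: np.ndarray, threshold: float = 0.5) -> str:
    variance = gyro_samples.var(axis=0)
    jitter = np.abs(np.diff(gyro_samples, axis=0)).mean(axis=0)
    features = np.concatenate([variance, jitter]).reshape(1, -1)
    # Probability that the game control data came from a physical device.
    trust_score = trained_clf.predict_proba(features)[0, 1]
    return HUMAN_MATCH if trust_score >= threshold else NON_HUMAN_MATCH
```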
The techniques and systems described herein may provide an improved gaming experience for users who wish to play video games in a multiplayer mode in a desired manner. This is because the techniques and systems described herein are able to match players who are likely to behave badly (e.g., by cheating), including non-human players such as aimbots, together, and to isolate those players from other, trusted players who are likely to play the video game legitimately. For example, a trained machine learning model may learn to predict which players are likely to cheat and which players are not, by attributing to their user accounts corresponding trust scores that indicate each player's propensity to cheat (or not cheat). In this manner, players with low (e.g., below-threshold) trust scores may be matched together and isolated from other players whose user accounts have high (e.g., above-threshold) trust scores, leaving the trusted players to play in matches without any potentially cheating players. Although the use of threshold scores is described as one exemplary way of making match assignments, other techniques are also contemplated, such as clustering algorithms or other statistical methods that use the trust scores to preferentially match user accounts (players) with "similar" trust scores together (e.g., based on similarity metrics, such as distance metrics, variance metrics, etc.).
The techniques and systems described herein also improve upon existing matching techniques that use static rules to determine a user's trust level. A machine learning model can learn complex relationships in player data to better predict player behavior, which is not possible with static rule-based approaches. Thus, the techniques and systems described herein allow for the generation of trust scores that predict player behavior more accurately than existing trust systems, resulting in lower false positive rates and fewer instances of players being mischaracterized by inaccurate trust scores. The techniques and systems described herein are also more adaptive to changing player-behavior dynamics than existing systems, because the machine learning model can be retrained on new data, allowing its understanding of player behavior to adapt as player behavior changes over time. The techniques and systems described herein may also allow one or more devices to conserve processing resources, memory resources, network resources, and the like, in the various manners described herein.
It should be appreciated that although many of the examples described herein refer to "cheating" as a target behavior by which players may be scored and grouped for matching purposes, the techniques and systems described herein may be configured to identify any type of behavior (good or bad) using the machine-learned scoring methods and to predict the likelihood of a player engaging in that behavior for player matching purposes. Thus, these techniques and systems may extend beyond the concept of "trust" scoring in the context of bad behavior (e.g., cheating), and may more broadly attribute scores to user accounts that indicate compatibility or affinity between players.
FIG. 1 is a schematic diagram illustrating an exemplary environment 100 including a remote computing system configured to train and use a machine learning model to determine trust scores related to likely player behavior and to match players together based on the machine-learned trust scores. A community of users 102 may be associated with one or more client machines 104. The client machines 104(1)-(N) shown in FIG. 1 represent computing devices that may be used by the users 102 to execute programs, such as video games, thereon. Thus, the users 102 shown in FIG. 1 are sometimes referred to as "players" 102, and these names are used interchangeably herein to refer to human operators of the client machines 104. The client machines 104 may be implemented as any suitable type of computing device configured to execute a video game and render graphics on an associated display, including but not limited to: a personal computer (PC), a desktop computer, a laptop computer, a mobile phone (e.g., a smartphone), a tablet computer, a portable digital assistant (PDA), a wearable computer (e.g., a virtual reality (VR) headset, an augmented reality (AR) headset, smart glasses, etc.), an on-board (e.g., in-vehicle) computer, a television (smart TV), a set-top box (STB), a game console, and/or any similar computing device. Further, the client machines 104 may differ in their respective platforms (e.g., hardware and software). For example, the plurality of client machines 104 shown in FIG. 1 may represent different types of client machines 104 with varying capabilities in terms of processing power (e.g., central processing unit (CPU) model, graphics processing unit (GPU) model, etc.), graphics driver versions, and so on.
The client machines 104 may communicate with a remote computing system 106 (sometimes referred to herein simply as "computing system 106" or "remote system 106") over a computer network 108. The computer network 108 may represent and/or include, but is not limited to: the Internet, other types of data and/or voice networks, wired infrastructure (e.g., coaxial cable, fiber optic cable, etc.), wireless infrastructure (e.g., radio frequency (RF), cellular, satellite, etc.), and/or other connection technologies. In some cases, the computing system 106 may be part of a network-accessible computing platform maintained and accessible via the computer network 108. Network-accessible computing platforms such as these may be referred to by terms such as "on-demand computing," "software as a service (SaaS)," "platform computing," "network-accessible platform," "cloud services," "data centers," and so forth.
In some embodiments, the computing system 106 acts as, or has access to, a video game platform that implements a video game service to distribute (e.g., download, stream, etc.) video games 110 (and their content) to the client machines 104. In an example, the client machines 104 may each have a client application installed thereon. The installed client application may be a video game client (e.g., game software for playing the video game 110). A client machine 104 with the installed client application may be configured to download, stream, or otherwise receive programs (e.g., the video game 110 and its content) from the computing system 106 over the computer network 108. Any type of content distribution model may be used for this purpose, such as a direct purchase model in which programs (e.g., video games 110) are individually purchasable for download and execution on a client machine 104, or models in which programs are rented or leased for a period of time, streamed, or otherwise made available to the client machines 104. Accordingly, an individual client machine 104 may include one or more installed video games 110 that are executable by loading the client application.
As indicated by reference numeral 112 of FIG. 1, the client machines 104 may be used to register with, and subsequently log into, the video game service. A user 102 may create a user account for this purpose and specify/set credentials (e.g., a password, PIN, biometric ID, etc.) tied to the registered user account. As the users 102 interact with the video game platform (e.g., by accessing their user/player profiles with their registered user accounts, playing video games 110 on their respective client machines 104, etc.), the client machines 104 send data 114 to the remote computing system 106. For a given client machine 104, the data 114 sent to the remote computing system 106 may include, but is not limited to: user input data, video game data (e.g., game performance statistics uploaded to the remote system), social networking messages and related activity, identifiers (IDs) of the video games 110 played on the client machine 104, and the like. The data 114 may be streamed in real time (or substantially real time), sent to the remote system 106 at defined intervals, and/or uploaded in response to an event (e.g., exiting a video game).
FIG. 1 illustrates that the computing system 106 may store the data 114 it collects from the client machines 104 in a data store 116, which may represent a data store maintained by, and accessible to, the remote computing system 106. The data 114 may be organized within the data store 116 in any suitable manner to associate user accounts with the portions of the data 114 that relate to those user accounts. Over time, given a large community of users 102 who frequently interact with the video game platform, sometimes for long sessions, a large amount of data 114 may be collected and maintained in the data store 116.
At step 1 in FIG. 1, the computing system 106 may train a machine learning model using historical data 114 sampled from the data store 116. For example, the computing system 106 may access a portion of the historical data 114 associated with a sampled set of user accounts registered with the video game service and train the machine learning model using the sampled data 114. In some embodiments, the portion of the data 114 used as training data is represented by a set of features, and each user account in the sampled set is labeled to indicate whether the user account is associated with a player who behaved in a particular manner while playing at least one video game in the past. For example, if a player with a particular user account was banned by the video game service in the past for cheating, the "ban" may serve as one of multiple class labels for that user account. In this manner, a supervised learning approach may be employed to train the machine learning model to predict players who are likely to cheat in the future.
At step 2, the computing system 106 may score a plurality of registered user accounts using the trained machine learning model. For example, the computing system 106 may access the data 114 associated with the plurality of registered user accounts from the data store 116, provide the data 114 as input to the trained machine learning model, and generate scores associated with the plurality of user accounts as output of the trained machine learning model. These scores (sometimes referred to herein as "trust scores" or "trust factors") are related to the probabilities that the players associated with those user accounts will or will not behave in a particular manner when playing one or more video games in a multiplayer mode. In the case of "bad" behavior (such as cheating), the trust score may be related to the probability that the player will not cheat. In this case, a high trust score indicates a trusted user account, while a low trust score indicates an untrusted user account, which may serve as an indicator of a player who is likely to exhibit undesirable behavior (such as cheating). In some embodiments, the score is a variable normalized to the range [0, 1]. The trust score may have a monotonic relationship with the probability that the player will behave in a particular manner (or not, as the case may be) while playing the video game 110. The relationship between the score and the actual probability associated with the particular behavior, while monotonic, may or may not be a linear relationship. Of course, scoring may be implemented in any suitable manner to predict whether a player tied to a user account will behave in a particular manner. FIG. 1 shows a plurality of user accounts 120 that have been scored according to the techniques described herein. For example, for any suitable number of registered user accounts 120, a first score 118(1) (score = X) is attributed to a first user account 120(1), a second score 118(2) (score = Y) is attributed to a second user account 120(2), and so on.
With machine-learned scores 118 determined for a plurality of registered user accounts 120, the computing system 106 may be configured to match players together in a multiplayer video game setting based at least in part on the machine-learned scores 118. For example, FIG. 1 shows that the computing system 106 may receive information 122 from a plurality of client machines 104 that have begun executing the video game 110. This information 122 may indicate to the computing system 106 that a set of logged-in user accounts 120 is currently executing the video game 110 on the respective client machines 104 via the installed client applications. In other words, when players 102 log in with their user accounts 120 and begin executing a particular video game 110, requesting to play in a multiplayer mode, their respective client machines 104 may provide information 122 to the computing system 106 indicating as much.
At step 3, in response to the information 122 received from the client machines 104, the computing system 106 may match the players 102 together by defining a plurality of matches into which the players 102 are to be grouped for playing the video game 110 in the multiplayer mode, and by providing match assignments 124 to the client machines 104 that assign subsets of the logged-in user accounts 120 to different ones of the plurality of matches based at least in part on the machine-learned scores 118 determined for the logged-in user accounts 120. In this manner, if the trained machine learning model attributed low (e.g., below-threshold) scores 118 to a first subset of the logged-in user accounts 120 and high (e.g., above-threshold) scores 118 to a second subset of the logged-in user accounts 120, the first subset of logged-in user accounts 120 may be assigned to a first match of the plurality of matches and the second subset may be assigned to a different, second match of the plurality of matches. In this way, a subset of players 102 with similar scores 118 can be grouped together and kept isolated from other players whose scores 118 differ from those of that subset.
At step 4, the client application on each client machine 104 executing the video game 110 may execute the video game 110 in the match assigned to the logged-in user account 120 in question. For example, a first subset of client machines 104 may be associated with the first subset of logged-in user accounts 120 assigned to the first match, and that first subset of client machines 104 may execute the video game 110 in the first match, while a second subset of client machines 104 (associated with the second subset of logged-in user accounts 120 assigned to the second match) executes the video game 110 in the second match. Where players are grouped into matches based at least in part on the machine-learned scores 118, the gaming experience of at least some groups of players 102 may be improved, because the system can group players who are expected to behave badly (e.g., by cheating) into the same match and, in doing so, isolate the misbehaving players from other players who wish to play the video game 110 legitimately.
FIG. 2 shows a block diagram illustrating exemplary components of the remote computing system 106 of FIG. 1, and a diagram of how machine learned trust scores are used for player matching. In the illustrated implementation, computing system 106 includes one or more processors 202 (e.g., Central Processing Units (CPUs)), memory 204 (or non-transitory computer-readable medium 204), and communication interface 206, among other components. Memory 204 (or non-transitory computer-readable media 204) may include volatile and non-volatile memory, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Such memory includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, or any other medium which can be used to store the desired information and which can be accessed by a computing device. The computer-readable medium 204 may be implemented as a computer-readable storage medium ("CRSM"), which may be any available physical medium that the processor 202 may access to execute instructions stored on the memory 204. In a basic implementation, the CRSM may include random access memory ("RAM") and flash memory. In other implementations, the CRSM may include, but is not limited to, read only memory ("ROM"), electrically erasable programmable read only memory ("EEPROM"), or any other tangible medium that can be used to store the desired information and that can be accessed by the processor 202. The video-game service 208 may represent instructions stored in the memory 204 that, when executed by the processor 202, cause the computing system 106 to perform the techniques and operations described herein.
For example, the video game service 208 may include a training component 210, a scoring component 212, and a matching component 214, among other possible components. The training component 210 may be configured to train the machine learning model using a portion of the data 114 in the data store 116 as training data, the portion of the data being associated with a sample set of the user accounts 120 to obtain a trained machine learning model 216. The trained machine learning model 216 may be used by the scoring component 212 to determine scores 118 (e.g., trust scores 118) for a plurality of registered user accounts 120. The matching component 214 provides the match assignment 124 based at least in part on the machine-learned score 118 such that players are grouped into different ones of a plurality of matches 218 (e.g., a first match 218(1), a second match 218(2), etc.) for playing the video game 110 in the multiplayer mode. FIG. 2 illustrates how a first set of players 102 (e.g., ten players 102) associated with a first subset of logged-in user accounts 120 are assigned to a first match 218(1), and how a second set of players 102 (e.g., another ten players 102) associated with a second subset of logged-in user accounts 120 are assigned to a second match 218 (2). Of course, any number of matches 218 may be defined, which may depend on the number of logged-in user accounts 120 requesting execution of the video game 110 in the multiplayer mode, capacity limits of network traffic that may be handled by the computing system 106, and/or other factors. In some embodiments, other factors (e.g., skill level, geographic area, etc.) that may result in the player being further subdivided and/or being subdivided into a fewer or greater number of matches 218 are considered in the matching process.
As mentioned, the scores 118 determined by the scoring component 212 (e.g., output by the trained machine learning model 216) are machine-learned scores 118. Machine learning generally involves processing a set of examples (referred to as "training data") in order to train a machine learning model. Once trained, the machine learning model 216 is a learned mechanism that can receive new data as input and estimate or predict a result as output. For example, the trained machine learning model 216 may include a classifier tasked with classifying an unknown input (e.g., an unknown image) into one of a plurality of class labels (e.g., labeling an image as a cat or a dog). In some cases, the trained machine learning model 216 is configured to implement a multi-label classification task (e.g., labeling images as "cat," "dog," "duck," "penguin," etc.). Additionally or alternatively, the trained machine learning model 216 may be trained to infer a probability, or a set of probabilities, for a classification task based on unknown data received as input. In the context of the present disclosure, the unknown input may be the data 114 associated with an individual user account 120 registered with the video game service, and the trained machine learning model 216 is tasked with outputting a score 118 (e.g., a trust score 118) that indicates, or otherwise relates to, a probability that the individual user account 120 belongs to one of a plurality of classes. For example, the score 118 may relate to a probability that a player 102 associated with the individual user account 120 will behave (or not behave, as the case may be) in a particular manner while playing the video game 110 in the multiplayer mode. In some embodiments, the score 118 is a variable normalized to the range [0, 1]. The trust score 118 may have a monotonic relationship with the probability that the player 102 will behave in a particular manner (or not, as the case may be) while playing the video game 110. The relationship between the score 118 and the actual probability associated with the particular behavior, while monotonic, may or may not be linear. In some embodiments, the trained machine learning model 216 may output a set of probabilities (e.g., two probabilities), or scores associated therewith, where one probability (or score) relates to the probability that the player 102 will behave in the particular manner and the other relates to the probability that the player 102 will not. The score 118 output by the trained machine learning model 216 may relate to either of these probabilities in order to guide the matching process. In an illustrative example, the particular behavior may be cheating. In this example, the score 118 output by the trained machine learning model 216 relates to the likelihood that the player 102 associated with the individual user account 120 will or will not cheat while playing the video game 110 in the multiplayer mode. Thus, in some embodiments, the score 118 may indicate the level of trustworthiness of the player 102 associated with the individual user account 120, which is why the score 118 described herein is sometimes referred to as a "trust score" 118.
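As a small illustration of the normalized, monotonic-but-not-necessarily-linear relationship described above, the following sketch maps a model's output probability to a trust score in [0, 1] using a square-root transform; the transform itself is an arbitrary assumption for illustration.

```python
# Monotonic, non-linear mapping from probability to trust score in [0, 1].
# The square root is an illustrative choice, not one taken from the disclosure.
def trust_score(p_good_behavior: float) -> float:
    assert 0.0 <= p_good_behavior <= 1.0
    return p_good_behavior ** 0.5  # higher probability always yields a higher score

print(trust_score(0.25))  # 0.5
print(trust_score(0.81))  # 0.9
```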
The trained machine learning model 216 may represent a single model or an ensemble of base-level machine learning models, and may be implemented as any type of machine learning model 216. For example, suitable machine learning models 216 for use with the techniques and systems described herein include, but are not limited to: neural networks, tree-based models, support vector machines (SVMs), kernel methods, random forests, splines (e.g., multivariate adaptive regression splines), hidden Markov models (HMMs), Kalman filters (or extended Kalman filters), Bayesian networks (or Bayesian belief networks), expectation maximization, genetic algorithms, linear regression algorithms, nonlinear regression algorithms, logistic-regression-based classification models, or an ensemble thereof. An "ensemble" may comprise a collection of machine learning models 216 whose outputs (predictions) are combined, such as by using weighted averaging or voting. The individual machine learning models of an ensemble may differ in their expertise, and the ensemble may operate as a committee that is collectively "smarter" than any individual machine learning model of the ensemble.
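A minimal sketch of the weighted-averaging combination mentioned above follows; the stand-in models and weights are illustrative assumptions.

```python
# Combine an ensemble's per-model scores by weighted averaging.
from typing import Callable, Sequence

def ensemble_score(
    models: Sequence[Callable[[Sequence[float]], float]],
    weights: Sequence[float],
    features: Sequence[float],
) -> float:
    """Weighted average of per-model scores; weights are normalized to sum to 1."""
    total = sum(weights)
    return sum((w / total) * m(features) for m, w in zip(models, weights))

models = [lambda f: 0.9, lambda f: 0.7, lambda f: 0.8]  # stand-in base models
print(ensemble_score(models, weights=[2.0, 1.0, 1.0], features=[0.0]))
# (2*0.9 + 1*0.7 + 1*0.8) / 4 = 0.825
```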
The training data used to train the machine learning model 216 may include various types of the data 114. In general, training data for machine learning includes two components: features and labels. However, in some embodiments, the training data used to train the machine learning model 216 may be unlabeled. Accordingly, the machine learning model 216 may be trained using any suitable learning technique, such as supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, and so on. The features included in the training data may be represented by a set of features, such as an n-dimensional feature vector of quantifiable information about attributes of the training data. The following is a list of exemplary features that may be included in the training data used to train the machine learning model 216 described herein. It should be understood, however, that this list is non-exhaustive; the features used in training may include additional features not described herein and, in some cases, some, but not all, of the features listed herein. Exemplary features included in the training data may include, without limitation: the amount of time a player has spent playing video games 110 in general; the amount of time a player has spent playing a particular video game 110; the number of times per day a player logs in and plays video games 110; a player's match history data (e.g., total scores (per match, per round, etc.), headshot percentage, kill count, death count, assist count, player rank, etc.); the number and/or frequency of cheating reports against a player; the number and/or frequency of times a player has been convicted of cheating; a confidence value (score) output by a machine learning model that detected a cheating player during a video game; the number of user accounts 120 associated with a single player (which may be inferred from a common address, phone number, payment instrument, etc., tied to multiple user accounts 120); how long a user account 120 has been registered with the video game service; the number of previously banned user accounts 120 tied to a player; the number and/or frequency of a player's monetary transactions on the video game platform; the amount per transaction; the number of digital items of monetary value associated with a player's user account 120; the number of times a user account 120 has changed hands (e.g., been transferred between different owners/players); the frequency with which a user account 120 has been transferred between players; the geographic locations from which a player has logged into the video game service; the number of different payment instruments, phone numbers, mailing addresses, etc., that have been associated with a user account 120, and/or how frequently those items have been changed; and/or any other suitable features that may be relevant to computing a trust score 118 indicative of a player's propensity to engage in a particular behavior. As part of the training process, the training component 210 may set weights for machine learning. These weights may apply to the set of features included in the training data, as derived from the historical data 114 in the data store 116. In some embodiments, the weights set during the training process may apply to parameters internal to the machine learning model (e.g., weights for neurons in hidden layers of a neural network). These internal parameters may or may not map one-to-one to individual input features in the set of features. The weights can indicate the influence of any given feature or parameter on the score 118 output by the trained machine learning model 216.
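To make the feature-vector idea concrete, here is a hedged sketch that assembles a small n-dimensional vector from a handful of the attributes listed above. The selection of attributes, the field names, and the normalization constants are assumptions for illustration; the list above names many more candidate features.

```python
# Assemble an n-dimensional feature vector from a few account attributes.
# Attribute names and normalization constants are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class AccountHistory:
    hours_played: float
    cheat_reports: int
    cheat_convictions: int
    account_age_days: int
    linked_banned_accounts: int

def feature_vector(h: AccountHistory) -> List[float]:
    return [
        min(h.hours_played / 1000.0, 1.0),         # capped, normalized playtime
        min(h.cheat_reports / 50.0, 1.0),          # volume of cheating reports
        float(h.cheat_convictions > 0),            # any prior cheating conviction
        min(h.account_age_days / 365.0, 1.0),      # account tenure
        min(h.linked_banned_accounts / 5.0, 1.0),  # ties to banned accounts
    ]

print(feature_vector(AccountHistory(1200, 3, 0, 90, 1)))
```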
With regard specifically to cheating (which is an illustrative example of a type of behavior that may be used as a basis for matching players), there may be behavioral patterns associated with the user accounts 120 of players who are planning to cheat in the video game 110 that differ from the patterns associated with the user accounts 120 of non-cheating players. Accordingly, the machine learning model 216 may learn to identify those behavioral patterns from the training data so that potentially cheating players can be identified with high confidence and scored appropriately. It should be understood that outliers may exist in the ecosystem, and the system may be configured to protect those outliers based on known information about them. For example, professional players may exhibit behavior that differs from that of regular players, and these professional players may be at risk of being scored incorrectly. As another example, employees of the service provider of the video game service may log in with user accounts for investigation or quality control purposes and may behave in ways that differ from ordinary player behavior. These types of players/users 102 may be treated as outliers and, outside the context of machine learning, proactively assigned scores 118 that give them a high degree of trust. In this manner, known professional players, service provider employees, and the like may be assigned an overriding score 118 that is not modified by the scoring component 212, to avoid matching those players/users 102 with misbehaving players.
The training data may also be labeled for use with supervised learning approaches. Again using cheating as an exemplary type of behavior on which players may be matched together, the label in this example may indicate whether a user account 120 has been banned from playing video games 110 via the video game service. The data 114 in the data store 116 may include some data 114 associated with players who have been banned for cheating, as well as data 114 associated with players who have not. One example of this type of ban is the Valve Anti-Cheat (VAC) ban used by Valve Corporation of Bellevue, Washington. For example, the computing system 106 and/or authorized users of the computing system 106 may be able to detect when unauthorized third-party software has been used to cheat. In these cases, the cheating user account 120 may be banned by flagging it as banned in the data store 116, after a rigorous verification process to ensure that no mistakes are made. Thus, the banned or non-banned status of a user account 120 can provide positive and negative training examples.
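A short sketch of how such ban records might be turned into supervised labels follows; the record format and account identifiers are hypothetical.

```python
# Derive supervised labels from ban status: a prior cheating ban yields a
# negative example, its absence a positive one. Record format is made up.
from typing import Dict, List, Tuple

def label_accounts(ban_records: Dict[str, bool]) -> List[Tuple[str, int]]:
    """Return (user_account, label) pairs: 1 = never banned, 0 = banned for cheating."""
    return [(acct, 0 if banned else 1) for acct, banned in ban_records.items()]

print(label_accounts({"acct_a": False, "acct_b": True}))
# [('acct_a', 1), ('acct_b', 0)]
```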
It should be appreciated that past player behavior (such as past cheating) may be indicated in other ways. For example, users 102, or even a separate machine learning model, may be provided with a mechanism for detecting and reporting suspected cheating players. A reported player may be placed before a panel of their peers, who review replays of the reported player's games and render a verdict (e.g., cheating or not cheating). If enough other players deem the reported player's behavior to amount to cheating, a high confidence threshold may be reached, and the reported player is adjudged to have cheated and receives a ban on their user account 120, which may likewise constitute a label for the associated training data.
FIG. 2 shows examples of behaviors other than cheating that may be used as a basis for player matching. For example, the trained machine learning model 216 may be configured to output a trust score 118 related to the probability that a player will or will not engage in game-abandonment behavior (e.g., abandoning (or quitting) a video game in the middle of a match). Like cheating, abandoning a game is a behavior that tends to disrupt the gaming experience of the non-abandoning players. As another example, the trained machine learning model 216 may be configured to output a trust score 118 related to the probability that a player will or will not engage in griefing behavior. A "griefer" is a player in a multiplayer video game who deliberately irritates and harasses other players within the video game 110, which can disrupt the gaming experience of non-griefing players. As another example, the trained machine learning model 216 may be configured to output a trust score 118 related to the probability that a player will or will not use vulgar language. In general, multiplayer video games allow players to participate in chat sessions or other social-network communications that are visible to other players in the video game 110, and when players use vulgar language (e.g., profanity, offensive language, etc.), it can disrupt the gaming experience of players who do not. As yet another example, the trained machine learning model 216 may be configured to output a trust score 118 related to the probability that a player is or is not "highly skilled." In this manner, highly skilled players and novice players can be identified within a group of players using the scores. This helps address the situation in which an experienced player creates a new user account in order to pass as a novice-level player and play against amateur players. Thus, the players matched together in the first match 218(1) may be those who are likely to behave in a particular "bad" manner (as determined from the machine-learned scores 118), while the players matched together in other matches (such as the second match 218(2)) may be those who are less likely to behave in that manner.
This may be the case where the distribution of trust scores 118 output for a plurality of players (user accounts 120) is predominantly bimodal. For example, one peak of the statistical distribution of scores 118 may be associated with players who are likely to engage in a particular bad behavior, while the other peak of the statistical distribution may be associated with players who are unlikely to engage in that behavior. In other words, the population of badly behaved players and the population of well-behaved players may be separated by a pronounced gap in the statistical distribution. In this sense, if a new user account registered with the video game service is assigned a trust score between the two peaks of the statistical distribution, the score for that user account will tend to be driven quickly toward one peak or the other as the player interacts with the video game platform. Because of this tendency, the matching parameters used by the matching component 214 may be tuned to treat all players whose trust scores 118 are not within the lower peak of the statistical distribution similarly, with the matching component 214 primarily concerned with separating/isolating the misbehaving players whose trust scores 118 fall within the lower peak. Although the use of threshold scores is described herein as one exemplary way of making match assignments, other techniques are also contemplated, such as clustering algorithms or other statistical methods that use the trust scores to preferentially match user accounts (players) with "similar" trust scores together (e.g., based on a similarity metric, such as a distance metric, a variance metric, etc.).
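One plausible way to act on such a bimodal distribution, sketched below under the assumption that a simple two-cluster split adequately locates the two peaks (the disclosure does not prescribe this method), is to place the matchmaking threshold midway between the two modes:

```python
# Hypothetical sketch: locate a matchmaking threshold between the two
# modes of a bimodal trust-score distribution. Not a prescribed method.
import numpy as np
from sklearn.cluster import KMeans

def threshold_between_modes(scores):
    km = KMeans(n_clusters=2, n_init=10)
    km.fit(np.asarray(scores, dtype=float).reshape(-1, 1))
    lo, hi = sorted(c[0] for c in km.cluster_centers_)
    return (lo + hi) / 2.0  # midpoint between the two peaks
```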
Further, the matching component 214 may refrain from matchmaking altogether for new user accounts 120 that have recently registered with the video game service. Instead, rules may be implemented whereby a player must accumulate a certain level of experience/performance in individual (non-matchmade) game modes before being scored and matched with other players in the multiplayer mode of the video game 110. It should be appreciated that the path from first launching the video game 110 to becoming eligible for player matching may differ considerably from player to player. Some players may take a long time to reach the matchmade multiplayer mode, while other users may complete the qualification process easily and quickly. With this qualification process in place, by the time a user account 120 is to be scored for matchmaking purposes, it will have played enough of the video game 110 to provide sufficient data 114 for an accurate score 118.
Also, as mentioned, while many of the examples described herein target "bad" behavior (such as cheating, game abandonment, griefing, vulgar language, etc.) by which players may be scored and grouped for matching purposes, the techniques and systems described herein may be configured to identify any type of behavior using the machine-learned scoring approach and to predict the likelihood of a player engaging in that behavior for player-matching purposes.
The processes described herein are illustrated as a collection of blocks in a logical flow graph, which represent a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the blocks represent computer-executable instructions that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and so forth that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described blocks can be combined in any order and/or in parallel to implement the processes.
FIG. 3 is a flow diagram of an exemplary process 300 for training a machine learning model to predict a probability of a player behaving or not behaving in a particular manner. For discussion purposes, the process 300 is described with reference to the preceding figures.
At 302, the computing system 106 may provide users 102 with access to the video game service. For example, the computing system 106 may allow users to access and browse a catalog of video games 110, modify user profiles, conduct transactions, participate in social media activities, and perform other similar and/or related actions. The computing system 106 may distribute the video games 110 (and their content) to the client machines 104 as part of the video game service. In an illustrative example, a user 102 having access to the video game service may load an installed client application, log in with a registered user account, select a desired video game 110, and execute the video game 110 on his/her client machine 104 via the client application.
At 304, the computing system 106 may collect and store data 114 associated with the user accounts 120 registered with the video game service. This data 114 may be collected at block 304 whenever a user 102 accesses the video game service with their registered user account 120 and uses the video game platform, such as to play a video game 110 thereon. Over time, a large collection of data 114 associated with registered user accounts thus becomes available to the computing system 106.
At 306, the computing system 106 may, via the training component 210, access (historical) data 114 associated with a sampled set of user accounts 120 registered with the video game service. At least some of the (historical) data 114 may have been generated as a result of players playing one or more video games 110 on the video game platform provided by the video game service. For example, the (historical) data 114 accessed at 306 may represent match history data for players who have played one or more video games 110 (e.g., by participating in matches in a multiplayer mode). In some embodiments, the (historical) data 114 may indicate whether user accounts 120 of the sampled set have been transferred between players, or other types of user activity with respect to the user accounts 120.
At 308, the computing system 106 may, via the training component 210, tag each user account 120 of the sampled set of user accounts 120 with a label indicating whether the user account is associated with a player who has engaged in a particular behavior while playing at least one video game 110 in the past. Examples of labels are described herein, such as whether a user account 120 has been banned in the past for cheating, which may be used as a label in the context of scoring players on their propensity to cheat (or not, as the case may be). However, these labels may correspond to other types of behavior, such as labels indicating whether the user account is associated with a player who has abandoned games in the past, has griefed during games in the past, has used vulgar language during games in the past, and so on.
At 310, the computing system 106 may train the machine learning model via the training component 210 using the (historical) data 114 as training data to obtain the trained machine learning model 216. As shown in sub-box 312, training of the machine learning model at block 310 may include setting weights for machine learning. These weights may be applied to a set of features derived from historical data 114. Exemplary features are described herein, such as those described above with reference to fig. 2. In some embodiments, the weights set at block 312 may be applied to parameters internal to the machine learning model (e.g., weights of neurons in hidden layers of the neural network). These internal parameters of the machine learning model may or may not be mapped one-to-one with each input feature in the set of features. As indicated by the arrow from block 310 to block 304, the machine learning model 216 may be retrained using the updated (historical) data 114 to obtain a new trained machine learning model 216 that is adapted to recent player behavior. This allows the machine learning model 216 to adapt to changing player behavior over time.
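As a concrete illustration of blocks 306-312, the following minimal sketch trains a model on the labeled examples built earlier. Logistic regression stands in for any model whose learned weights apply to the feature set; this disclosure does not mandate a particular model type:

```python
# Hypothetical training sketch for blocks 306-312. Logistic regression
# is an illustrative stand-in; the learned coefficients play the role
# of the weights set at block 312.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_trust_model(examples):
    X = np.array([feats for feats, _ in examples], dtype=float)
    y = np.array([label for _, label in examples])
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)  # block 310: fit the model, setting its weights
    return model

# Retraining on updated historical data (the arrow from block 310 back
# to block 304) amounts to periodically re-running train_trust_model
# on freshly labeled examples.
```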
FIG. 4 is a flow diagram of an exemplary process 400 for utilizing the trained machine learning model 216 to determine trust scores 118 for user accounts 120, the trust scores 118 relating to (or indicative of) the probability of a player behaving, or not behaving, in a particular manner. For discussion purposes, the process 400 is described with reference to the preceding figures. Further, as shown by the off-page reference "A" in FIGS. 3 and 4, the process 400 may continue from block 310 of the process 300.
At 402, the computing system 106 may access data 114 associated with a plurality of user accounts 120 registered with the video game service via the scoring component 212. The data 114 may include any information (e.g., quantifiable information) in the set of features that have been used to train the machine learning model 216, as described herein. This data 114 constitutes the unknown input to be input to the trained machine learning model 216.
At 404, the computing system 106 may provide the data 114 accessed at block 402 as input to the trained machine learning model 216 via the scoring component 212.
At 406, the computing system 106 may, via the scoring component 212, generate the trust scores 118 associated with the plurality of user accounts 120 as output of the trained machine learning model 216. On an individual basis, a score 118 is associated with a single user account 120 of the plurality of user accounts 120, and the score 118 relates to the probability that the player 102 associated with that user account 120 will engage, or not engage, in a particular behavior while playing the one or more video games 110 in multiplayer mode. Here, the particular behavior may be any suitable behavior exhibited in the data 114, such that the machine learning model can be trained to predict players having a tendency to engage in that behavior. Examples include, without limitation, cheating behavior, game-abandonment behavior, griefing behavior, or vulgar-language behavior. In some embodiments, the score 118 is a variable normalized to the range [0, 1]. The trust score 118 may have a monotonic relationship with the probability of the player behaving in the particular manner (or not, as the case may be) while playing the video game 110. The relationship between the score 118 and the actual probability associated with the particular behavior, while monotonic, may or may not be a linear relationship.
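Continuing the earlier sketch, scoring at blocks 402-406 could look like the following, where the [0, 1] class probability produced by the trained model is read as the trust score (an illustrative choice; the mapping from model output to score need only be monotonic):

```python
# Hypothetical scoring sketch for blocks 402-406. The probability of
# the "not banned" class (label 0) is read as the trust score in [0, 1].
def score_accounts(model, account_features):
    """Return {account_id: trust_score} for the given feature vectors."""
    scores = {}
    for account_id, feats in account_features.items():
        # predict_proba yields [P(label 0), P(label 1)], already in [0, 1]
        scores[account_id] = float(model.predict_proba([feats])[0][0])
    return scores
```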
Accordingly, the process 400 represents a machine-learned scoring approach in which a score 118 (e.g., a trust score 118) is determined for a user account 120 that indicates the probability of the player using that user account 120 engaging in a particular behavior in the future. Using a machine learning model in this scoring process allows complex relationships in player behavior to be identified, yielding better predictions of player behavior than prior approaches, and does so with a more adaptive and versatile system that can adjust to changing dynamics of player behavior without human intervention.
FIG. 5 is a flow diagram of an exemplary process 500 for assigning user accounts 120 to different matches of a multiplayer video game based on machine-learned trust scores 118 that relate to likely player behavior. For discussion purposes, the process 500 is described with reference to the preceding figures. Further, as shown by the off-page reference "B" in FIGS. 4 and 5, the process 500 may continue from block 406 of the process 400.
At 502, the computing system 106 may receive, from a plurality of client machines 104, information 122 indicating the logged-in user accounts 120 that are logged into a client application executing the video game 110 on each client machine 104. For example, multiple players 102 may have begun executing a particular video game 110 (e.g., a first-person shooter game), wishing to play the video game 110 in multiplayer mode. The information 122 received at block 502 may indicate at least the logged-in user accounts 120 of those players.
At 504, the computing system 106 may define, via the matching component 214, a match into which the players 102 are to be grouped for playing the video game 110 in the multiplayer mode. Any number of matches may be defined, depending on various factors, including demand, capacity, and other factors involved in the matching process. In an example, the plurality of defined matches at block 504 may include at least a first match 218(1) and a second match 218 (2).
At 506, the computing system 106 may assign, via the matching component 214, a first subset of the logged-in user accounts 120 to a first match 218(1) and a second subset of the logged-in user accounts 120 to a second match 218(2) based at least in part on the scores 118 determined for the logged-in user accounts 120. As mentioned, any number of matches may be defined such that further subdivisions and additional matches may be assigned to the user account at block 506.
As shown in sub-box 508, assigning the user accounts 120 to different matches may be based on a threshold score. For example, the matching component 214 may determine that the scores 118 associated with the first subset of the logged-in user accounts 120 are less than a threshold score, and may assign the first subset of the logged-in user accounts 120 to the first match based on those scores being less than the threshold score. Likewise, the matching component 214 may determine that the scores 118 associated with the second subset of the logged-in user accounts 120 are equal to or greater than the threshold score, and may assign the second subset of the logged-in user accounts 120 to the second match based on those scores being equal to or greater than the threshold score. However, this is merely one exemplary way of making the match assignments 124, and other techniques are also contemplated. For example, clustering algorithms or other statistical methods may be used in addition to, or instead of, threshold scores. In an example, the trust scores 118 may be used to preferentially match together user accounts (players) having "similar" trust scores. Given the natural tendency of the distribution of trust scores 118 across the plurality of user accounts 120 to be predominantly bimodal, grouping trust scores 118 together based on a similarity metric (e.g., a distance metric, a variance metric, etc.) may produce results similar to using a threshold score. However, where the matchmaking pool is small (e.g., players who want to play a less popular game mode in a small geographic region), matching user accounts together in groups using a similarity metric may be useful, as it provides a more granular approach that allows the importance of the trust score relative to other matching factors (such as skill level) to be adjusted gradually. In some embodiments, multiple thresholds may be used to "bucketize" the user accounts 120 into a number of different matches.
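A minimal sketch of the threshold assignment in sub-box 508, together with the multi-threshold "bucketize" variant, might look as follows (the threshold values are illustrative only):

```python
# Hypothetical sketch of sub-box 508. Threshold values are examples.
def assign_by_threshold(scores, threshold=0.5):
    first, second = [], []  # match 218(1), match 218(2)
    for acct, s in scores.items():
        (first if s < threshold else second).append(acct)
    return first, second

def bucketize(scores, cutpoints=(0.25, 0.5, 0.75)):
    """Split accounts into len(cutpoints)+1 buckets by trust score."""
    buckets = [[] for _ in range(len(cutpoints) + 1)]
    for acct, s in scores.items():
        buckets[sum(s >= c for c in cutpoints)].append(acct)
    return buckets
```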
As shown in sub-box 510, other factors besides the trust score 118 may be considered in the match assignment at block 506. For example, the match assignment 124 determined at block 506 may be further based on a skill level of a player associated with the logged-in user account, an amount of time the logged-in user account has been waiting to be placed in one of the plurality of matches, a geographic region associated with the logged-in user account, and/or other factors.
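One way the factors in sub-box 510 could be combined with the trust score is a weighted pairing cost, as in the sketch below; the linear form and the weights are assumptions for illustration, not a disclosed formula:

```python
# Hypothetical pairing cost combining sub-box 510 factors. Weights and
# the linear form are illustrative assumptions.
def match_cost(a, b, w_trust=1.0, w_skill=0.5, w_region=0.25, w_wait=0.01):
    """Lower cost = better pairing of accounts a and b."""
    cost = w_trust * abs(a["trust"] - b["trust"])
    cost += w_skill * abs(a["skill"] - b["skill"])
    cost += w_region * (0.0 if a["region"] == b["region"] else 1.0)
    # Long-waiting players get a discount so nobody queues forever.
    cost -= w_wait * min(a["wait_s"], b["wait_s"])
    return cost
```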
At 512, the computing system 106 may cause (e.g., by providing control instructions) a client application executing the video game 110 on each client machine 104 sending the information 122 to initiate one of the defined matches (e.g., one of the first match 218(1) or the second match 218(2)) based at least in part on the logged-in user account 120 associated with the client machine 104. For example, with a first match 218(1) and a second match 218(2) defined, the computing system 106 may cause a first subset of the plurality of client machines 104 associated with a first subset of the logged-in user accounts 120 to execute the video game 110 in the first match 218(1), and may cause a second subset of the plurality of client machines 104 associated with a second subset of the logged-in user accounts 120 to execute the video game 110 in the second match 218 (2).
Because the machine learned trust score 118 is used as a factor in the matching process, users who wish to play a video game in a multiplayer mode in an intended manner may be provided with an improved gaming experience. This is because the techniques and systems described herein can be used to match players that may perform poorly (e.g., cheat) together and isolate those players from other trusted players that may legally play the video game.
FIG. 6 is a schematic diagram illustrating an exemplary environment 600 that includes a block diagram showing exemplary components of a handheld device 602, and FIG. 6 illustrates how sensor data 604 received by the remote computing system 106 is used for machine-learned trust scoring. The user 102 may use the handheld device 602 of FIG. 6 to interact with the video game platform described herein. For example, the user 102 may pair the handheld device 602 with a client machine 104 on which a video game client (e.g., game software for playing a video game 110) is executing. Once the handheld device 602 is paired with, and able to communicate with, the client machine 104 (e.g., by sending/receiving data to/from the client machine 104), the handheld device 602 may be used to interact with the video game platform described herein, such as to register the handheld device 602 with a user account 120, access video games 110 available from the remote computing system 106, and play the video games 110 using the handheld device 602.
The handheld device 602 may represent a handheld game controller, such as a game controller designed to be held by one or both hands of the user 102 and configured with one or more finger-operated controls operable by the fingers and/or thumbs of the user 102. In some embodiments, the handheld device 602 may also be operated by moving (e.g., translating, rotating, tilting, etc.) the game controller in three-dimensional (3D) space. It should be understood that the handheld device 602 may represent any other suitable type of handheld device, such as a mobile phone (e.g., a smartphone), a tablet computer, a portable digital assistant (PDA), a wearable computer (e.g., a smartwatch, a head-mounted display (HMD)), a portable game console, and/or any similar handheld electronic device. The term "handheld," as used herein to describe the handheld device 602, means a device configured to be held by the user 102, regardless of whether the device is held in a hand of the user 102 or by another part of the body of the user 102 (e.g., a wearable device worn on a wrist, arm, leg, waist, head, etc., is considered a "handheld" device as that term is used herein).
As shown in FIG. 6, the handheld device 602 includes one or more input/output (I/O) devices 608, such as finger-operated controls (e.g., joysticks, touch pads, triggers, depressible buttons, etc.), and possibly other types of input devices, such as a touch screen, a microphone for receiving audio input (such as user voice input), and a camera or other type of sensor (e.g., a sensor 610) that may serve as an input device for receiving gesture input (such as motion of the handheld device 602 and/or of the hand of the user 102). In some embodiments, additional input devices may be provided in the form of a keyboard, keypad, mouse, touch screen, joystick, control buttons, and the like. The input devices may also include control mechanisms, such as basic volume control buttons for increasing/decreasing volume, and power and reset buttons. The input devices may facilitate the entry of biometric data of the user 102, such as obtaining a fingerprint or palm print, scanning the user's eyes and/or face, capturing the user's voice, etc., for biometric identification/authentication of the user 102.
The output devices, meanwhile, may include a display, light-emitting elements (e.g., LEDs), a vibrator for generating haptic sensations, speakers (e.g., headphones), and the like. There may also be a simple light-emitting element (e.g., an LED) to indicate a status, such as, for example, when the device is powered on. Although a few examples have been provided, the handheld device 602 may additionally or alternatively include any other type of output device. In some cases, the output of one or more output devices may be based on input received by one or more of the input devices. For example, actuation of a control may cause a vibrator located near (e.g., beneath) the control, or in any other location, to output a haptic response.
Further, handheld device 602 may include one or more communication interfaces 612 to facilitate wireless connectivity to a network and/or to one or more remote systems (e.g., client machine 104 executing an application program, a gaming machine, a wireless access point, etc.). The communication interface 612 may implement one or more of various wireless technologies such as Wi-Fi, bluetooth, Radio Frequency (RF), etc. It should be understood that handheld device 602 may also include a physical port to facilitate a wired connection to a network, connected peripheral devices, or plug-in network devices that communicate with other wireless networks.
In the particular implementation shown, the handheld device 602 also includes one or more processors 614 and computer-readable media 616. In some implementations, the processors 614 may include a central processing unit (CPU), a graphics processing unit (GPU), both a CPU and a GPU, a microprocessor, a digital signal processor, or other processing units or components known in the art. Alternatively or in addition, the functions described herein may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that may be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and the like. Additionally, each of the processors 614 may have its own local memory, which may also store program modules, program data, and/or one or more operating systems.
Computer-readable media 616 may include volatile and nonvolatile memory, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules or other data. Such memory includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, RAID storage systems, or any other medium which can be used to store the desired information and which can be accessed by a computing device. The computer-readable medium 616 may be implemented as a computer-readable storage medium ("CRSM"), which may be any available physical medium that is accessible to the processor 614 for execution of instructions stored on the computer-readable medium 616. In a basic implementation, the CRSM may include random access memory ("RAM") and flash memory. In other implementations, the CRSM may include, but is not limited to, read only memory ("ROM"), electrically erasable programmable read only memory ("EEPROM"), or any other tangible medium that can be used to store the desired information and that can be accessed by the processor 614.
A number of modules, such as instructions, data stores, etc., may be stored within the computer-readable medium 616 and configured to execute on the processor 614. Some exemplary functional modules are shown stored in the computer-readable medium 616 and executed on the processor 614, but the same functions may alternatively be implemented in hardware, firmware, or a system on a chip (SOC).
Operating system module 618 can be configured to manage hardware within and coupled to handheld device 602 for facilitating other modules. Further, computer-readable media 616 may store a network communication module 620 that enables handheld device 602 to communicate with one or more other devices, such as client machine 104 (e.g., a PC) executing an application (e.g., a gaming application), a gaming machine, remote computing system 106, etc., via communication interface 612. The computer-readable media 616 may also include a game session database 622 to store data associated with games (or other applications) executed on the handheld device 602 or on the client machine 104 connected to the handheld device 602. The computer-readable media 616 may also include a device record database 624 that stores data associated with devices to which the handheld device 602 is coupled, such as the client machine 104 (e.g., PC, game console, etc.), the remote computing system 106, etc. The computer-readable media 616 may also store game control instructions 626 that configure the handheld device 602 to function as a game controller, and general control instructions 628 that configure the handheld device 602 to function as a controller for other non-game devices.
The handheld device 602 is also shown to include one or more sensors 610. For example, the sensors 610 may include motion sensors, such as an inertial measurement unit (IMU) that may include one or more gyroscopes and/or accelerometers and/or magnetometers and/or compasses, or any other suitable motion sensor. In some embodiments, the sensors 610 may be implemented as standalone gyroscopes, accelerometers, magnetometers, compasses, or the like, and need not be implemented as an IMU. In some embodiments, one or more of these sensors may be used to provide six-axis motion sensing. For example, the IMU may be configured to sense, and generate sensor data 604 indicative of, translational and/or rotational movement in 3D space. The sensor data 604 generated by such sensors 610 may relate to the extent, rate, and/or acceleration of translational movement (X, Y, and Z movement) in 3D space, as well as the extent, rate, and/or acceleration of rotational movement (roll, pitch, and yaw) in 3D space. The measurements may be expressed in a 3D coordinate system, such as a Cartesian (X, Y, and Z) or spherical coordinate system. The sensor data 604 may include measurements in terms of displacement (e.g., displacement since a previous recording), velocity, and/or acceleration of translational and angular movement (represented by the variables d, v, a and θ, ω, α, respectively). The sensor data 604 may also include the times at which the sensor data 604 is generated and/or transmitted (e.g., at any suitable time interval), such that a history of the sensor data 604 can be collected and stored, temporarily or permanently, on the handheld device 602.
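A plausible record layout for these motion readings, using the variables named above plus a timestamp so a history can be kept on the device, might look like the following; the field layout is an assumption for illustration, not a device specification:

```python
# Hypothetical record for the six-axis motion readings described above.
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

@dataclass
class ImuSample:
    t: float      # capture time in seconds (for the sensor-data history)
    d: Vec3       # X, Y, Z displacement since the previous recording
    v: Vec3       # translational velocity
    a: Vec3       # translational acceleration
    theta: Vec3   # roll, pitch, yaw displacement
    omega: Vec3   # angular velocity
    alpha: Vec3   # angular acceleration
```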
As another example, the sensors 610 may include a touch sensor configured to sense the proximity of an object (such as a finger, palm, etc.) to the touch sensor. The touch sensor may be based on any suitable touch-sensing technology, such as a capacitive touch sensor, a resistive touch sensor, an infrared touch sensor, a touch sensor that utilizes acoustic waves to detect the proximity of a finger of the user 102, or any other type of touch sensor. For example, a touch sensor may be provided below or on a surface of the device and/or within or on a finger-operated control in order to detect the proximity of a finger to the surface or to the finger-operated control. In response to detecting proximity (e.g., a finger contacting or hovering above a surface), the touch sensor may generate sensor data 604 indicative of the proximity of the finger. Touch sensors may, for example, be embedded in the handle of the handheld device 602 to detect the grip of the user 102, and/or embedded within various controls, including a trackpad, a joystick, buttons, and the like. In implementations utilizing capacitance-based sensing, the touch sensor may include electrodes (e.g., transmitter and receiver electrodes of a capacitive sensor), and voltages may be applied to those electrodes so that the electrodes measure capacitance changes, which can be converted into sensor data 604 in the form of capacitance values indicative of the proximity of an object to the sensor 610. For example, the change in capacitance at an electrode of a capacitance-based touch sensor may be influenced by an object (such as a finger) in proximity to the electrode. The raw capacitance can be digitized into a proximity value to generate the sensor data 604.
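The digitization step at the end of this paragraph could be as simple as the following sketch, in which the baseline and span calibration constants are made-up placeholders:

```python
# Hypothetical sketch: map a raw capacitance count to a [0, 1] proximity
# value. Calibration constants are placeholders, not device values.
def capacitance_to_proximity(raw_count, baseline=512.0, span=240.0):
    proximity = (raw_count - baseline) / span
    return max(0.0, min(1.0, proximity))  # clamp to [0, 1]
```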
As another example, the sensors 610 may include a pressure sensor, such as a force-sensing resistor (FSR). For example, the FSR may include a conductive material (e.g., a semiconductor material, such as an ink composition) spaced apart from a resistive membrane, along with an actuator configured to convey a force onto the resistive membrane such that the resistive membrane contacts the conductive material under a compressive force applied to the actuator. The FSR may exhibit a resistance that varies in response to the variable force, generating sensor data 604 corresponding to resistance values. The FSR may be a "shunt-mode" FSR or a "thru-mode" FSR. In the case of a shunt-mode FSR, the conductive material spaced apart from the resistive membrane may be a plurality of interdigitated metal fingers. When a force is applied to the actuator of the FSR, the resistive membrane contacts some of the interdigitated metal fingers, which shunts the metal fingers, thereby changing the resistance at the output terminals of the FSR; this resistance can be digitized into an FSR value to generate the sensor data 604. In some embodiments, the pressure sensor may additionally or alternatively include other types of pressure-sensing mechanisms, such as piezoelectric sensors, strain gauges, and the like.
Other examples of sensors 610 configured to generate corresponding sensor data 604 may include, but are not limited to: temperature sensors, humidity sensors, cameras, etc. Regardless of the type of sensor 610 of the handheld device 602, the sensor 610 is configured to generate sensor data 604 relating to the physical state of the handheld device 602 in some manner. For example, a touch sensor can generate sensor data 604 that indicates whether an object (e.g., a finger) is contacting or in proximity to a portion of the device 602 that includes the touch sensor (e.g., a directional key (D-pad), a joystick, a trigger button, a bumper button, a selector button, etc.). As another example, a pressure sensor may generate sensor data 604 indicating whether an object is pressing on a portion of the device 602 and whether the portion of the device 602 is being pressed lightly or heavily by the object. As yet another example, a motion sensor (e.g., a gyroscope and/or an accelerometer) may generate sensor data 604 indicating whether the orientation and/or spatial position of the handheld device 602 within the 3D space has changed and/or whether the handheld device 602 is moving quickly or slowly. Accordingly, the sensors 610 of the handheld device 602 are configured to generate sensor data 604 indicative of these and other types of physical states of the device 602.
FIG. 6 shows that the handheld device 602 may also include cryptographic hardware 630. For example, the service provider of the video game service and/or a third-party manufacturer may produce a "trusted" handheld device 602 that has tamper-resistant hardware components embedded within it at the time of manufacture. The embedded hardware components may include the cryptographic hardware 630 depicted in FIG. 6, which in turn may store a private key 632. For example, the private key 632 may reside in non-volatile memory of the handheld device 602.
In some embodiments, the cryptographic hardware 630 is tamper-resistant. For example, the cryptographic hardware 630 may be included or enclosed within a tamper-resistant enclosure, and the cryptographic hardware 630 may be configured to disable itself if the tamper-resistant enclosure is breached. For example, the cryptographic hardware 630 may be configured to erase or delete the private key 632, and/or to send an alert to the remote computing system 106, if the tamper-resistant enclosure is breached or otherwise tampered with. This makes it difficult for a user to gain illicit access to the private key 632, or at least to do so undetected, which deters sharing the private key 632 with others and using the private key 632 to "smurf" through another user account 120. In online gaming, "smurfing" is the act of a player of a certain skill level playing on another person's account, or on a secondary account, so as to appear to be of a lower level or skill.
In some embodiments, the user 102 may initially associate or register the handheld device 602 with his/her user account 120 (and possibly with additional user accounts 120, such as family members' accounts). This may be accomplished by performing an initial registration procedure for the handheld device 602. The initial registration procedure may include a first-activation process that uses, for example, a two-factor authentication (2FA) step for added security. For example, the user 102 may pair the handheld device 602 with a client machine 104 in the vicinity of the handheld device 602 using, for example, a short-range wireless protocol (such as Bluetooth). The user 102 may log into his/her user account 120 using a client application (e.g., a video game client) executing on the client machine 104 or on a mobile device (e.g., a mobile phone) of the user 102. In response to the user 102 logging in, the remote computing system 106 may receive an indication of the paired but unregistered/inactive device 602 and may send an activation code to the user 102, such as by sending a message (such as a text message) to a mobile number, or an email to an email address, specified in the user account 120. The user 102 may access the message to obtain the activation code and may enter the activation code via the video game client executing on the user's phone or on the client machine 104 (e.g., by entering the activation code using the handheld device 602). If the remote system 106 receives the correct activation code within a specified time limit, the handheld device 602 may be associated with the user account 120 of the user 102, such as by indicating in the user account 120 that the handheld device 602 is a registered device of the user account 120, the time and date of registration, the location where the handheld device 602 was activated, and so on. The remote computing system 106 may also associate the user account 120 with the private key 632 of the handheld device 602 and/or with biometric data (e.g., a fingerprint, eye print, face print, voice print, etc.) of the user 102 obtained through an input device of the handheld device 602, as a means of authenticating the user 102 in the future, such as when the user 102 later logs into his/her user account 120 and uses the handheld device 602 registered with that user account 120.
In some embodiments, an additional layer of security may be implemented to make switching the handheld device 602 from one user account 120 to another user account 120 more secure, which may help prevent illicit use of the trusted handheld device 602 in the event that the device 602 is stolen or lost. For example, the remote computing system 106 may enforce a waiting period (e.g., a waiting period of several days), measured from when the device 602 was associated/registered with a first user account 120, before the device 602 is allowed to switch to a second user account 120 (e.g., by disassociating/deregistering the device 602 from the first user account 120 and associating/registering the device 602 with the second user account 120). The remote computing system 106 may additionally or alternatively limit the number of times the user 102 is allowed to switch the handheld device 602 between user accounts 120 within a given time period (e.g., the user 102 may be allowed to switch the handheld device 602 from one user account 120 to another no more than twice within an hour). These additional layers of security may help deter unauthorized use of the trusted handheld device 602.
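These safeguards could be enforced server-side with logic along the following lines; the specific limits shown mirror the examples in this paragraph and are otherwise assumptions:

```python
# Hypothetical server-side check for device-to-account switching.
# Limits mirror the examples above (multi-day hold, two per hour).
import time

WAIT_AFTER_REGISTER_S = 3 * 24 * 3600  # illustrative multi-day hold
MAX_SWITCHES = 2
SWITCH_WINDOW_S = 3600                 # one hour

def may_switch_account(device, now=None):
    now = time.time() if now is None else now
    if now - device["registered_at"] < WAIT_AFTER_REGISTER_S:
        return False  # still inside the post-registration waiting period
    recent = [t for t in device["switch_times"] if now - t < SWITCH_WINDOW_S]
    return len(recent) < MAX_SWITCHES
```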
Once the handheld device 602 is registered with a user account 120, the user 102 may use the device 602 to interact with the video game platform, such as by connecting the handheld device 602 to their client machine 104 (e.g., pairing the device 602 with the client machine 104 over a wireless communication link, which may occur automatically after the first pairing process using the device record database 624) and using the device 602 as a handheld game controller to play video games 110. In this scenario, the client machine 104 may be used to display imagery of the video game 110 while the user 102 plays the video game 110 by operating the handheld device 602. At any suitable time during interaction with the video game platform, the handheld device 602 may obtain biometric data (e.g., a fingerprint, eye print, face print, voice print, etc.) of the user 102 via an input device of the handheld device 602 and may send the obtained biometric data to the remote computing system 106 via the communication interface 612 (e.g., by routing the biometric data through the client machine 104 to the remote computing system 106); the biometric data may be used by the remote computing system 106 to authenticate the user account 120, the user 102, and/or the handheld device 602. Further, at any suitable time during interaction with the video game platform, the private key 632 may be used by logic of the handheld device 602 to encrypt device output (such as the game control data 606 and/or the sensor data 604 transmitted by the handheld device 602 during use of the device 602). For example, the private key 632 may be used to encrypt the game control data 606 and/or the sensor data 604, converting the data into ciphertext form using any suitable encryption algorithm, and the device 602 may send the encrypted data to the remote computing system 106 via the communication interface 612 (e.g., by routing the encrypted data through the client machine 104 to the remote computing system 106). The remote computing system 106 may maintain, in the data store 116 (or database 116), a repository of private keys for trusted handheld devices, which may include a copy of the private key 632 used by the handheld device 602 to encrypt data sent to the remote computing system 106. The remote computing system 106 may have recorded the private key in the data store 116 at the time of manufacture of the trusted handheld device, or thereafter. Using the private key accessible to it, the remote computing system 106 may decrypt the encrypted data received from the handheld device 602 to verify that the received data was encrypted with the expected private key 632. Thus, knowing that the private key 632 is used by the handheld device 602 to encrypt device output, the remote system 106 may use its copy of the private key 632 to authenticate the user account 120, the user 102, and/or the handheld device 602, among other possible uses of the private key 632. In some embodiments, the remote computing system 106, in response to receiving data associated with a logged-in user account 120, may ask the user 102 to enter a code (e.g., a personal identification number (PIN), password, passphrase, etc.) to authenticate the user 102 upon each login to the user account 120. Such a request may be sent via a 2FA device of the user 102 (e.g., via a mobile phone, via the client machine 104, etc.), and the response to the request may be received by the remote system 106 from the same 2FA device of the user 102.
In some embodiments, no additional user action is required in order to use an already-registered handheld device 602 with the corresponding logged-in user account 120. That is, using the handheld device 602 with the user account 120 to which the device 602 is registered merely requires that the device 602 encrypt data using its private key 632 and send the encrypted data to the remote computing system 106, and that the remote computing system 106 decrypt the data to verify that the received data was encrypted with the expected private key 632. For example, the remote system 106 may send a block of data to the handheld device 602, which may encrypt the block of data and return the encrypted data to the remote system 106; the remote system 106 may then verify that the encryption was performed correctly, all without transmitting the key 632 itself.
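That exchange is essentially a challenge-response protocol. The sketch below uses an HMAC as a stand-in for the unspecified "suitable encryption algorithm"; the key never crosses the wire, and the server verifies the response against its stored copy:

```python
# Hypothetical challenge-response sketch. HMAC-SHA256 stands in for
# whatever "suitable encryption algorithm" the device actually uses.
import hmac
import hashlib
import os

def server_make_challenge() -> bytes:
    return os.urandom(32)  # random block of data sent to the device

def device_respond(private_key: bytes, challenge: bytes) -> bytes:
    # Performed on the device by the cryptographic hardware 630.
    return hmac.new(private_key, challenge, hashlib.sha256).digest()

def server_verify(stored_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(stored_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # key never transmitted
```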
For example, the data store 116 (or database 116) accessible to the remote computing system 106 may maintain biometric data and/or a plurality of private keys that have been associated with a plurality of trusted handheld devices, such as the handheld device 602. The remote computing system 106 may compare received biometric data, and/or the private key (such as the private key 632) used to encrypt data received from the client machine 104 and/or the handheld device, against the database 116 to determine whether the received biometric data and/or private key matches any biometric data and/or private key maintained there. These operations may be performed for authentication purposes and/or to compute a trust score, as described herein. In other words, receipt of biometric data matching the biometric data maintained in the database 116, and/or of encrypted data encrypted with a private key 632 matching a private key maintained in the database, may be used to authenticate the user 102, the device 602, and/or the user account 120 associated therewith; receipt of matching biometric data and/or of encrypted data encrypted with the matching private key 632 may also be used in generating a trust score for the associated user account 120, which may be used for player matching. In this sense, a user account 120 associated with a handheld device 602 that provides matching biometric data, and/or encrypted data encrypted using the matching private key 632, to the remote computing system 106 may be seamlessly authenticated (e.g., without additional user action, in some cases) and is more likely to be matched with the user accounts 120 of other human players than an "untrusted" user account 120 that may be associated with a non-human player (e.g., software designed for cheating).
Thus, as the user 102 interacts with the video game platform using the handheld device 602 (e.g., to play one or more video games 110 on their respective client machine 104), the sensor data 604 and the game control data 606 may be sent to the client machine 104 and forwarded from the client machine 104 to the remote computing system 106. The game control data 606 is used to control aspects of the video game 110 and is therefore processed by the video game 110 to determine how to render the next frame of the video game 110. The sensor data 604 represents raw, unfiltered sensor data, such as raw data generated by a gyroscope, raw data generated by an accelerometer, and/or raw data generated by a touch sensor (e.g., a capacitive trackpad). In a streaming implementation, the video game 110 may execute on the remote computing system 106, and the remote computing system 106 may capture video game 110 data and send it to the client machine 104 of the user 102 over the network 108. This may involve capturing the state of the video game 110, encoding the video and audio data into bits, and transmitting the encoded bits from the remote computing system 106 over the computer network 108 to the client machine 104, where an application executing on the client machine 104 (e.g., a video game client) may decode the bits to output the imagery of a given frame via a display and the audio of the given frame via speakers of the client machine 104 (or via a headset connected to the client machine). The user 102 may react to the video he/she is viewing and the audio he/she is hearing by operating the handheld device 602. For example, the user 102 may actuate a control of the handheld device (e.g., press a directional pad (D-pad), deflect a joystick, swipe a finger across a trackpad, etc.) and/or may tilt or move the handheld device 602 in 3D space to control an aspect of the video game 110. In response to operation of the handheld device 602, the handheld device 602 may generate game control data 606 that is transmitted to the remote computing system 106 either directly, via a wireless access point and over the computer network 108, or via the client machine 104 associated with the handheld device 602. In either case, the game control data 606 may be transmitted to the remote computing system 106 in real time to control aspects of the video game 110. For example, the game control data 606 may be processed by the video game 110 to control the movement of a virtual object (e.g., a player-controlled character) of the video game 110 by moving the virtual object within the virtual world represented by the imagery on the display of the client machine 104.
The game control data 606 may include at least some sensor data 604 generated by the sensors 610 of the device 602 (such as filtered/attenuated sensor data 604 and/or amplified sensor data 604), whereas the sensor data 604 sent by the device 602 to the remote computing system 106 (as depicted in FIG. 6) represents raw, unfiltered sensor data 604 that is not used to control aspects of the video game 110 but is instead used to generate a machine-learned trust score for the associated user account 120.
At step 1 in FIG. 6, the computing system 106 may train the machine learning model 216 using at least historical sensor data 604 sampled from the data store 116. For example, the computing system 106 may access a portion of the historical sensor data 604 associated with a sampled set of user accounts 120 registered with the video game service and may train the machine learning model 216 using the sampled sensor data 604. As described herein, the historical sensor data 604 may have been received from the client machines 104, along with historical game control data 606, as users 102 used their handheld devices 602 to interact with the video game platform. In some embodiments, the portion of the sensor data 604 used as training data is represented by a set of features, and each user account 120 of the sampled set is tagged with a label indicating whether the historical game control data 606 associated with the corresponding user account 120 was generated by a handheld device 602 held and operated by a human player, rather than having been synthesized and/or modified using software. In other words, user accounts 120 determined to be associated with game control data 606 generated organically by a human user 102 operating the handheld device 602 may be labeled as such, while other user accounts 120 determined to be associated with synthesized and/or modified game control data (i.e., data synthesized and/or modified by software to control aspects of a video game) may be labeled accordingly. In this manner, a supervised learning approach may be employed to train the machine learning model 216 to predict which players are likely to be legitimate human players operating handheld devices 602, rather than non-human software designed for cheating.
For example, when a human is operating the handheld device 602, there may be subtle, unintentional movements of the handheld device 602 that can be detected by one or more physical sensors 610 of the handheld device 602 and that therefore manifest in the sensor data 604. Based on the notion that such sensor data 604 will exhibit these nuances, the machine learning model 216 may be trained, using the sensor data 604 as training data, to distinguish human players from non-human players by identifying these nuances (e.g., movement, finger proximity, etc.) in sensor data 604 that was generated by the physical sensors 610 of a real, tangible handheld device 602 because the handheld device 602 was mechanically operated (e.g., by a human user 102). Conversely, if sensor data 604 is received but does not exhibit the nuances expected to result from human operation of a handheld device 602, it may be inferred that the corresponding game control data 606 was not generated by operation of a real, tangible handheld device 602, but was instead synthesized and/or modified using software. For example, a cheating player may employ third-party software, or develop their own software, configured to automatically manipulate the actions of the cheating player's avatar (e.g., by programmatically causing a mouse cursor to move to a target and fire a weapon automatically). This so-called "aimbot" software effectively plays the video game 110 in place of the human player, so that the human player is not actually playing the video game 110; the software synthesizes and/or modifies game control data 606 that is processed by the video game 110 in the same manner as genuine game control data 606, giving the cheating player an advantage over legitimate human players who are not cheating. If such synthesized and/or modified game control data 606 went undetected, computerized algorithms could become ubiquitous and widely used to boost the performance of cheating players in multiplayer games, disrupting the experience of human players who wish to play the video game 110 legitimately and who might be matched with these computerized algorithms. In these scenarios, attempts to mimic the sensor data 604 generated by human operation of a handheld device 602 may be detected by the machine learning model 216, which is trained to distinguish human players from non-human players (such as aimbots), and these and other kinds of cheating behavior may be remedied, such as by isolating non-human players in their own matches of the multiplayer video game.
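One plausible featureization of the "subtle, unintentional movement" idea is to measure micro-jitter statistics of the raw IMU stream, since synthesized input is often perfectly still or perfectly regular. The statistics below are an illustrative assumption, not the disclosure's prescribed features:

```python
# Hypothetical jitter features over raw accelerometer samples. Genuine
# hand-held operation tends to show small, irregular jitter; synthetic
# input often shows exact zeros or unnaturally regular motion.
import numpy as np

def jitter_features(accel_samples):
    """accel_samples: array-like of shape (N, 3) raw accelerometer rows."""
    x = np.asarray(accel_samples, dtype=float)
    mag = np.linalg.norm(x, axis=1)
    return {
        "jitter_var": float(np.var(np.diff(mag))),  # ~0 when perfectly still
        "zero_frac": float(np.mean(mag < 1e-6)),    # exact zeros are suspect
        "rel_spread": float(np.std(mag) / (np.mean(mag) + 1e-9)),
    }
```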
It should be appreciated that additional data may be used in addition to, or instead of, the historical sensor data 604 described above to train the machine learning model 216. For example, historical game control data 606 may be collected over time from the client machine 104, the game control data 606 having been generated based on user input provided to the handheld device by a human user in order to control aspects of the video game 110. These user inputs may include actuations of controls on the handheld device 602 (such as presses of buttons, deflections of a joystick, swipes of a finger on a trackpad, etc.) and/or tilting or movement of the handheld device 602 in 3D space. Accordingly, historical game control data 606 (e.g., data indicative of user input provided to the handheld device 602, such as to control aspects of the video game 110) may be used as training data along with or in place of the historical sensor data 604. Using both historical sensor data 604 and historical game control data 606 as training data allows correlations to be formed between game control data 606 and concurrently generated sensor data 604. As yet another example, wireless communication data may be collected from the client machine 104 over time, and historical wireless communication data may be used as training data along with or in lieu of the historical sensor data 604. The wireless communication data may include, but is not limited to: a wireless (e.g., WiFi, bluetooth, etc.) signal strength value, data indicative of instances of dropped data packets sent from the handheld device 602 to the associated client machine 104, data indicative of a number of dropped data packets sent from the handheld device 602 to the associated client machine 104, etc. The wireless communication data may be obtained by the handheld device 602, by the client machine 104, and/or any other suitable device local to the environment of the user 102.
As described above, the training data may include two components: features and labels. However, in some embodiments, the training data used to train the machine learning model 216 may be unlabeled. Accordingly, the machine learning model 216 may be trained using any suitable learning technique, such as supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, and so on. The features included in the training data may be represented by a set of features, such as an n-dimensional feature vector of quantifiable information about attributes of the training data (a minimal sketch of assembling such a vector follows the list below). Exemplary features included in the training data may include, without limitation: sensor data 604 values (e.g., capacitance values, resistance values, displacement values, velocity values, acceleration values, temperature values, humidity values, etc.); game control data 606 values (e.g., values generated by potentiometers and other switches of the finger-operated controls); biometric data values; values relating to the private key 632 of a handheld device 602 that encrypts data sent to the remote computing system 106 using the private key 632, including the amount of time the private key 632 (and thus the trusted handheld device 602) has been associated/registered with the user account 120, the number of times the trusted handheld device 602 (and/or its private key 632) has been used with the user account 120, and the number of times the trusted handheld device 602 (and/or its private key 632) has been switched between different user accounts 120; wireless communication data values (e.g., wireless signal strength values, values indicating instances of dropped data packets sent from the handheld device 602 to the associated client machine 104, values indicating the number or frequency of dropped data packets sent from the handheld device 602 to the associated client machine 104, etc.); the amount of time a player typically spends playing video games 110; the amount of time a player spends playing a particular video game 110; the number of times a player logs in and plays video games 110 in a day; the player's match history data (e.g., total score (per match, per round, etc.), headshot percentage, kill, death, and assist counts, player rankings, etc.); the number and/or frequency of times the player has been reported for cheating; the number and/or frequency of times the player has been convicted of cheating; the number and/or frequency of times the player has been exonerated of cheating; a confidence value (score) output by a machine learning model that detects cheating players during gameplay; the number of different user accounts 120 associated with a single player (which may be inferred from a common address, telephone number, payment instrument, etc., tied to the multiple user accounts 120); how long the user account 120 has been registered with the video game service; the number of previously banned user accounts 120 tied to the player; the number and/or frequency of the player's monetary transactions on the video game platform; the amount of each transaction; the number of digital items of monetary value associated with the player's user account 120; the number of times the user account 120 has changed hands (e.g., been transferred between different owners/players); the frequency with which the user account 120 has been transferred between players; the geographic locations from which the player has logged into the video game service; the number of different payment instruments, telephone numbers, mailing addresses, etc., that have been associated with the user account 120, and/or how often these things change; and/or any other suitable feature that may be relevant to computing a trust score 118 indicating the probability that the game control data 606 associated with a user account was generated by a handheld device held and operated by a human player, rather than having been synthesized and/or modified using software.
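As promised above, here is a minimal sketch of assembling a few of those signal families into the n-dimensional feature vector; the selection, field names, and ordering are illustrative assumptions only:

```python
# Hypothetical assembly of an n-dimensional feature vector from a few
# of the feature families listed above. Field names are assumptions.
import numpy as np

def feature_vector(acct):
    return np.array([
        acct["key_registered_days"],   # time private key tied to account
        acct["device_use_count"],      # uses of the trusted device
        acct["wifi_rssi_mean"],        # wireless signal strength
        acct["dropped_packet_rate"],   # dropped packets, device -> client
        acct["account_switch_count"],  # key moved between accounts
        acct["hours_played"],
        acct["cheating_report_count"],
        acct["linked_banned_accounts"],
    ], dtype=float)
```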
As part of the training at step 1 in fig. 6, training component 210 may set weights for machine learning. These weights may be applied to a set of features included in the training data, as derived from historical data (including historical sensor data 604) in data store 116. In some embodiments, the weights set during the training process may be applied to parameters inside the machine learning model (e.g., weights of neurons in hidden layers of the neural network). These internal parameters of the machine learning model may or may not be mapped one-to-one with each input feature in the set of features. These weights may indicate the impact of any given feature or parameter on the score 118 output by the trained machine learning model 216.
At step 2 of fig. 6, the computing system 106 may score one or more registered user accounts 120 using the trained machine learning model 216. For example, the computing system 106 may receive, from one or more client machines 104, new sensor data 604 and new game control data 606 associated with one or more logged-in, registered user accounts 120. In an illustrative example, the computing system 106 may receive information 122 indicating that a logged-in user account 120 is currently playing, or has launched, the video game 110. The video game 110 may be executed on the client machine 104 via an installed client application, or the video game 110 may be streamed to the client machine 104 as it is executed on the remote computing system 106. In either case, when a player 102 logs in with their user account 120 and launches a particular video game 110, requesting to play in multiplayer mode, their respective client machine 104 may provide information 122 to the computing system 106 indicating as much, and as the video game 110 launches and the player begins playing, the sensor data 604 and game control data 606 may be streamed from the client machine 104 to the remote computing system 106 in real time. The computing system 106 may provide the new sensor data 604 it receives as input to the trained machine learning model 216 and may generate, as output of the trained machine learning model 216, scores 118 associated with the respective logged-in user accounts 120 that are associated with the new game control data 606. In this example, a score 118 relates to a probability that the new game control data 606 was generated by a handheld device 602 associated with the client machine 104. In this case, a high trust score indicates a trusted user account 120 (e.g., a user account likely associated with a human player operating a handheld device 602), while a low trust score indicates an untrusted user account 120 (e.g., a user account likely associated with a non-human player, such as software, generating synthesized and/or modified game control data 606 that appears similar to actual game control data 606 generated by an actual handheld device 602). In some embodiments, the score 118 is a variable normalized to the range [0, 1]. The trust score 118 may have a monotonic relationship with the probability that the new game control data 606 was generated by a handheld device 602. The relationship between the score 118 and the actual probability, while monotonic, may or may not be linear. Of course, the scoring may be implemented in any suitable manner to predict whether the new game control data 606 was generated by a handheld device 602 or was synthesized and/or modified using software.
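Continuing the hypothetical sketch above, scoring then reduces to taking the model's positive-class probability, which is already normalized to [0, 1] and is monotonic in the model's internal decision value:

```python
# Illustrative sketch only: scoring a logged-in account (step 2 of fig. 6),
# continuing the training sketch above.
x_new = rng.random((1, 8))                      # stand-in for a new feature vector
trust_score = model.predict_proba(x_new)[0, 1]  # probability of "device-generated"
print(f"trust score 118 = {trust_score:.3f}")   # high = trusted, low = untrusted
```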
In some embodiments, the new game control data 606 itself may be provided as additional input to the trained machine learning model 216, such that the score 118 is generated based at least in part on the game control data 606 provided as that additional input. For example, the user 102 may operate the handheld device 602 during the game by actuating controls of the handheld device 602 (e.g., pressing a button such as a directional pad (D-pad), deflecting a joystick, sliding a finger over a touchpad, etc.), and/or may control aspects of the video game 110 by tilting or moving the handheld device 602 in 3D space. In response to such user manipulation, the handheld device 602 may generate game control data 606 that is transmitted to the remote computing system 106. This new game control data 606 (which indicates the user input provided to the handheld device 602) may be provided as input to the trained machine learning model 216, in addition to the sensor data 604 provided as input to the trained machine learning model 216. An exemplary reason for providing the game control data 606 as input is that a human providing user input to the handheld device 602 may be "noisier," yet less proficient, than a computer program simulating the same type of user input. Accordingly, the trained machine learning model 216 may be trained to distinguish between human players and non-human players (e.g., software designed for cheating) based at least in part on the game control data 606 received from the client machine 104 and provided as input to the trained machine learning model 216.
In some embodiments, new wireless communication data may be provided as additional input to the trained machine learning model 216, such that the score 118 is generated based at least in part on the wireless communication data provided as that additional input. For example, a wireless (e.g., WiFi, Bluetooth, etc.) signal strength may be determined for the handheld device 602 relative to another device with which it communicates, such as the client machine 104, and the signal strength may vary (e.g., increase or decrease) with interference to the wireless signal in the environment and/or with the distance between the handheld device 602 and the other device with which it communicates. The presence of wireless signal strength values, and/or variation in wireless signal strength over time (e.g., observing changing signal strength values), may indicate that a human player is using the handheld device 602, as these characteristics are typically encountered in real gaming setups. In contrast, an absence of signal strength values, or a constant signal strength value that does not change over time, may be indicative of a non-human player (e.g., software designed for cheating). As another example, wireless communication data indicative of instances of dropped data packets sent from the handheld device 602 to the associated client machine 104, and/or of the number of dropped data packets sent from the handheld device 602 to the associated client machine 104, may be determined and used as input to the trained machine learning model 216. For example, the presence of one or more dropped packets, and/or a number of dropped data packets that varies over time, may indicate that a human player is using the handheld device 602. In contrast, if there are no dropped packets in the wireless communication data, and/or if the number of dropped data packets remains constant (e.g., does not change) over time, this may be indicative of a non-human player (e.g., software designed for cheating).
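As a non-authoritative illustration of how such wireless signals might be reduced to model inputs (the specification does not prescribe a feature encoding), the sketch below computes presence and variability statistics whose suspicious values are exactly the constant or absent readings described above:

```python
# Illustrative sketch only: deriving wireless-communication features. Real
# controllers show fluctuating RSSI and occasional packet loss; constant or
# absent values are the suspicious cases described above.
import numpy as np

def wireless_features(rssi_samples: list[float], dropped_per_interval: list[int]) -> dict:
    rssi = np.asarray(rssi_samples, dtype=float)
    return {
        "rssi_present": float(rssi.size > 0),
        "rssi_stddev": float(rssi.std()) if rssi.size else 0.0,  # ~0 is suspicious
        "dropped_total": float(sum(dropped_per_interval)),
        "dropped_varies": float(len(set(dropped_per_interval)) > 1),
    }
```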
In some embodiments, determining the score 118 is further based on determining that biometric data received from the client machine 104 matches biometric data maintained in the database 116, and/or that the private key 632 used to encrypt data received from the client machine 104 matches one of a plurality of private keys maintained in the database, where the biometric data and private keys maintained in the database are associated with a plurality of trusted handheld devices 602. The biometric data and/or the private key 632 may serve as a trusted proxy proving to the remote computing system 106 that an authentic handheld controller 602 trusted by the system 106 is sending the game control data 606.
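The specification does not fix a cipher for the private key 632. As one hedged illustration, an HMAC over the payload with a per-controller secret can play the role of the trusted-proxy check, with the server comparing against keys maintained in the database 116; HMAC-SHA256 is an assumption here, not the patented mechanism.

```python
# Illustrative sketch only: a stand-in for the trusted-device check, assuming
# the "private key" acts as a per-controller secret that authenticates
# payloads. HMAC-SHA256 is an assumption, not the patented mechanism.
import hmac
import hashlib

TRUSTED_DEVICE_KEYS = {b"example-controller-secret"}  # stand-in for database 116

def device_is_trusted(payload: bytes, tag: bytes) -> bool:
    """Return True if any trusted key authenticates the payload."""
    return any(
        hmac.compare_digest(tag, hmac.new(key, payload, hashlib.sha256).digest())
        for key in TRUSTED_DEVICE_KEYS
    )
```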
At step 3 in fig. 6, utilizing the machine learning scores 118 determined for the logged-in, registered user accounts 120 based on the new sensor data 604 received from the corresponding client machines 104, the computing system 106 may match players 102 together in multiplayer matches for playing the video game 110 in multiplayer mode by providing match assignments 124 to the client machines 104, assigning subsets of the logged-in user accounts 120 to different ones of multiple matches based at least in part on the machine learning scores 118 determined for those logged-in user accounts 120. In this manner, if the trained machine learning model 216 assigns a low (e.g., below a threshold) score 118 to a first subset of the logged-in user accounts 120 and a high (e.g., above the threshold) score 118 to a second subset of the logged-in user accounts 120, the first subset may be assigned to a first match of the plurality of matches and the second subset may be assigned to a different, second match of the plurality of matches. This may include assigning a user account 120 to a match at the beginning of the video game 110 (e.g., before initiating a match) and/or reassigning a user account 120 to a new match mid-stream in the video game 110 (e.g., after initiating a match). For example, if a first logged-in user account 120 is initially assigned to a first match specified for human players 102, and then, based on real-time data provided as input to the trained machine learning model 216, a machine learning score 118 is generated for that user account 120 indicating that the first user account 120 is likely associated with a non-human player (e.g., software) that is synthesizing and/or modifying game control data 606, the computing system 106 may reassign the first user account to a second match specified for non-human players (e.g., cheating players using software to synthesize and/or modify game control data). In some embodiments, when a client machine 104 sends game control data 606 without sending raw, unfiltered sensor data 604 along with the game control data 606, the user account 120 associated therewith may be considered untrusted based on the absence of the raw sensor data 604. In other embodiments, game control data 606 may be provided as input to the trained machine learning model 216 to generate a score 118 for such a user account even in the absence of raw sensor data 604, or a machine learning score 118 may be generated for such a user account 120 in any other suitable manner described herein to determine a propensity for cheating by the player associated with the user account 120.
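A minimal sketch of the threshold-based assignment in step 3 follows; the threshold value and container shapes are illustrative assumptions, not prescribed by the specification:

```python
# Illustrative sketch only: bucketing logged-in accounts into a match for
# trusted players and a match for suspected non-human players.
TRUST_THRESHOLD = 0.5  # illustrative; the patent does not fix a value

def assign_matches(scored_accounts: dict[str, float]) -> dict[str, list[str]]:
    matches: dict[str, list[str]] = {"human_match": [], "suspect_match": []}
    for account_id, score in scored_accounts.items():
        bucket = "human_match" if score >= TRUST_THRESHOLD else "suspect_match"
        matches[bucket].append(account_id)
    return matches

print(assign_matches({"acct_a": 0.92, "acct_b": 0.11}))
```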
At step 4 in fig. 6, the computing system 106 may cause execution of the video game 110 in the assigned match for each user account 120. In a streaming implementation, this may involve capturing the state of the video game 110 executing on the remote system 106, encoding the video and audio data into bits, and transmitting the encoded bits over the computer network 108 from the remote computing system 106 to the client machine 104, where an application executing on the client machine 104 (e.g., a video game client) may decode the bits to output the image of a given frame via a display and output the audio of the given frame via a speaker of the client machine 104 (or via a headset connected to the client machine). It should be understood that, in some embodiments, the computer network 108 of fig. 1 may represent a Local Area Network (LAN), as may be the case in an "in-home" video game streaming scenario. In such a scenario, the computing system 106 may represent a host computing system located in or near the same geographic location as the client machine 104. In a Wide Area Network (WAN) or LAN implementation, the client machines 104 may be implemented as thin clients configured to send data to, and receive data from, the computing system 106 with minimal data processing on the client machines 104 themselves (e.g., without executing the video game 110 on the client machines 104 themselves). In a thin-client implementation, the game control data 606 and the sensor data 604 may be received by the computing system 106, and the computing system 106 may send frames of image data and audio data to the client machine 104 for presentation on a display associated with the client machine 104. In implementations where a client application on each client machine 104 executes the video game 110, at step 4, information may be sent to each client application executing on a client machine 104 to cause execution of the video game 110 on that client machine 104 in the assigned match for the logged-in user account 120 in question. Where players are grouped into matches based at least in part on the machine-learned scores 118, the in-game experience of at least some groups of players 102 may be improved, because the system may group players expected to behave badly (e.g., by cheating), including non-human players (e.g., software designed for cheating), into the same match, thereby keeping the misbehaving players isolated from other players who wish to play the video game 110 legitimately.
Fig. 7 is a flow diagram of an exemplary process 700 for training a machine learning model 216 that uses data, including sensor data 604, to predict the probability that game control data 606 received from a client machine 104 has been generated by a handheld device 602. For discussion purposes, the process 700 is described with reference to the previous figures.
At 702, the computing system 106 may provide users 102 with access to a video game service. For example, the computing system 106 may allow users to access and browse a catalog of video games 110, modify user profiles, conduct transactions, participate in social media activities, and perform other similar actions. The computing system 106 may distribute video games 110 (and content thereof) to client machines 104 as part of the video game service. In an illustrative example, a user 102 having access to the video game service may load an installed client application, log in using a registered user account, select a desired video game 110, and execute the video game 110 on his/her client machine 104 via the client application, such as by streaming the video game 110, by downloading the video game 110 and executing it locally on the client machine 104, or via any other suitable game distribution/execution model.
At 704, the computing system 106 may collect and store data associated with user accounts 120 registered with the video game service. This data may include at least the sensor data 604 described herein, but may also include the data 114 and the game control data 606 described herein. This data may be collected at block 704 at any suitable time, such as whenever a user 102 accesses the video game service with their registered user account 120 and uses the video game platform, such as to play a video game 110 thereon. The data generated locally at the client machines 104 and/or by the handheld devices 602 can be collected in real time as the data is generated (e.g., by receiving streaming data from the client machines 104 in real time), at defined intervals, and/or in response to events (e.g., when the user account 120 exits the video game, when network bandwidth is sufficient to transmit the data, etc.). Over time, a large collection of data associated with registered user accounts 120 becomes available to the computing system 106.
At 706, the computing system 106 may access, via the training component 210, (historical) data including (historical) sensor data 604 associated with a sampling set of user accounts 120 registered with the video game service. The (historical) sensor data 604 may have been received from client machines 104 along with (historical) game control data 606 that was processed by the video game 110, such as to control aspects of the video game 110. Thus, the (historical) sensor data 604 may be associated with the corresponding (historical) game control data 606 generated at the same time. In an example, the sensor data 604 accessed at block 706 may include raw sensor data 604 generated by one or more physical sensors 610 of handheld devices 602 as users 102 operated those handheld devices 602 to play video games 110 on the video game platform.
At 708, the computing system 106 can tag, via the training component 210, each user account 120 of the sampling set of user accounts 120 with a label indicating whether the (historical) game control data 606 associated with that user account 120 was generated by a handheld device 602 while a human player held and operated the device 602 to generate the (historical) game control data 606 associated with the user account 120.
At 710, the computing system 106 may train, via the training component 210, a machine learning model 216 using the (historical) sensor data 604, and possibly additional data as described herein (e.g., (historical) game control data 606, (historical) wireless communication data, etc.), as training data, to obtain the trained machine learning model 216. As shown in sub-box 712, training the machine learning model 216 at block 710 may include setting weights for machine learning. These weights may be applied to a set of features derived from the historical sensor data 604. In some embodiments, the weights set at block 712 may be applied to parameters internal to the machine learning model (e.g., weights of neurons in hidden layers of a neural network). These internal parameters of the machine learning model 216 may or may not map one-to-one to individual input features in the set of features. As indicated by the arrow from block 710 to block 704, the machine learning model 216 may be retrained using updated (historical) data, including updated (historical) sensor data 604, to obtain a newly trained machine learning model 216 adapted to recent player behavior. This allows the machine learning model 216 to adapt automatically to changing player behavior over time.
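The retraining loop from block 710 back to block 704 might look like the following sketch, continuing the hypothetical training code above; the scheduling and data-fetching details are assumptions:

```python
# Illustrative sketch only: periodic retraining on updated historical data so
# the model 216 adapts to changing player behavior over time.
def retrain(model, fetch_updated_history):
    X_updated, y_updated = fetch_updated_history()  # updated (historical) data
    model.fit(X_updated, y_updated)                 # yields the new trained model 216
    return model
```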
FIG. 8 is a flow diagram of an exemplary process 800 for utilizing the trained machine learning model 216 to determine a trust score for a user account, the trust score relating to a probability that game control data 606 was generated by a handheld device 602. For discussion purposes, the process 800 is described with reference to the previous figures. Further, as shown by the off-page reference "C" in figs. 7 and 8, the process 800 may continue from block 710 of the process 700.
At 802, the computing system 106 may receive game control data 606 and sensor data 604 from a client machine 104. The game control data 606 and the sensor data 604 may be associated with a user account 120 logged into, and registered with, the video game service. The game control data 606 may be data to be processed by the video game 110 for controlling aspects of the video game 110. The sensor data 604 received at block 802 may include, without limitation, capacitance values, resistance values, displacement values, velocity values, acceleration values, temperature values, humidity values, and the like. In some embodiments, these values represent raw (e.g., unfiltered, unamplified) values, assuming the values were generated by actual physical sensors 610 of a handheld device 602. Further, some or all of the data received at block 802 may be encrypted (e.g., the device 602 may have encrypted the data using the private key 632 before sending the data to the remote system 106).
At 804, the computing system 106 may also receive biometric data from the client machine 104. For example, an input device (e.g., a touch sensor, a camera, a microphone, etc.) of the handheld device 602 may obtain biometric data from a user of the handheld device 602, and the handheld device 602 may transmit the obtained biometric data to the client machine 104, and the client machine 104 may forward the biometric data to the remote computing system 106 over the computer network 108. The biometric data may also be received as encrypted data (e.g., the device 602 may encrypt the biometric data using the private key 632 before sending the data to the remote system 106).
At 806, the computing system 106 may evaluate the private key 632 and/or the biometric data. As shown in sub-boxes 808, 810, and 812, this evaluation may include various operations. For example, the remote system 106 may decrypt any encrypted data received at blocks 802 and/or 804 to verify that the expected private key 632 was used to encrypt the received data. At sub-box 808, the computing system 106 may determine that the biometric data and/or the private key 632 are associated with the user account 120 that is associated with the game control data 606 and the sensor data 604 received at block 802. This may be determined by comparing the received biometric data to biometric data already associated with the user account 120 and/or by determining that the encrypted data was in fact encrypted using a private key 632 already associated with the user account 120.
At sub-box 810, the computing system 106 may determine that the biometric data matches biometric data maintained in the database 116 and/or that the private key 632 matches one of a plurality of private keys maintained in the database 116. The biometric data and/or private keys maintained in the database 116 may be associated with a plurality of trusted handheld devices 602, such that, if the private key 632 derived from the encrypted data matches one of the private keys in the database 116, and/or if the received biometric data matches any of the biometric data in the database 116, the computing system 106 may treat the matching data as a trusted proxy indicating that the data was sent from a trusted handheld device 602.
At sub-box 812, the computing system 106 may determine data associated with the biometric data and/or the private key 632. For example, the computing system 106 may determine a length of time that the biometric data and/or the private key 632 have been associated with the user account 120. The computing system 106 may additionally or alternatively determine a number of times the biometric data and/or the private key 632 have been used with, or used to log into, the user account 120. The computing system 106 may additionally or alternatively determine a number of times the private key 632 has been switched between different user accounts 120 (such as by being disassociated/deregistered from a first user account 120 and subsequently associated/registered with a second user account 120).
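The quantities determined at sub-box 812 reduce to simple aggregates over a key's registration history; a hedged sketch, with hypothetical record fields, follows:

```python
# Illustrative sketch only: key-provenance features from sub-box 812.
# Record fields are hypothetical stand-ins for entries in the database 116.
from datetime import datetime, timezone

def key_provenance(record: dict) -> dict:
    now = datetime.now(timezone.utc)
    return {
        "key_age_days": (now - record["registered_at"]).days,  # time with account
        "key_use_count": record["use_count"],                  # uses/logins
        "key_account_switches": record["account_switches"],    # re-registrations
    }
```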
At 814, the computing system 106 may provide data as input to the trained machine learning model 216 via the scoring component 212. As shown in sub-boxes 816, 818, and 820, in some embodiments this may involve providing various types of data as input to the trained machine learning model 216 at block 814. For example, at sub-box 816, the sensor data 604 received at block 802 may be provided as input to the trained machine learning model 216. At sub-box 818, the game control data 606 received at block 802 may be provided as additional input to the trained machine learning model 216. At sub-box 820, the private key 632 data and/or the biometric data may be provided as further additional input to the trained machine learning model 216, such as the value of the private key 632 and/or any of the data determined at sub-box 812. In some implementations, wireless communication data (e.g., wireless signal strength data, data related to dropped data packets, etc.) can be provided as input to the trained machine learning model 216 at block 814. In some implementations, the data 114 may be provided as input to the trained machine learning model 216 at block 814, as described herein. In general, any quantifiable data included in the set of features used to train the machine learning model 216 may be provided as input to the trained machine learning model 216, as described herein. In embodiments where the sensor data 604 is provided as input to the model 216, the sensor data 604 constitutes the unknown input on which the user account 120 in question is scored.
At 822, the computing system 106 may generate, via the scoring component 212, as output of the trained machine learning model 216, a trust score 118 associated with the user account 120 that is associated with the game control data 606 received at block 802. The score 118 relates to a probability that the game control data 606 was generated by a handheld device 602 associated with the client machine 104 that transmitted the game control data 606 to the remote computing system 106. The score 118 may equally be considered to relate to a probability that the game control data 606 was generated by a human, rather than synthesized and/or modified by software (e.g., auto-aiming "aim bot" software). Because various types of data may be input to the trained machine learning model 216 to generate the score 118, determining the score 118 may be based at least in part on the sensor data 604, based at least in part on the game control data 606, based at least in part on determining that the private key matches a private key in the database 116, and/or based at least in part on the length of time the private key 632 has been associated with the user account 120, among other things. In some embodiments, the score 118 is a variable normalized to the range [0, 1]. The trust score 118 may have a monotonic relationship with the probability that the game control data 606 was generated by a handheld device 602. The relationship between the score 118 and the actual probability associated with a particular behavior, while monotonic, may or may not be linear.
Accordingly, the process 800 represents a machine learning scoring method in which a score 118 (e.g., a trust score 118) is determined for a user account 120, the score indicating a probability that game control data 606 was generated by a handheld device 602. The trust score 118 generated in the process 800 may thus indicate a probability that a player is engaged in non-cheating behavior by playing the video game 110 with a real handheld device 602. The use of the machine learning model 216 in this scoring process makes it possible to distinguish between human players and non-human players (e.g., software designed for cheating).
FIG. 9 is a flow diagram of an exemplary process 900 for assigning user accounts 120 to different matches of a multiplayer video game 110 based on machine-learned trust scores 118 that relate to probabilities that game control data 606 was generated by handheld devices 602. For discussion purposes, the process 900 is described with reference to the previous figures. Further, as illustrated by the off-page reference "D" in figs. 8 and 9, the process 900 may continue from block 822 of the process 800.
At 902, for a logged-in user account 120 that has been scored according to the process 800, the computing system 106 may assign, via the matching component 214, the logged-in user account 120 to an assigned match 218 of a plurality of matches 218, the assignment based at least in part on the machine learning score 118 generated for, and associated with, the user account 120. The plurality of matches 218 may include at least a first match 218(1) specified for human players and a second match 218(2) specified for non-human players, such as those that use software to synthesize and/or modify game control data 606. As mentioned, any number of matches may be defined, such that further subdivisions and additional matches may be available for assignment to user accounts 120 at block 902. For example, the computing system 106 may define, via the matching component 214, any suitable number of matches into which players 102 are to be grouped for playing the video game 110 in multiplayer mode, according to various factors, including demand, capacity, and other factors that play a role in the matchmaking process.
As shown in sub-box 904, the assignment of a user account 120 to its assigned match may be based on a threshold score. For example, the matching component 214 may determine that the score 118 associated with a logged-in user account 120 is less than a threshold score and may assign the logged-in user account 120 to the second match 218(2) specified for non-human players based on the score being less than the threshold score. Similarly, the matching component 214 may determine that the score 118 associated with a logged-in user account 120 is equal to or greater than the threshold score and may assign the logged-in user account 120 to the first match 218(1) specified for human players based on the score being equal to or greater than the threshold score. However, this is merely one exemplary way of providing match assignments 124, and other techniques are also contemplated. For example, a clustering algorithm or other statistical method may be used in addition to, or instead of, a threshold score. In an example, the trust scores 118 may be used to preferentially match together user accounts (players) having "similar" trust scores. Given the natural tendency of the distribution of trust scores 118 across the plurality of user accounts 120 to be predominantly bimodal, grouping trust scores 118 together based on a similarity metric (e.g., a distance metric, a variance metric, etc.) may provide results similar to those obtained using a threshold score. However, where the matchmaking pool is small (e.g., players who want to play a less popular game mode in a small geographic area), it may be useful to group user accounts together for matchmaking purposes using a similarity metric, as this may provide a more granular approach that allows the relative importance of trust scores versus other matchmaking factors (such as skill level) to be adjusted incrementally. In some embodiments, multiple thresholds may be used to "bucketize" the user accounts 120 into multiple different matches.
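As one hedged illustration of the multi-threshold "bucketize" alternative mentioned above (the boundary values are assumptions), accounts can be mapped to trust buckets before other matchmaking factors are applied:

```python
# Illustrative sketch only: "bucketizing" trust scores with multiple
# thresholds; the boundaries below are arbitrary examples.
import bisect

BUCKET_BOUNDARIES = [0.2, 0.5, 0.8]  # yields four trust buckets

def trust_bucket(score: float) -> int:
    """Map a [0, 1] trust score to a bucket index 0..3."""
    return bisect.bisect_right(BUCKET_BOUNDARIES, score)

print([trust_bucket(s) for s in (0.05, 0.35, 0.65, 0.95)])  # -> [0, 1, 2, 3]
```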
As shown in sub-box 906, other factors besides the trust score 118 may be considered in the match assignment at block 902. For example, the match assignment 124 determined at block 902 may be further based on a skill level of a player associated with the logged-in user account, an amount of time the logged-in user account has been waiting to be placed in one of the plurality of matches, a geographic region associated with the logged-in user account, and/or other factors.
As shown in sub-box 908, the assignment operation at block 902 may include a reassignment operation, wherein a user account 120 currently assigned to the first match 218(1) is reassigned to the second match 218(2), or vice versa. For example, the reassignment at sub-box 908 may include removing the user account 120 from the first match 218 based on the trust score 118 generated for the user account 120 and reassigning the user account 120 to the second match 218. This means that trust scoring may occur at runtime, while the video game is being played, and a player may be reassigned mid-game if a reasonably high probability that the player is cheating (e.g., using "aim bot" software) is predicted based on the machine-learned trust score 118. Thus, although match assignments may be made based on the machine-learned trust scores 118 before the multiplayer video game 110 begins, the computing system 106 may regenerate the trust scores 118 at any suitable time during the game to determine whether a player is to be reassigned to a different match.
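Runtime reassignment per sub-box 908 might be sketched as follows, continuing the earlier hypothetical scoring code; the match containers and the threshold are assumptions:

```python
# Illustrative sketch only: re-score an account on freshly streamed data and
# move it to the suspect match if its trust score has collapsed mid-game.
def maybe_reassign(account_id, x_live, model, human_match, suspect_match, threshold=0.5):
    score = model.predict_proba(x_live.reshape(1, -1))[0, 1]
    if score < threshold and account_id in human_match:
        human_match.remove(account_id)     # remove from first match 218(1)
        suspect_match.append(account_id)   # reassign to second match 218(2)
    return score
```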
At 910, the computing system 106 may cause execution of the video game 110 in an assigned match (e.g., one of the first match 218(1) or the second match 218(2)) for a logged-in user account 120 associated with the client machine 104 used to play the video game 110. For example, computing system 106 may cause execution of video game 110 in a first match 218(1) for a first user account 120(1) associated with a first trust score 118(1), and may cause execution of video game 110 in a second match 218(2) for a second user account 120(2) associated with a second trust score 118 (2). In a streaming implementation, the video game 110 may be executed on the remote computing system 106 at block 910 and streamed to the client machine 104. In other implementations, the remote computing system 106 may provide control instructions to a client application executing the video game 110 on the client machine 104 to execute the video game 110 in the assigned match.
Because the machine-learned trust scores 118 are used as a factor in the matchmaking process, users who wish to play a video game in multiplayer mode in the intended manner may be provided with an improved gaming experience. This is because the techniques and systems described herein can be used to match together players that are likely to behave badly (e.g., by cheating through synthesizing and/or modifying game control data 606 using software), including non-human players, and to isolate these players from other trusted players who are likely playing the video game legitimately.
Although the subject matter has been described in language specific to structural features, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features described. Rather, the specific features are disclosed as exemplary forms of implementing the claims.

Claims (20)

1. One or more non-transitory computer-readable media storing computer-executable instructions that, when executed by one or more processors, cause performance of operations comprising:
receiving game control data and sensor data from a client machine, the game control data and the sensor data associated with a user account registered with a video game service;
providing the sensor data as input to a trained machine learning model;
generating a trust score associated with the user account as an output of the trained machine learning model, the trust score relating to a probability that the game control data was generated by a handheld game controller associated with the client machine;
assigning the user account to an assigned match of a plurality of matches for playing the video game in a multiplayer mode based at least in part on the trust score, the plurality of matches including at least a first match specified for a human player and a second match specified for a non-human player, the non-human player using software to generate synthetic or modified game control data; and
causing execution of the video game in the assigned match for the user account.
2. The one or more non-transitory computer-readable media of claim 1, wherein the sensor data comprises raw sensor data generated by one or more physical sensors of the handheld game controller, and wherein the assigned match is the first match specified for the human player.
3. The one or more non-transitory computer-readable media of claim 1, wherein at least one of the game control data or the sensor data is received as encrypted data, the operations further comprising:
decrypting the encrypted data;
determining that a private key used to encrypt the encrypted data is associated with the user account; and
determining that the private key matches one of a plurality of private keys maintained in a database, the plurality of private keys associated with a plurality of trusted handheld game controllers,
wherein the generating the trust score is further based on the determining that the private key matches the one of the plurality of private keys.
4. The one or more non-transitory computer-readable media of claim 1, wherein the assigning the user account to the assigned match comprises:
removing the user account from the first match; and
reassigning the user account to the second match.
5. A method, the method comprising:
receiving, by a computing system, game control data and sensor data from a client machine, wherein the game control data and the sensor data are associated with a user account logged into a video game service, and wherein the game control data is to be processed by a video game for controlling an aspect of the video game;
determining, by the computing system, a score associated with the user account by:
providing the sensor data as input to a trained machine learning model; and
generating the score as an output of the trained machine learning model, the score indicating a probability that the game control data has been generated by a handheld device associated with the client machine;
assigning the user account to an assigned match of a plurality of matches based at least in part on the score, the assigned match comprising at least one of a first match or a second match;
causing, by the computing system, execution of the video game in the assigned match for the user account.
6. The method of claim 5, wherein the sensor data comprises raw sensor data generated by one or more physical sensors of the handheld device.
7. The method of claim 6, wherein the raw sensor data comprises at least one of raw gyroscope data, raw accelerometer data, or raw capacitive sensor data.
8. The method of claim 6, wherein the handheld device is a handheld game controller.
9. The method of claim 5, further comprising: prior to said determining said score:
accessing, by the computing system, historical sensor data associated with a sampling set of user accounts registered with the video game service, the historical sensor data having been received from a client machine with historical game control data processed by the video game or a different video game to control an aspect of the video game or the different video game;
tagging each user account of the sampling set of user accounts with a tag indicating whether the historical game control data associated with the user account was generated by a handheld device while a human player held and operated the handheld device to generate the historical game control data associated with the user account; and
training a machine learning model using the historical sensor data as training data to obtain the trained machine learning model.
10. The method of claim 9, wherein the training the machine learning model comprises setting a weight for at least one feature of a set of features derived from the historical sensor data or parameters internal to the machine learning model.
11. The method of claim 5, wherein the determining the score further comprises:
providing the game control data as additional input to the trained machine learning model in addition to providing the sensor data as the input to the trained machine learning model,
wherein the score is generated based at least in part on the game control data provided to the trained machine learning model as the additional input.
12. The method of claim 5, wherein at least one of the game control data or the sensor data is received as encrypted data, the method further comprising:
decrypting, by the computing system, the encrypted data;
determining, by the computing system, that a private key used to encrypt the encrypted data is associated with the user account; and
determining, by the computing system, that the private key matches one of a plurality of private keys maintained in a database, the plurality of private keys associated with a plurality of trusted handheld devices,
wherein the determining the score is further based on the determining that the private key matches the one of the plurality of private keys.
13. The method of claim 12, further comprising determining a length of time that the private key has been associated with the user account, wherein the determining the score is further based on the length of time.
14. The method of claim 5, wherein assigning the user account to the at least one of the first match or the second match based at least in part on the score comprises determining whether the score satisfies a threshold score.
15. A system, the system comprising:
one or more processors; and
a memory storing computer-executable instructions that, when executed by the one or more processors, cause the system to:
receive game control data and sensor data from a plurality of client machines, wherein the game control data and the sensor data are associated with logged-in user accounts of a plurality of user accounts registered with a video game service, and wherein the game control data is to be processed by a video game for controlling one or more aspects of the video game;
determine scores for the logged-in user accounts, wherein each of the scores is determined by:
providing the sensor data associated with a single user account as input to a trained machine learning model; and
generating a score associated with the single user account as an output of the trained machine learning model, the score relating to a probability that the game control data associated with the single user account was generated by a handheld device associated with a client machine of the plurality of client machines;
assign a first subset of the logged-in user accounts to a first match and a second subset of the logged-in user accounts to a second match based at least in part on the scores determined for the logged-in user accounts; and
cause execution of the video game in the first match for the first subset of the logged-in user accounts and in the second match for the second subset of the logged-in user accounts.
16. The system of claim 15, wherein the sensor data associated with the first subset of the logged-in user accounts comprises raw sensor data generated by one or more physical sensors of a handheld device.
17. The system of claim 16, wherein the raw sensor data comprises at least one of raw gyroscope data, raw accelerometer data, or raw capacitive sensor data.
18. The system of claim 15, wherein the respective ones of the scores are further determined by:
in addition to providing the sensor data associated with the single user account as the input to the trained machine learning model, providing the game control data associated with the single user account as additional input to the trained machine learning model,
wherein the score associated with the single user account is generated based at least in part on the game control data associated with the single user account provided to the trained machine learning model as the additional input.
19. The system of claim 15, wherein at least one of the game control data or the sensor data is received as encrypted data, and wherein the computer-executable instructions, when executed by the one or more processors, further cause the system to:
decrypting the encrypted data;
determining that a private key used to encrypt the encrypted data is associated with one of the logged-in user accounts; and
determining that the private key matches one of a plurality of private keys maintained in a database, the plurality of private keys associated with a plurality of trusted handheld devices,
wherein the score associated with the one of the logged-in user accounts is further determined based on determining that the private key matches the one of the plurality of private keys.
20. The system of claim 19, wherein the computer-executable instructions, when executed by the one or more processors, further cause the system to determine a length of time that the private key has been associated with the one of the logged-in user accounts, wherein the score associated with the one of the logged-in user accounts is further based on the length of time.
CN202080073805.2A 2019-10-24 2020-10-22 Machine learning trust scoring based on sensor data Pending CN114630701A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US16/663,041 US11052311B2 (en) 2018-09-07 2019-10-24 Machine-learned trust scoring based on sensor data
US16/663,041 2019-10-24
PCT/US2020/056920 WO2021081248A1 (en) 2019-10-24 2020-10-22 Machine-learned trust scoring based on sensor data

Publications (1)

Publication Number Publication Date
CN114630701A true CN114630701A (en) 2022-06-14

Family

ID=75620846

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080073805.2A Pending CN114630701A (en) 2019-10-24 2020-10-22 Machine learning trust scoring based on sensor data

Country Status (5)

Country Link
EP (1) EP4025312A4 (en)
JP (1) JP2022552878A (en)
KR (1) KR20220081992A (en)
CN (1) CN114630701A (en)
WO (1) WO2021081248A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117537786A (en) * 2024-01-09 2024-02-09 中国海洋大学 Multi-sensor rapid connection method, device and system for deep sea submersible

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020187828A1 (en) * 2001-06-12 2002-12-12 Jamal Benbrahim Method and apparatus for securing gaming machine operating data
US7169050B1 (en) * 2002-08-28 2007-01-30 Matthew George Tyler Online gaming cheating prevention system and method
US7677970B2 (en) * 2004-12-08 2010-03-16 Microsoft Corporation System and method for social matching of game players on-line
US8221238B1 (en) * 2005-04-19 2012-07-17 Microsoft Corporation Determination of a reputation of an on-line game player
JP4765384B2 (en) * 2005-04-21 2011-09-07 株式会社セガ Game system
US20100203960A1 (en) * 2005-07-20 2010-08-12 Wms Gaming Inc. Wagering game with encryption and authentication
US7976376B2 (en) * 2007-08-13 2011-07-12 Bally Gaming, Inc. Methods for providing amusement
US20100323794A1 (en) * 2009-06-18 2010-12-23 Yui-Zhang Su Sensor based human motion detection gaming with false positive detection
US9839838B1 (en) * 2013-05-14 2017-12-12 Take-Two Interactive Software, Inc. System and method for online community management
EP3259678B1 (en) * 2015-02-18 2021-03-31 OS - New Horizon Personal Computing Solutions Ltd. Device and systems to securely remotely access, manage and store an enterprise's data, using employees' mobile devices
US20170193845A1 (en) * 2015-12-30 2017-07-06 International Business Machines Corporation Detection of anomalous behavior in digital education settings based on portable device movement
US10319182B2 (en) * 2016-02-15 2019-06-11 Zynga Inc. Automatically identifying potentially fraudulent gaming accounts
US10207189B1 (en) * 2018-04-16 2019-02-19 Huuuge Global Ltd. System and method for determining type of player in online game
US11133940B2 (en) * 2018-12-04 2021-09-28 Journey.ai Securing attestation using a zero-knowledge data management network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117537786A (en) * 2024-01-09 2024-02-09 中国海洋大学 Multi-sensor rapid connection method, device and system for deep sea submersible
CN117537786B (en) * 2024-01-09 2024-04-26 中国海洋大学 Multi-sensor rapid connection method, device and system for deep sea submersible

Also Published As

Publication number Publication date
WO2021081248A1 (en) 2021-04-29
EP4025312A4 (en) 2023-09-20
KR20220081992A (en) 2022-06-16
JP2022552878A (en) 2022-12-20
EP4025312A1 (en) 2022-07-13

Similar Documents

Publication Publication Date Title
US11052311B2 (en) Machine-learned trust scoring based on sensor data
US11504633B2 (en) Machine-learned trust scoring for player matchmaking
US11717748B2 (en) Latency compensation using machine-learned prediction of user input
CN102592059B (en) Network game system, game and server unit
KR20170104940A (en) Multiplayer video game matchmaking optimization
WO2021247267A1 (en) Methods and systems for processing disruptive behavior within multi-player video game
US11253785B1 (en) Incentivizing fair gameplay through bot detection penalization within online gaming systems
US20230074342A1 (en) Method to detect and counteract suspicious activity in an application environment
US20240066412A1 (en) Gaming controller with disc wheel and display screen
US11033813B2 (en) Latency erasure
CN114630701A (en) Machine learning trust scoring based on sensor data
CA3087629C (en) System for managing user experience and method therefor
US10279266B2 (en) Monitoring game activity to detect a surrogate computer program
US11465055B2 (en) Systems and methods for detecting and preventing fraudulent in-app activities
JP2015231482A (en) Program and game system
US11745101B2 (en) Touch magnitude identification as input to game
KR20190118273A (en) Apparatus and method for providing reservation function in a game
Bauer Using Metrics and Keystroke Dynamics in Games to measure Player Experience

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination