US20080305869A1 - Prevention of cheating in on-line interaction - Google Patents

Prevention of cheating in on-line interaction

Info

Publication number
US20080305869A1
US20080305869A1 (application US 12/103,522)
Authority
US
United States
Prior art keywords
threat
program
assets
game program
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/103,522
Inventor
Shmuel Konforty
Yitzhak Shimon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cognisafe Ltd
Original Assignee
Cognisafe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cognisafe Ltd
Priority to US12/103,522
Assigned to COGNISAFE LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KONFORTY, SHMUEL; SHIMON, YITZHAK
Publication of US20080305869A1
Status: Abandoned

Classifications

    • A63F13/12
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/70Game security or game management aspects
    • A63F13/75Enforcing rules, e.g. detecting foul play or generating lists of cheating players
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/70Game security or game management aspects
    • A63F13/77Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07FCOIN-FREED OR LIKE APPARATUS
    • G07F17/00Coin-freed apparatus for hiring articles; Coin-freed facilities or services
    • G07F17/32Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
    • G07F17/3241Security aspects of a gaming system, e.g. detecting cheating, device integrity, surveillance
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07FCOIN-FREED OR LIKE APPARATUS
    • G07F17/00Coin-freed apparatus for hiring articles; Coin-freed facilities or services
    • G07F17/32Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
    • G07F17/326Game play aspects of gaming systems
    • G07F17/3272Games involving multiple players
    • G07F17/3276Games involving multiple players wherein the players compete, e.g. tournament
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/55Details of game data or player data management
    • A63F2300/552Details of game data or player data management for downloading to client devices, e.g. using OS version, hardware or software profile of the client device
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/55Details of game data or player data management
    • A63F2300/5586Details of game data or player data management for enforcing rights or rules, e.g. to prevent foul play
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/6027Methods for processing data by generating or executing the game program using adaptive systems learning from user actions, e.g. for skill level adjustment

Definitions

  • the present invention relates generally to computer systems and software, and specifically to detection of cheating in on-line interactions, such as games.
  • Cheating is defined as an act of lying, deception, fraud, trickery, imposture, or imposition. Cheating is typically employed to create an unfair advantage, often at the expense of others. Fraud is a particular type of cheating, in which a victim is illegally deceived for the personal gain of the perpetrator.
  • U.S. Patent Application Publication 2007/0276521, whose disclosure is incorporated herein by reference, describes a method for maintenance of “community integrity” in a gaming network, in which devices interacting with a particular game are monitored. Indicia of the violation of certain rules that define fair game play may be identified, and a user and/or device engaged in illicit game play activity may be identified as a result. Other users in the gaming network may be informed of the particular user's previous illicit game activity.
  • European Patent Application EP 1669115 A1, whose disclosure is incorporated herein by reference, describes a system for conducting a game of chance using a communication network.
  • the players must have credentials with which to identify themselves remotely. If the players do not have these credentials, they must be issued by a certification authority and certification agent. To request credentials, the player downloads a player agent, which communicates with the certification agent using a secure communication protocol and digital certificate.
  • U.S. Pat. No. 7,169,050, whose disclosure is incorporated herein by reference, describes a system and method for prevention of cheating during online gaming in which a first computer system receives information regarding cheaters from a second computer system. Cheaters identified in this manner are prevented from online gaming on the first computer system.
  • a master database of cheaters resides on one or more master servers, which assemble a master list of cheaters aggregated from individual game servers. In this way, once a cheater is banned on one game server, information identifying the cheater is transmitted to the master databases of the master servers for distribution to the other game servers.
  • the program running on the computer communicates with a server, which monitors the activities of a community of participants.
  • the server verifies that the computer is being monitored by the program and provides an indication to the other members of the community that the user can be trusted not to cheat.
  • the user may similarly receive an indication whether each of the participants in a game is or is not running the monitoring program, and may thus choose to play only with trusted participants.
  • a method for preventing cheating by users of client computers running a network game program includes installing a monitoring program, independent of the network game program, on a group of the client computers so as to detect, using the monitoring program, an anomalous use of an asset of at least one of the client computers that is indicative of an attempt to cheat in the game program.
  • a message is conveyed over a network to a server from each of at least some of the client computers in the group, the message from each such client computer indicating that the monitoring program has been actuated on the client computer. Responsively to the message, a communication is received from the server at the client computer indicating which ones of the client computers have actuated the monitoring program.
  • the method includes displaying on the client computer a list of the client computers that have actuated the monitoring program, and receiving from a user of the client computer a selection, based on the list, of participants with whom to join in playing the game program.
  • the monitoring program may be configured so as to permit a user of the client computer to deactuate the monitoring program with respect to the game program, and conveying the message may include informing the server when the monitoring program is deactuated.
  • a method for preventing cheating by users of computers running a network game program includes installing a monitoring program, independent of the network game program, on the computer.
  • the network game program is run on the computer while detecting use of assets using the monitoring program so as to learn a pattern of normal utilization of the assets.
  • an anomalous utilization pattern of the assets is detected, which is indicative of a threat of cheating in the network game program, and a notification of the threat is output to a user of the computer.
  • detecting the use of the assets includes learning the pattern during at least one of installation of the game program and playing of the game program by the user.
  • detecting the use of the assets includes applying a threat map based on the use of the assets, and detecting the anomalous utilization pattern includes receiving an event associated with one of the assets, and associating the event with the threat map with a likelihood that is greater than a predetermined threshold.
  • the threat map relates to a first event
  • associating the event with the threat map may include receiving a second event that is not in the first threat map, and associating the second event with the threat map by a process of semantic inquiry.
  • the method may include updating the threat map responsively to the semantic inquiry by identifying a plurality of candidate threat maps, computing a respective hypothetical likelihood that the second event is associated with each of the candidate threat maps, and selecting one of the candidate threat maps for update based on the hypothetical likelihood.
  • running the network game program includes learning the pattern of the normal utilization using the monitoring program autonomously, independently of any identification of the assets by the user.
  • detecting the anomalous utilization pattern includes receiving an event indicative of a deviation from the pattern of normal utilization in the use of at least one asset selected from a group of the assets consisting of CPU utilization, network utilization, files and directories.
  • running the network game program includes calculating a normal centralism of an executable file during the normal utilization of the assets, and wherein detecting the anomalous utilization pattern includes detecting a deviation from the normal centralism.
  • the instructions cause the client computers to convey over a network to a server a message from each of at least some of the client computers in the group, the message from each such client computer indicating that the monitoring program has been actuated on the client computer, and responsively to the message, to receive from the server at the client computers a communication indicating which ones of the client computers have actuated the monitoring program.
  • a computer software product for preventing cheating by users of computers running a network game program including a computer-readable medium in which program instructions are stored, the instructions including a monitoring program for installation on a computer independently of the network game program, wherein the instructions cause the computer, while running the network game program, to detect use of assets using the monitoring program so as to learn a pattern of normal utilization of the assets, and to detect, during a session of the network game program, an anomalous utilization pattern of the assets, which is indicative of a threat of cheating in the network game program, and to output a notification of the threat to a user of the computer.
  • computing apparatus including an output device and a processor, which is configured to run a network game program, and to receive installation of a monitoring program independently of the network game program, wherein the monitoring program causes the processor to detect an anomalous use of an asset of the computing apparatus that is indicative of an attempt to cheat in the game program, and further causes the processor to convey over a network to a server a message indicating that the monitoring program has been actuated on the computing apparatus, and responsively to the message, to receive from the server a communication identifying other computers that have actuated the monitoring program, and to provide to a user of the computing apparatus, via the output device, a list of users of the other computers identified by the communication.
  • computing apparatus including an output device and a processor, which is configured to run a network game program, and to receive installation of a monitoring program independently of the network game program, wherein the monitoring program causes the processor, while running the network game program, to detect use of assets using the monitoring program so as to learn a pattern of normal utilization of the assets, and to detect, during a session of the network game program, an anomalous utilization pattern of the assets, which is indicative of a threat of cheating in the network game program, and to output a notification of the threat via the output device to a user of the computing apparatus.
  • FIG. 1 is a schematic, pictorial illustration of a system for on-line gaming, in accordance with an embodiment of the present invention
  • FIG. 2 is a block diagram that schematically illustrates elements of computer software for detection of cheating, in accordance with an embodiment of the present invention
  • FIG. 3 is a flow chart that schematically illustrates a method for learning patterns of asset use by a computer game, in accordance with an embodiment of the present invention
  • FIG. 4 is a flow chart that schematically illustrates a method for assessing threat potentials, in accordance with an embodiment of the present invention
  • FIG. 6 is a flow chart that schematically illustrates a method for game user learning, in accordance with an embodiment of the present invention
  • FIG. 7 is a flow chart that schematically illustrates a method for adjusting asset threat potentials, in accordance with an embodiment of the present invention.
  • FIG. 8 is a flow chart that schematically illustrates a method for updating statistical results in game user learning, in accordance with an embodiment of the present invention
  • FIG. 10 is a flow chart that schematically illustrates a method for inquiry management, in accordance with an embodiment of the present invention.
  • FIG. 12 is a flow chart that schematically illustrates a method for evaluating threat lines, in accordance with an embodiment of the present invention.
  • FIG. 13 is a flow chart that schematically illustrates a method for pseudo-semantic inquiry, in accordance with an embodiment of the present invention.
  • FIG. 1 is a schematic, pictorial illustration of a system 20 for on-line gaming, in accordance with an embodiment of the present invention.
  • Multiple participants 24 play a game together using respective client computers 22 , which are connected to communicate during the game via a network 26 , such as the Internet.
  • Each computer 22 comprises a processor 28 with suitable input and output devices, such as a video monitor 30 and a joystick 32 , as well as an interface to network 26 .
  • the game in question may be server-based or peer-to-peer: The principles of the present invention, as presented in detail hereinbelow, are not tied to a specific game or architecture.
  • the anti-cheating program that is described hereinbelow is capable of learning and monitoring multiple games, of various different types, that may be played using a given computer.
  • Although computers 22 are illustrated in FIG. 1 as personal desktop computers, the architecture and methods described hereinbelow are equally applicable to computing devices of other types, such as servers, as well as dedicated game consoles and mobile computing and communication devices.
  • At least some of client computers 22 are linked to a “trust net,” which is coordinated by a server 34 .
  • the client program informs the server of the identity of the participant who is using the computer by means of a unique identifier (such as a digital signature), and also informs the server of the game that the participant wishes to play.
  • the client program learns how the game in question uses the assets of the client computer, such as files, computational power, and communication resources. During the game, the client program monitors the use of these assets. Upon detecting an anomalous event, which may be indicative of an attempt to cheat during the game, the client program typically informs both participant 24 and server 34 . Such anomalous events may be indicative of either an attempt by another player to cheat against the participant or an attempt to cheat by the participant himself.
  • the server may keep records of anomalous events and the participants who were involved in them in order to assemble a list of known or suspected cheaters.
  • the client program on computers 22 is itself secured against tampering.
  • the program may be digitally signed, and server 34 may check the digital signature as part of the authentication process before the game.
  • Participant 24 may choose to inactivate the client program at certain times, but in such cases, server 34 will be informed that the client computer in question is not being monitored and is therefore susceptible to cheating.
  • Server 34 may give participants 24 information regarding which other players are currently members of the trust net, i.e., which players have the client program installed and active on their own computers. For example, as shown in FIG. 1 , the server may generate a window 38 on a display 36 listing players who are participating in or wish to participate in the game in question. A secure indicator 40 , controlled by the server, marks the names of players who are part of the trust net. If a given player has not installed the client program or has turned it off, the secure indicator will not appear next to his or her name. (Players with a history of cheating may also be marked by the server.) Based on the information in window 38 , participant 24 may choose to play only with trust net members.
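  • As a concrete illustration of this exchange, the following Python sketch simulates a client announcing that its monitoring program is active and receiving back the list of monitored participants. The message format, field names and class names are hypothetical, not taken from the patent:

    import json

    # Hypothetical trust-net server: tracks which participants are being monitored.
    class TrustNetServer:
        def __init__(self):
            self.monitored = {}   # participant id -> monitoring program active?

        def handle(self, message_json):
            msg = json.loads(message_json)
            if msg["type"] == "monitor_status":
                # The client reports that the monitoring program has been actuated
                # (or deactuated) on its computer.
                self.monitored[msg["participant"]] = msg["active"]
            # Reply with the list of currently monitored participants, so the client
            # can mark them with a secure indicator in its player window.
            trusted = sorted(p for p, active in self.monitored.items() if active)
            return json.dumps({"type": "trust_list", "trusted": trusted})

    # Hypothetical client side: announce monitoring status, then display the list.
    def announce_and_list(server, participant, active, game):
        reply = server.handle(json.dumps({"type": "monitor_status",
                                          "participant": participant,
                                          "active": active,
                                          "game": game}))
        return json.loads(reply)["trusted"]

    server = TrustNetServer()
    announce_and_list(server, "alice", True, "GunZ")
    print(announce_and_list(server, "bob", True, "GunZ"))   # ['alice', 'bob']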
  • the client program will still monitor the client computer and will alert the participant to anomalous events, which will protect the participant against some types of cheating, but without the more comprehensive protection afforded by the trust net.
  • FIG. 2 is a block diagram that schematically illustrates elements of a program 50 for detection of cheating, in accordance with an embodiment of the present invention.
  • Program 50 includes software modules and data structures that are used in learning and monitoring computer 22 .
  • the components of program 50 may be downloaded to computer 22 in electronic form, over a network, for example. Alternatively or additionally, these program components may be furnished and/or stored on tangible computer-readable media, such as optical, magnetic, or electronic storage media.
  • Program 50 implements a cognitive engineering architecture, based on the following principles, inter alia:
  • Rule base module 52 activates backward and forward chain reasoning algorithms to populate and enrich a full knowledge base 66 , including preliminary and conclusive information.
  • the rule base module may continually analyze the knowledge base in order to generate one of the following generic decisions with respect to each detected event:
  • Knowledge base 66 is typically divided between private and public knowledge information. The distinction between those two categories of information reflects the access and retrieval permissions for each category:
  • the private part of the knowledge base contains information that was gathered from a specific computational node, while the public part of the knowledge base contains common information provided and maintained centrally, by server 34 , for example.
  • Both private and public knowledge bases can share the same concept domain. Consolidation of the information from both the private and public knowledge bases generates the full concept domain.
  • Program 50 supports the following main session types:
  • Top-level modules of program 50 include rule base module 52 , sieve module 100 , a reasoning module 60 and a learning module 62 , which interact with knowledge base 66 and a number of subsidiary modules.
  • the components and functions of the program modules are described below:
  • Rule base module 52 manages the overall program state and the other modules. Functions of module 52 include:
  • Rule base module 52 manages the following main processes: sieve module 100; reasoning module 60 (including an inquiry manager 70, a threat map-based identification (TMBI) module 72, and TMSI module 74); and learning module 62 (including SWL module 76, GUL module 78 and TMU module 80).
  • Module 52 uses information that was gathered during the activity sessions, which is stored in a metadata table 88 .
  • Sieve module 100 manages data collection processes using sensor modules 104 , 106 , 108 , 110 .
  • Functions of the sieve module include:
  • the sieve module serves as database feeder, configuration manager and session manager.
  • the sieve module converts information from a string representation that is obtained from sensors 104 , 106 , 108 , 110 to a database representation.
  • the database feeder may use a flexible algorithm, based on external scripts, to enable the knowledge base architecture to be updated.
  • the database feeder typically receives input in the form of strings, containing name-value pairs separated by commas.
  • the scripts translate input fields or expressions based on input fields into database rows.
  • the configuration manager drops irrelevant sensor input. It may also use a flexible algorithm, based on external predicate scripts, which may be specified in an XML file.
  • the session manager separates sessions and may divide sessions into clusters.
  • the session manager encapsulates session-related information and provides this information to other modules.
  • the database feeder uses this session information in order to fill in corresponding fields in log data records.
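  • A minimal sketch of the database-feeder path described above, assuming that sensor input arrives as comma-separated name-value pairs and that the session manager supplies the session-related fields (the field names and example string are illustrative, not from the patent):

    # Illustrative sensor string: name-value pairs separated by commas.
    sensor_input = "sensor=file,api=delete_file,path=C:/game/data/x1.dat,pid=4242"

    def parse_sensor_string(line):
        # Convert the string representation obtained from a sensor into a row (dict).
        row = {}
        for pair in line.split(","):
            name, _, value = pair.partition("=")
            row[name.strip()] = value.strip()
        return row

    def feed(row, session_info):
        # The database feeder fills in the session-related fields provided by the
        # session manager before the row is stored as a log data record.
        row.update(session_info)
        return row

    session_info = {"session_id": 17, "cluster": "startup"}   # hypothetical fields
    print(feed(parse_sensor_string(sensor_input), session_info))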
  • Trust net module 54 supports communication with server 34 , as noted above. This module performs the following functions:
  • Threats warden module 56 collects information on computer 22 regarding local activities in order to inform server 34 of possible cheating.
  • User interface module 58 permits interaction between participant 24 and program 50 .
  • The main functions of this module include:
  • Learning module 62 contains modules 76-80, as mentioned above, which implement the main learning functionalities of program 50:
  • a service algorithms module 64 performs major mathematical computations used by program 50 .
  • Reasoning module 60 divides input data by type and activates modules 70 - 74 in order to apply the appropriate processing:
  • Knowledge base 66 serves as the repository of the relevant data enriched by semantic-type meta-information (data-objects-concepts) collected by the modules of program 50 , including relations between the objects and concepts.
  • the knowledge base serves the program modules and enables the program to continually learn the features of operation of the protected game software.
  • the adaptive learning properties of program 50 enable the same backbone software to be used to protect both games for which partial prior knowledge exists and games for which no prior knowledge exists at all.
  • the knowledge base contains the following groups of classes:
  • the logs of the knowledge base contain all incoming information, including information generated both by the computer itself and by components of program 50 .
  • Information generated by the computer may include, for example, operating system events “as is.”
  • Logs generated by program 50 may include, for example, program parameters or a log of events specific to a particular protected game.
  • the logs typically include user-level and system-level event logs regarding protected software, as well as overall system information.
  • the logs typically use the following knowledge classes:
  • GL 82 may include a protected software user-level events log, which contains information on the events that are specific and unique for the software that is being protected. If the protected software is a multi-user online game, for example, then the events can be of the type: “The user N 0001 has entered player group G 0001 ,” or “The user N 0002 has left the chat room,” or “My current shots-per-second rate is 26.7.”
  • the ontology frame of this class includes:
  • GL 82 may also include a protected software system-level events log class, which contains a detailed journal of system events based on API commands. Examples of such events may include “change process priority,” “delete directory,” “edit file permissions,” and “start process.”
  • the ontology frame of this class includes:
  • Event table 84 may include an overall system information log knowledge class, which contains a detailed journal of system events based on API commands, similar to those in the GL table.
  • the ontology frame of this class includes:
  • the threat knowledge group of classes in the knowledge base typically includes the following classes:
  • TPT 90 contains knowledge about the measure of threat potential of specific elements (structures) of objects or groups of objects or specific situations or ranges of situations. For example, it may contain the threat potential value of an image (executable file) of a process or of a group of APIs, or the threat potential value of a situation in which a specific API is applied to any file in a specific directory.
  • Each instance of this class is a set (collection, un-indexed sequence) of any number of instances of the threat lines class. Since the threat lines that build up the TPT class also build up the threat map (TM) class, the TPT class is a subspace of the TM domain. The TPT class provides a rough representation of the TM class in order to reduce computational cost.
  • the threat lines class defines elemental test conditions. It includes:
  • the threat elements knowledge class contains the main part of each threat line. A number of different threat lines may contain the same threat element.
  • the frame of a threat element includes:
  • Threat maps 94 use the threat maps and distances knowledge class, which contains:
  • Sensor modules 104 , 106 , 108 , 110 gather information regarding the current activity and overall machine state of computer 22 .
  • the sensor modules are small program modules, which perform the following sorts of functions:
  • the specific sensor modules shown in FIG. 2 include the following:
  • FIG. 3 is a flow chart that schematically illustrates a method for learning patterns of asset use by a computer game, in accordance with an embodiment of the present invention.
  • the user submits a request, via UI module 58 , for program 50 to learn a new game, at a new game selection step 120 .
  • the UI module opens a dialog window asking the user to specify the installation file of the game in question, at a file request step 122 .
  • the user provides (or browses for) the full path of the game program, at a file provision step 124 .
  • the UI module now retrieves the installation file, at a file retrieval step 126 , and transfers control to rule base module 52 .
  • the rule base module sets up the required configuration and then invokes sieve module 100 , at a configuration step 128 .
  • the configuration data indicate the operations, processes and parameters to be used by the sieve module in including or excluding data provided by sensor modules 104 - 110 during installation of the game. For instance, upon installation, there may be “uninteresting” types of assets, which are unlikely to be used in a cheating scheme (such as video and audio files). Events involving these assets can be sieved before storage.
  • Sieve module 100 then logs the data transmitted by the sensors during installation of the game, at a logging step 130 .
  • the logged data are typically stored in a temporary memory.
  • Upon completion of the installation, the sieve module returns to the rule base module with either a success or a failure indication.
  • the rule base module invokes SWL module 76 to process the logged data, at a SWL invocation step 132 . Based on this processing, the SWL module adds new instances of game assets to TPT 90 , at a table addition step 134 .
  • FIG. 4 is a flow chart that schematically shows details of a method used in assessing threat potentials at step 134 , in accordance with an embodiment of the present invention.
  • SWL module 76 loops over various types of assets that have been predefined within the threat model, at an asset type review step 140 . For each type of assets, the SWL module ranks each asset found in the log that was generated at step 130 .
  • a subroutine implementing an algorithm that may be used at step 140 (written in Visual Basic for Applications (VBA)) is listed below in Appendix A.
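  • The appendix itself is not reproduced here. The Python sketch below illustrates the same kind of ranking under the assumption, consistent with the counter C_A described further below, that an asset's rank within its type is derived from the number of logged event occurrences in which that asset was involved (the log format is illustrative):

    from collections import Counter, defaultdict

    def rank_assets(install_log):
        # install_log: list of (asset_type, asset_id) records produced by the sieve
        # during installation. For each predefined asset type, count the event
        # occurrences per asset and use that count as the asset's rank.
        counts = defaultdict(Counter)
        for asset_type, asset_id in install_log:
            counts[asset_type][asset_id] += 1
        return {asset_type: dict(counter) for asset_type, counter in counts.items()}

    log = [("file", "game.exe"), ("file", "config.ini"), ("file", "game.exe"),
           ("registry_value", r"HKLM\Software\Game\Version")]
    print(rank_assets(log))   # e.g. {'file': {'game.exe': 2, 'config.ini': 1}, ...}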
  • the SWL module performs an additional ranking process, at a special ranking step 142 .
  • Special assets are those asset types that require special treatment in the ranking process. (Examples of special assets include registry keys and folders, whose corresponding asset types are registry values and files, respectively, which are taken into account in the ranking process.) Details of this step are presented below in FIG. 5.
  • the ranking function may also be determined by a single variable, but not simply by the number of event occurrences in which a given special asset was involved.
  • each type of special assets has a corresponding type of assets. The ranking of the special assets is determined in part by the corresponding game assets that they hold.
  • C:/a/b/d is at the bottom of a directory tree.
  • its importance may be determined simply by the number (say X) of the assets that it holds, such as directories and files.
  • Suppose that C:/a/b holds the same number of files as C:/a/b/d (apart from the files that are held in C:/a/b/d), and that C:/a/b does not have any other descendants besides C:/a/b/d. Therefore, C:/a/b holds a total of 2X files.
  • the SWL module will rank C:/a/b/d and C:/a/b as having the same importance, because each one of them “is responsible for” holding the same amount of files (X).
  • C:/a/b has another subdirectory besides C:/a/b/d, i.e., C:/a/b/d has a sibling C:/a/b/e, which holds 5X files.
  • the SWL module will assign C:/a/b/e a measure of importance that is five times higher than that of C:/a/b/d.
  • If the importance of C:/a/b/d is Y, then the importance of C:/a/b/e is 5Y.
  • the cumulative number of files held in C:/a/b is now 7X (X+X+5X), but its importance should still be lower than that of C:/a/b/e.
  • the ranking of the directory takes into account the subdirectory with the maximal number of files over all subdirectories.
  • the ranking of C:/a/b is 2Y (based on the difference 7X − 5X = 2X).
  • Appendix B hereinbelow presents a subroutine, written in Visual Basic for Applications (VBA), that implements a ranking algorithm that may be used at step 142 .
  • SWL module 76 computes the threat potential of each asset (including special assets) at a threat potential computation step 144 .
  • Various formulas may be used to determine the threat potential as a function of rank, as long as the formula returns a valid value, i.e., a probability.
  • the SWL module may set the threat potential for each asset to 1 (one), but these threat potentials may subsequently be reduced by GUL module 78 (as described below with reference to FIG. 7 ).
  • the SWL module then writes the instances of the assets (including special assets) and their respective threat potentials to TPT 90 .
  • FIG. 5 is a flow chart that schematically shows details of the method for ranking special assets carried out at step 142 , in accordance with an embodiment of the present invention.
  • SWL module 76 loops over all of the special asset types, at a type review step 150 .
  • the SWL module counts the total number of the corresponding assets, at an asset counting step 152 .
  • the SWL module counts the total number of files in each directory, down to the bottom of the directory tree.
  • Based on the counts made at step 152, the SWL module then ranks each special asset found in the log, at a ranking step 154.
  • the ranking formula used at this step for a given special asset d is:
  • Rank(d) = α · [ C(d) − MAX( C(s) : s ∈ SUB(d) ) ]
  • where α is a fixed coefficient,
  • C(*) is the cumulative count, made at step 152, of the assets held by a given special asset,
  • MAX(*) is a function that returns the maximum out of a set of numbers, and
  • SUB(*) returns all the descendants at the next generation of a given special asset.
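  • The following sketch applies this ranking rule to a small directory tree and reproduces the C:/a/b example above (the coefficient value and the data structure are illustrative):

    # Each directory maps to (files held directly, list of next-generation subdirectories).
    tree = {
        "C:/a/b":   (1, ["C:/a/b/d", "C:/a/b/e"]),   # holds X files of its own (X = 1)
        "C:/a/b/d": (1, []),                         # holds X files
        "C:/a/b/e": (5, []),                         # holds 5X files
    }

    def cumulative_count(d):
        own, subs = tree[d]
        return own + sum(cumulative_count(s) for s in subs)

    ALPHA = 1.0   # fixed coefficient (illustrative value)

    def rank(d):
        # Rank of a special asset: its cumulative count minus the maximal cumulative
        # count over its next-generation descendants, times the fixed coefficient.
        _, subs = tree[d]
        max_child = max((cumulative_count(s) for s in subs), default=0)
        return ALPHA * (cumulative_count(d) - max_child)

    for d in tree:
        print(d, rank(d))   # C:/a/b -> 2 (2Y), C:/a/b/d -> 1 (Y), C:/a/b/e -> 5 (5Y)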
  • FIG. 6 is a flow chart that schematically illustrates a method for game user learning, in accordance with an embodiment of the present invention.
  • a user of computer 22 uses UI module 58 to request that program 50 learn a game, at a learning selection step 160 .
  • the user specifies that the learning is to take place while the game is being played, without protection against cheating.
  • the UI module presents a dialog window offering the existing game-user profiles for selection by the user, at a profile presentation step 162 .
  • the user selects the desired profile from the list, at a profile selection step 164 .
  • the UI module retrieves the profile and transfers control to rule base module 52, at a profile retrieval step 166.
  • the rule base module sets up the required configuration and then invokes sieve module 100 , at a sieve invocation step 168 .
  • the configuration indicates what events the sieve should monitor (as transmitted by sensor modules 104 - 110 ) and the processes and parameters the transmitted data should include or exclude.
  • the sieve module transfers the data from the sensor modules to knowledge base 66 until the game ends, or until the user quits the learning process, at a data transfer step 170 .
  • Upon completion of step 170, the sieve module returns control to rule base module 52, which then invokes GUL module 78, at a GUL invocation step 172.
  • the GUL adds new asset instances and modifies existing instances with respect to the game in question in knowledge base 66 . Details of step 172 are shown below in FIG. 7 .
  • the GUL module measures metric distances between each pair of assets within each type.
  • FIG. 7 is a flow chart that schematically illustrates a method for adjusting asset threat potentials, carried out by GUL module 78 at step 172 , in accordance with an embodiment of the present invention.
  • Based on the data transferred at step 170, the GUL module adds new instances of threatened assets to TPT table 90 and/or modifies existing instances, in a table modification step 180.
  • the algorithm used at step 170 is similar to that presented in FIG. 4, except that the counter C_A is now configuration-dependent.
  • C_A becomes two-fold, wherein C_A^with and C_A^without respectively represent the number of event occurrences in which a given asset was involved with the game being played and without it. (In cases in which the configuration instructs the sieve module to transmit only command events invoked by the game process itself, C_A^without will accumulate a null value.)
  • the rank is then given by:
  • GUL module 78 creates and modifies statistical results in accordance with statistical requests defined in the knowledge base, at a statistics calculation step 182. Details of this step are shown below in FIG. 8. As part of this step, the GUL module may adjust the threat potentials of the assets from their initial value of 1 to a new value according to the frequency of use of the assets and the stage (cluster) in which each asset is used.
  • the GUL module calculates centralism for each process image (executable) file, at a centralism computation step 184 . Details of this step are shown below in FIG. 9 .
  • the centralism is determined for each image and each user and provides information on how central the game is to the user and the computer while it is being played. In other words, for each known image file in the system, the centralism indicates how often the file operates while the game is running and what is the time proportion between the image processes and the game process overall. Centralism may be defined separately for the launch phase of the game (“launch centralism”), as opposed to the centralism throughout the game.
  • GUL module 78 returns to rule base 52 all the similar pairs of assets, along with the distances between the assets in the pair, at an asset pairing step 186 .
  • the distance is given by the formula:
  • each element ω_i of the weight vector ω is a predefined weight (scalar).
  • Each element D_i of the vector D is given by:
  • f_i(X_j) is a numerical value assigned to a given asset.
  • f_i(X_j) could be the size of a given file X_j, the priority of a given process X_j, or any other predefined arithmetic manipulation on numeric attributes associated with the asset, which is either stored in the knowledge base or calculated based on stored values. If the metric distance between two given assets is lower than a predefined threshold, then the assets are considered to be similar for the purposes of analyzing events and assessing threats.
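  • The formulas themselves are not reproduced in this text. The sketch below assumes the natural reading of the definitions above, namely a weighted combination of per-attribute differences D_i = |f_i(X_j) − f_i(X_k)|, with two assets declared similar when the result falls below a threshold; the attribute functions, weights and threshold are illustrative:

    # Hypothetical attribute functions f_i: numeric values associated with an asset.
    attribute_functions = [
        lambda asset: asset["size"],         # e.g. size of a file
        lambda asset: asset["path_depth"],   # e.g. depth of the file in the directory tree
    ]
    weights = [0.001, 1.0]        # predefined scalar weights (illustrative values)
    SIMILARITY_THRESHOLD = 2.0    # predefined threshold (illustrative value)

    def metric_distance(x_j, x_k):
        # Weighted combination of per-attribute differences between two assets.
        return sum(w * abs(f(x_j) - f(x_k))
                   for w, f in zip(weights, attribute_functions))

    def similar(x_j, x_k):
        return metric_distance(x_j, x_k) < SIMILARITY_THRESHOLD

    x1 = {"size": 1500, "path_depth": 3}   # e.g. "X1.dat"
    x2 = {"size": 1480, "path_depth": 3}   # e.g. "X2.dat"
    print(metric_distance(x1, x2), similar(x1, x2))   # 0.02 True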
  • FIG. 8 is a flow chart that schematically shows details of the method used for updating statistical results at step 182 in game user learning, in accordance with an embodiment of the present invention.
  • Statistical requests refer to variables with stochastic behavior (for example, CPU utilization, network utilization, in-game variables, etc.) Statistical requests may also apply to some variables that are not stochastic in nature, such as the order of events, which allows for learning patterns in the game software.
  • GUL module 78 processes the statistical requests, as noted above, and returns statistical results to knowledge base 66 .
  • GUL module 78 computes and updates the average value of the relevant variable, as well as the corresponding standard deviation and a histogram of the variable.
  • the GUL module updates the average, at an average computation step 190 , using the formula:
  • y_n = y_(n-1) + (x_n − y_(n-1)) / n
  • the GUL module computes the standard deviation, at a deviation computation step 192 , using the formula:
  • GUL module 78 computes the histogram of the variable in question, at a histogram computation step 194 .
  • the histogram is defined as having a fixed number of bins, but the GUL module may add new extrema (i.e., a new minimum or a new maximum), which will result in changes to the ranges of the bins and thus to recalculation of the bin values.
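  • A sketch of these incremental updates is shown below. The average uses the recursion given above; the standard-deviation recursion is not reproduced in this text, so Welford's method is used here as one standard choice, and the bin count and sample data are illustrative:

    class RunningStats:
        def __init__(self, num_bins=8):
            self.n = 0
            self.mean = 0.0
            self.m2 = 0.0                 # running sum of squared deviations (Welford)
            self.num_bins = num_bins      # fixed number of bins
            self.samples = []             # kept only to rebuild bins when extrema change

        def update(self, x):
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n   # y_n = y_(n-1) + (x_n - y_(n-1)) / n
            self.m2 += delta * (x - self.mean)
            self.samples.append(x)

        def std(self):
            return (self.m2 / self.n) ** 0.5 if self.n else 0.0

        def histogram(self):
            # A new extremum changes the bin ranges, so bin values are recalculated.
            lo, hi = min(self.samples), max(self.samples)
            width = (hi - lo) / self.num_bins or 1.0
            bins = [0] * self.num_bins
            for x in self.samples:
                bins[min(int((x - lo) / width), self.num_bins - 1)] += 1
            return bins

    cpu = RunningStats()
    for sample in [12.0, 15.5, 11.2, 40.1, 13.7]:   # e.g. CPU-utilization readings
        cpu.update(sample)
    print(cpu.mean, cpu.std(), cpu.histogram())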
  • FIG. 9 is a flow chart that schematically shows details of the method used in computation of file centralism at step 184 , in accordance with an embodiment of the present invention.
  • GUL module 78 determines the “launch centralism” for each executable file, at a launch centralism computation step 200 .
  • the launch centralism depends on the number of processes that are running at the start of a new cluster in the course of running the game program.
  • the GUL module determines the “throughout centralism” of the file, at a throughout centralism computation step 202 . This type of centralism is based on the processing time of the executable file in question in comparison with the overall game processing time.
  • centralism characteristics of the executable files that are learned by the GUL module are subsequently used in detecting exceptions to the user's habits. Anomalous deviations from normal centralism at both the launch and game processing phases have been found to be a good indicator that cheating may be going on.
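  • A sketch of the “throughout centralism” measure, under the assumption suggested by the description above that it is the ratio of an image's processing time to the game's processing time over the session (the process names and figures are illustrative); the launch variant would analogously consider the processes running at the start of a new cluster:

    # Hypothetical per-session CPU time consumed by each process image while the
    # game was running (seconds; illustrative numbers).
    session_cpu_time = {
        "gunz.exe":      540.0,   # the protected game process
        "voicechat.exe":  60.0,   # normally runs alongside the game
        "unknown.exe":   300.0,   # unusually busy image -> deviation from normal centralism
    }

    def throughout_centralism(image, game_image, cpu_time):
        # Processing time of the image in question compared with the overall
        # game processing time during the session.
        game_time = cpu_time.get(game_image, 0.0)
        return cpu_time.get(image, 0.0) / game_time if game_time else 0.0

    for image in session_cpu_time:
        print(image, round(throughout_centralism(image, "gunz.exe", session_cpu_time), 3))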
  • FIG. 10 is a flow chart that schematically illustrates the operation of inquiry manager 70 , in accordance with an embodiment of the present invention.
  • the inquiry manager manages the process of testing an event against threat maps.
  • the inquiry manager actuates TMBI module 72 , which loops over all relevant threat maps i and computes the likelihood that the event is relevant to each of the maps, at a likelihood computation step 210 .
  • the TMBI module finds the overall likelihood that the current event is a threat, at an overall assessment step 212 .
  • the inquiry manager compares the overall likelihood to predetermined threat and safety thresholds, at a threat classification step 214 . If the overall likelihood is above the threat threshold (“red”), the inquiry manager returns a threat identification to the rule base module, at a reporting step 218 . By the same token, if the overall likelihood is below the safety threshold (“green”), meaning that none of the threat maps has anything in common with the current event, the inquiry manager marks the event as “clean” and returns the control to the rule base module at step 218 .
  • Otherwise, if the overall likelihood falls between the two thresholds, the inquiry manager calls TMSI module 74, at a pseudo-semantic inquiry step 216.
  • the TMSI module performs a semantic analysis of the event in order to decide whether it actually is a threat. Details of this step are shown below in FIG. 13.
  • the inquiry manager returns control (along with the TMSI output) to the rule base module at step 218 .
  • FIG. 11 is a flow chart that schematically illustrates a method for threat identification carried out by TMBI module 72 at step 210 , in accordance with an embodiment of the present invention.
  • the TMBI module loops over all the threat lines whose higher (parent) threat line is null, at a threat map looping step 220.
  • the TMBI module loops over all of the top threat lines in the threat lines hierarchy.
  • the TMBI module calls the “test row” function, which returns a test result for the current threat line, as well as the test results of all descendant threat lines of that threat line.
  • the test row function is invoked top-down recursively. Details of this step are shown below in FIG. 12 .
  • the TMBI module has the test results of all the threat lines in the current threat map. Based on these test results, the TMBI module calculates the likelihood that the present event constitutes a threat in a given threat map, at a likelihood computation step 222 , using the formula:
  • w_j is a predefined weight coefficient for threat line j
  • x_j is the test result for this threat line.
  • Each weight coefficient is determined according to the significance of the threat line it represents. Since not all conditions, upon their fulfillment, contribute equally to the likelihood of the existence of a threat, the weighting makes it possible to set up a “balanced” threat map, rather than just a binary network of predicates.
  • the TMBI module also outputs a list of “lacks” for the tested threat map, at a lack listing step 224 . This list contains the threat maps having negative test results for the current event.
  • a pseudo-code implementation of steps 220 - 224 is listed in Appendix C.
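  • Since the likelihood formula itself is not reproduced in this text, the sketch below assumes a normalized weighted sum of the threat-line test results, and takes the “lacks” to be the lines of the map that tested negative for the event; the line names, weights and results are illustrative:

    def map_likelihood(test_results, line_weights):
        # test_results: threat-line id -> test result x_j (0 when unrelated or refuted).
        # line_weights: threat-line id -> predefined weight coefficient w_j.
        total = sum(line_weights.values())
        if total == 0:
            return 0.0, []
        score = sum(w * test_results.get(j, 0.0) for j, w in line_weights.items()) / total
        lacks = [j for j in line_weights if test_results.get(j, 0.0) <= 0.0]
        return score, lacks

    weights = {"foreign_process": 0.6, "startup_cluster": 0.3, "file_rename": 0.1}
    results = {"foreign_process": 1.0, "startup_cluster": 1.0, "file_rename": 0.0}
    print(map_likelihood(results, weights))   # (0.9, ['file_rename'])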
  • FIG. 12 is a flow chart that schematically shows details of a method for evaluating threat lines that is carried out by TMBI module 72 at step 220 , in accordance with an embodiment of the present invention.
  • Any threat line describes only partially a realization of an event. Therefore, the TMBI module tests each given threat line against the current input event, at an event testing step 230. If there is no relation between the event and the threat line, then the test result of that threat line is set to nil (zero), at a zero setting step 232.
  • Pseudocode implementing steps 230 and 232 is listed in Appendix D.
  • the TMBI module checks the test model of the threat line in question, at a model testing step 234 .
  • the test model defines how to test the event against the given threat line, but it also defines the meaning. If, for example, the test model is “NE”, it means that the test result is positive (TRUE) if the two compared arguments are not equal. It also means that if the result is FALSE, i.e., the two compared arguments are equal, then this outcome refutes the entire branch of the threat map. In other words, during a threat inquiry, the realization of a refuting threat line dismisses its entire sub-tree of threat lines.
  • step 234 determines whether the test model can refute the relationship between the input event and the threat line. If the relation cannot be refuted, then the TMBI module assigns a test result to this threat line that is equal to the test weight of the threat line, at a weight assigning step 236 .
  • the TMBI module assigns a nil value to the test results of all threat lines in the tree below the tested threat line, at a tree setting step 238 .
  • the hierarchy of the threat lines is designed for the purpose of handling refutation of threat lines: As long as a given threat line is not refuted, the test result of that threat line will not have the effect of dismissing its sub-tree. Dismissal occurs only when the threat line is refuted. Pseudocode implementing this step is listed below in Appendix E.
  • the TMBI module repeats the test and refutation routine described above over all child threat lines, at a child looping step 240 .
  • This step provides test results for all threat lines in the tree, for use in the likelihood calculation at step 222 .
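  • A sketch of this recursive “test row” traversal is shown below. The threat-line representation and the event shape are illustrative: a line unrelated to the event scores nil but its children are still tested, a related line whose test is not refuted scores its test weight, and a refuted line dismisses its entire sub-tree:

    def test_row(line, event, results):
        field = line["field"]
        if field not in event:
            # No relation between the event and this threat line: result is nil.
            results[line["id"]] = 0.0
            refuted = False
        else:
            value, ref = event[field], line["value"]
            # The test model defines how to compare, e.g. "NE" is TRUE when the
            # two compared arguments are not equal.
            passed = (value == ref) if line["model"] == "EQ" else (value != ref)
            if passed:
                # Relation not refuted: the result equals the line's test weight.
                results[line["id"]] = line["weight"]
                refuted = False
            else:
                # Refuted: this line and its whole sub-tree are dismissed.
                results[line["id"]] = 0.0
                refuted = True
        for child in line.get("children", []):
            if refuted:
                dismiss(child, results)
            else:
                test_row(child, event, results)

    def dismiss(line, results):
        results[line["id"]] = 0.0
        for child in line.get("children", []):
            dismiss(child, results)

    top = {"id": "proc_foreign", "field": "process", "model": "NE", "value": "gunz.exe",
           "weight": 0.6,
           "children": [{"id": "startup_access", "field": "cluster", "model": "EQ",
                         "value": "startup", "weight": 0.3, "children": []}]}
    results = {}
    test_row(top, {"process": "cheat.exe", "cluster": "startup"}, results)
    print(results)   # {'proc_foreign': 0.6, 'startup_access': 0.3}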
  • FIG. 13 is a flow chart that schematically shows details of a method for pseudo-semantic inquiry carried out by TMSI module 74 at step 216 , in accordance with an embodiment of the present invention.
  • a pseudocode implementation of this method is presented below in Appendix F.
  • the TMSI module searches for maps that are semantically similar to the event under investigation, at a map finding step 250 .
  • the TMSI module collects “rogue maps,” i.e., all the maps for which the computed likelihood of the current event exceeds the above-mentioned safety threshold (referred to as Threshold_GREEN).
  • the map with the highest likelihood is considered to be the best candidate to serve as the basis for building a new map.
  • the TMSI module next computes a hypothetical likelihood for each of the rogue maps using initial parameters, at a likelihood computation step 252 .
  • the hypothetical likelihood is a measure of semantic similarity between an event and a threat map. (A formula for the computation of the semantic similarity is given in Appendix F.)
  • the TMSI module then chooses the rogue maps whose hypothetical likelihoods exceed the threat threshold (Threshold_RED) as candidate maps, at a candidate selection step 254 . According to this criterion, the event that is the subject of the semantic inquiry is classified as a threat on these candidate maps.
  • the TMSI module tests the number of candidate maps that were found, at a candidate checking step 256 . If no such maps were found, the TMSI module updates the parameters of the hypothetical likelihood formula, at a parameter update step 258 . The TMSI module then returns to step 252 in order to re-compute the hypothetical likelihoods, as long as such a parameter update is still possible.
  • Computation step 252 is carried out using the formula:
  • L_hyp(i) = L(i) + [ Σ_(j≠i) L(j) · d_(i,j)^(d_(i,j) − 1) / Σ_j L(j) ] · (1 − L(i))
  • L(i) is the calculated likelihood of threat map i
  • d i,j is the distance between threat map i and threat map j.
  • the distance between two maps is between zero and one, wherein zero is the closest (semantically similar) and one is the most distant. If no further parameter update is possible, it means that no threat has been detected, either directly or semantically.
  • the TMSI module then returns control to the inquiry manager (IM), at a termination step 262 .
  • the TMSI module chooses the best candidate map among them, at a map selection step 260 .
  • the best candidate map is the one that has the largest value of hypothetical likelihood for the current event; the TMSI module then returns control to the inquiry manager at step 262.
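  • A simplified sketch of this flow is shown below (the parameter-update retry of step 258 is omitted). The thresholds, map names, likelihoods and distances are illustrative, and the distance-based weighting is reduced here to (1 − d) rather than the exact term in the formula above:

    THRESHOLD_GREEN = 0.2   # safety threshold (illustrative value)
    THRESHOLD_RED = 0.7     # threat threshold (illustrative value)

    def hypothetical_likelihood(i, likelihood, distance, proximity):
        # L_hyp(i) = L(i) + (weighted spread of neighbouring maps) * (1 - L(i)),
        # where 'proximity' turns a map-to-map distance (0 = closest) into a weight.
        neighbours = sum(likelihood[j] * proximity(distance[i][j])
                         for j in likelihood if j != i)
        total = sum(likelihood.values())
        spread = neighbours / total if total else 0.0
        return likelihood[i] + spread * (1.0 - likelihood[i])

    def semantic_inquiry(likelihood, distance, proximity=lambda d: 1.0 - d):
        # Rogue maps: likelihood of the current event exceeds the safety threshold.
        rogue = [i for i, l in likelihood.items() if l > THRESHOLD_GREEN]
        hyp = {i: hypothetical_likelihood(i, likelihood, distance, proximity) for i in rogue}
        # Candidate maps: hypothetical likelihood exceeds the threat threshold.
        candidates = {i: h for i, h in hyp.items() if h > THRESHOLD_RED}
        if not candidates:
            return None   # no threat detected, either directly or semantically
        return max(candidates, key=candidates.get)   # best candidate map

    likelihood = {"map_rename": 0.6, "map_delete": 0.55, "map_registry": 0.1}
    distance = {"map_rename":   {"map_delete": 0.1, "map_registry": 0.9},
                "map_delete":   {"map_rename": 0.1, "map_registry": 0.9},
                "map_registry": {"map_rename": 0.9, "map_delete": 0.9}}
    print(semantic_inquiry(likelihood, distance))   # map_rename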
  • the following scenarios provide an example of the operation of program 50 in on-line game protection. These scenarios deal with a common type of cheating, which is classified as “Cheating by Exploiting Lack of Secrecy” in the above-mentioned article by Yan and Randell. This method of cheating involves exchange of packets between peers, wherein a participant cheats by inserting, deleting or modifying game events, commands or files that are transmitted over the network.
  • This example is described with reference to an on-line game known as “GunZ—The Duel” (MAIET Entertainment, Korea), but the characteristics of this example are equally applicable to many other games.
  • the main session types of program 50 include game installation session, game/user learning session and protection session for the on-line game. These sessions follow three sub-scenarios of protection: Scenario A—existing complete previous knowledge, Scenario B—existing partial previous knowledge, Scenario C—no previous knowledge exists.
  • the learning processes includes game installation and game/user learning sessions.
  • a game installation session may take place during an installation or an update of the game using a mechanism (such as a daemon) that identifies the installation or update, or by program activation, such as by the installation software itself.
  • the learning session is managed by rule base module 52 .
  • SWL module 76 learns the installation-derived system
  • Game/user learning sessions start with the activation of the online game. These sessions are managed by rule base 52 and carried out by GUL module 78 .
  • Program 50 loads and starts close monitoring of the commands performed by the game and by other processes that are performed on the assets learned during the installation and recorded in TPT 90 . It is possible to filter out in advance certain types of assets that have a low likelihood of being used for cheating (such as video files).
  • Data collection by sensors 104-110, which is activated by rule base module 52 and managed by sieve module 100, includes:
  • GUL module 78 performs a statistical analysis on the data collected over the game sessions and stores the results in status map (SM) 92 .
  • the statistical data collection and analysis are performed on particular environmental variables during the run of the game, such as network utilization, memory performance, and CPU utilization.
  • the GUL module also learns the user's behavior during the run time of the game, including centralism of the game programs (as defined above) with respect to other programs that are normally operated in parallel with the game.
  • the statistical analysis may use methodologies such as histograms, averages and deviations, as explained above.
  • The following are examples of the types of data collected for a given variable (such as variable X1):
  • the GUL module divides the learned data into clusters and deals with each cluster separately.
  • the clustering may relate, for example, to stages during the game, such as “Startup,” “Shutdown,” “Session Load,” and “Session Unload.”
  • the number of clusters is thus defined for each game. For instance, a game may have only one startup cluster and one shutdown cluster, but any number of other clusters in between.
  • Updates may be applied to various parts of knowledge base 66 , such as version update in GL 82 , metadata 88 , SM 92 , TPT 90 and TM 94 .
  • the first protection scenario deals with a situation in which program 50 has complete previous knowledge concerning the assets of the on-line game and their utilization.
  • SWL module 76 and GUL module 78 have completed creation of the appropriate metadata and have populated SM, TPT and TM in knowledge base 66 .
  • the TPT and TM include variables, such as variable X 1 (as in the above example), that are regarded as assets that should be protected against foreign access during the game.
  • sieve module 100 transfers events from sensors 104 - 110 .
  • one of the events is an access to variable X 1 , which is used in the startup of the game by a WIN32 process (and is thus listed in the “startup cluster”).
  • Such an event triggers an investigation by inquiry manager 70 , which then invokes a test by TMBI module 72 .
  • At least one of the threat maps in TM 94 defines a threat comprising initiation of the WIN32 process by a process that is foreign to the game.
  • the TMBI module computes a reasoning score, indicating the likelihood that the threat is real. If at least one likelihood in all the tested threat maps is greater than the danger threshold, rule base 52 determines that a threat has occurred and takes appropriate action, such as notifying the user of computer 22 , and possibly also server 34 and other game participants.
  • Another protection scenario deals with a situation in which program 50 has only partial previous knowledge concerning the assets of the on-line game and their utilization.
  • This sort of scenario may occur when the performance of the SWL and/or GUL module has not been completed or when there is a lack of appropriate metadata or entries in the SM or TPT or a sufficiently reliable TM for the game.
  • the missing information is the replacement of variable X1 (such as data file name X1) with variable X2 (also a data file, such as “X2.dat”), wherein X2 is used for startup of the game but is not included in the original TM or TPT.
  • sieve module 100 transfers events from the sensors.
  • the events include, in this case, an access to variable X 2 by the game program during startup. This occurrence may be repeated over a number of sessions. In one of the sessions, another access of X 2 was also identified, several minutes into the session, but in this case the accessing process was a WIN32 process foreign to the game.
  • From session to session, GUL module 78 records the access to variable X 2 and starts creating a norm for X 2 as part of the learning process. At a certain point the GUL module determines that the metric distance between X 1 and X 2 (as measured by the differences between their locations, names, attributes, process hierarchy, etc.) is small enough to “adopt” X 2 as a legitimate asset of the game. If both X 1 and X 2 are metrically close to each other, and if X 1 is defined as belonging to Cluster A (the cluster of the game startup process), then the GUL module will attribute X 2 to Cluster A (with a certain probability). TMU module 80 will also expand all the relevant threat maps of X 1 to include X 2 as well.
  • the threat map may define a threat as deleting the file, changing its name, its contents or its security definitions.
  • additional maps in TM 94 may indicate that a change in the registry or file security attributes would constitute a threat from the same threat space.
  • TMSI module 74 goes through the threat maps one by one, adding a hypothetical value to each based on the spread of the neighboring maps, as given by the distances between threat maps. (Each threat map in knowledge base 66 has a well-defined distance from all other threat maps, determined by a known quantification formula.) The denser the neighboring map spread, the higher will be the likelihood associated with the hypothetical addition to the threat maps.
  • two or more threat maps that resemble one another may belong to the same threat space.
  • the pseudo-semantic distance between these maps (which does not necessarily adhere to the definition of Cartesian distance) may be small enough that they, together with other similar maps, are considered to belong to the same threat space.
  • inquiry manager module 70 may activate TMSI module 74 to allow for threat mutations and generation of adaptive solutions to developing threats.
  • program 50 may identify anomalies of types that have been predefined within the existing threat space. Examples of such anomalies could include three consecutive deviations from average CPU utilization, each for 30 seconds or more, or network utilization at its maximum value for 90 seconds straight, possibly with 30 seconds of deviant CPU utilization occurring within the 90 seconds.
  • TMSI module 74 then draws upon all the maps that belong to this threat space to perform an investigation of the events that took place around the times of the anomalies. The reasoning and learning functions are performed as in Scenario B. Since the decision thresholds are sensitive to the overall distance to the norm of the variables stored in SM 92, the same investigation by inquiry manager 70 may give different decisions under different environmental conditions for the same event.
  • the principles of the present invention may similarly be applied in prevention of other types of cheating.
  • the techniques described above may be used, mutatis mutandis, in detection of click fraud, in which a person, automated script, or computer program imitates a legitimate user of a Web browser by clicking on a link on a Web page for the purpose of generating a charge per click without having actual interest in the target of the link.
  • a computer learns normal and abnormal patterns of clicks and generates an alert upon detecting a large volume of abnormal behavior.
  • the following code implements ranking of assets within a type, at step 140 (FIG. 4):
  • Each special asset type corresponds to a type of assets that was handled at step 140 .
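By way of illustration, the threshold test described in the protection scenarios above (rule base 52 declares a threat when any tested threat map yields a likelihood above the danger threshold) might be sketched as follows in Python. The map structure, scoring rule, and threshold value in this sketch are illustrative assumptions and are not taken from the actual threat maps of program 50.

```python
# Illustrative sketch only: the threat-map structure, line weights, and
# danger threshold below are assumptions, not values from program 50.

DANGER_THRESHOLD = 0.8  # assumed value

def reasoning_score(threat_map, event):
    """Weighted fraction of the map's threat lines that the event satisfies."""
    total = sum(line["weight"] for line in threat_map["lines"])
    if total == 0:
        return 0.0
    hit = sum(line["weight"] for line in threat_map["lines"]
              if line["predicate"](event))
    return hit / total

def triggered_threats(threat_maps, event):
    """Return the names of maps whose likelihood exceeds the danger threshold."""
    return [m["name"] for m in threat_maps
            if reasoning_score(m, event) > DANGER_THRESHOLD]

# Example event: a startup asset (such as X1) accessed by a process that is
# foreign to the game, as in the first protection scenario above.
event = {"asset": "X1", "cluster": "startup", "invoker_is_game": False}
maps = [{
    "name": "foreign access to startup asset",
    "lines": [
        {"weight": 1.0, "predicate": lambda e: e["cluster"] == "startup"},
        {"weight": 2.0, "predicate": lambda e: not e["invoker_is_game"]},
    ],
}]
print(triggered_threats(maps, event))  # ['foreign access to startup asset']
```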

Abstract

A method for preventing cheating by users of client computers running a network game program includes installing a monitoring program, independent of the network game program, on a group of the client computers so as to detect, using the monitoring program, an anomalous use of an asset of at least one of the client computers that is indicative of an attempt to cheat in the game program. A message is conveyed over a network to a server from each of at least some of the client computers in the group. The message from each such client computer indicates that the monitoring program has been actuated on the client computer. Responsively to the message, each such client computer receives a communication from the server indicating which ones of the client computers have actuated the monitoring program.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation-in-part of U.S. patent application Ser. No. 11/850,223, filed Sep. 5, 2007, which claims the benefit of U.S. Provisional Patent Application 60/842,653, filed Sep. 5, 2006. Both of these related applications are incorporated herein by reference.
  • FIELD OF THE INVENTION
  • The present invention relates generally to computer systems and software, and specifically to detection of cheating in on-line interactions, such as games.
  • BACKGROUND OF THE INVENTION
  • Cheating is defined as an act of lying, deception, fraud, trickery, imposture, or imposition. Cheating is typically employed to create an unfair advantage, often at the expense of others. Fraud is a particular type of cheating, in which a victim is illegally deceived for the personal gain of the perpetrator.
  • Cheating is rampant in on-line games, due to the relatively poor security of most game programs and the permissive atmosphere created by the mutual anonymity of participants in Internet-based games. A wide variety of forms of cheating has developed, as surveyed, for example, by Yan and Randell in “A Systematic Classification of Cheating in Online Games,” Proceedings of the Fourth ACM SIGCOMM workshop on Network and System Support for Games (NetGames '05, Hawthorne, N.Y., 2005), which is incorporated herein by reference. Even when there is no financial stake in the game, a cheater can detract from the experience of other participants and, in some cases, may pose a threat to the secure operation of their computers.
  • Various techniques are known in the art for detection of cheating and assisting participants in distinguishing between cheaters and trustworthy players. For example, U.S. Patent Application Publication 2007/0149279, whose disclosure is incorporated herein by reference, describes an architecture for mitigating and detecting cheating in peer-to-peer (P2P) gaming, using a combination of per-packet access authentication, moving-coordinator, and cheat detection mechanisms.
  • As another example, U.S. Patent Application Publication 2007/0276521, whose disclosure is incorporated herein by reference, describes a method for maintenance of “community integrity” in a gaming network, in which devices interacting with a particular game are monitored. Indicia of the violation of certain rules that define fair game play may be identified, and a user and/or device engaged in illicit game play activity may be identified as a result. Other users in the gaming network may be informed of the particular user's previous illicit game activity.
  • European Patent Application EP 1669115 A1, whose disclosure is incorporated herein by reference, describes a system for conducting a game of chance using a communication network. In this system, the players must have credentials with which to identify themselves remotely. If the players do not have these credentials, they must be issued by a certification authority and certification agent. To request credentials, the player downloads a player agent, which communicates with the certification agent using a secure communication protocol and digital certificate.
  • U.S. Pat. No. 7,169,050, whose disclosure is incorporated herein by reference, describes a system and method for prevention of cheating during online gaming in which a first computer system receives information regarding cheaters from a second computer system. Cheaters identified in this manner are prevented from online gaming on the first computer system. A master database of cheaters resides on one or more master servers, which assemble a master list of cheaters aggregated from individual game servers. In this way, once a cheater is banned on one game server, information identifying the cheater is transmitted to the master databases of the master servers for distribution to the other game servers.
  • A number of anti-cheating software packages are currently available for various on-line games. Examples include PunkBuster™, produced by Even Balance Inc. (Spring, Tex.), and GameGuard, produced by INCA Internet Co. (Seoul, Korea).
  • SUMMARY OF THE INVENTION
  • The embodiments of the present invention that are described hereinbelow provide novel methods for detection and prevention of cheating in computer-based applications. In these embodiments, a program installed on a computer learns normal patterns of use of the assets of the computer and, based on the learned patterns, monitors the computer to detect events that may be indicative of cheating. Such cheating may include both deviant behavior by the user of the computer itself and attempts to compromise the computer carried out by users of other computers. The program implements generic methods of learning and analysis, which are not limited to a specific game or other application.
  • In some embodiments, the program running on the computer communicates with a server, which monitors the activities of a community of participants. When a member of the community wishes to participate in an on-line game, the server verifies that the computer is being monitored by the program and provides an indication to the other members of the community that the user can be trusted not to cheat. The user may similarly receive an indication whether each of the participants in a game is or is not running the monitoring program, and may thus choose to play only with trusted participants.
  • Although the embodiments described hereinbelow relate specifically to cheating in on-line games, the principles of the present invention may similarly be applied in prevention of other types of cheating, such as click fraud.
  • There is therefore provided, in accordance with an embodiment of the present invention, a method for preventing cheating by users of client computers running a network game program. The method includes installing a monitoring program, independent of the network game program, on a group of the client computers so as to detect, using the monitoring program, an anomalous use of an asset of at least one of the client computers that is indicative of an attempt to cheat in the game program. A message is conveyed over a network to a server from each of at least some of the client computers in the group, the message from each such client computer indicating that the monitoring program has been actuated on the client computer. Responsively to the message, a communication is received from the server at the client computer indicating which ones of the client computers have actuated the monitoring program.
  • In one embodiment, the method includes displaying on the client computer a list of the client computers that have actuated the monitoring program, and receiving from a user of the client computer a selection, based on the list, of participants with whom to join in playing the game program. The monitoring program may be configured so as to permit a user of the client computer to deactuate the monitoring program with respect to the game program, and conveying the message may include informing the server when the monitoring program is deactuated.
  • In some embodiments, the method includes running the monitoring program while playing the game program on the client computer so as to detect an anomalous pattern of utilization of assets on the client computer, which is indicative of a threat of cheating in the network game program, and notifying a user of the client computer of the threat. In one embodiment, the method includes sending a notification of the threat over the network to at least one of the server and others of the client computers. Additionally or alternatively, running the monitoring program includes running the network game program on the client computer while detecting use of assets using the monitoring program so as to learn a pattern of normal utilization of the assets, and then detecting the anomalous pattern as a deviation from the normal utilization.
  • There is also provided, in accordance with an embodiment of the present invention, a method for preventing cheating by users of computers running a network game program. The method includes installing a monitoring program, independent of the network game program, on the computer. The network game program is run on the computer while detecting use of assets using the monitoring program so as to learn a pattern of normal utilization of the assets. During a session of the network game program, an anomalous utilization pattern of the assets is detected, which is indicative of a threat of cheating in the network game program, and a notification of the threat is output to a user of the computer.
  • In a disclosed embodiment, detecting the use of the assets includes learning the pattern during at least one of installation of the game program and playing of the game program by the user.
  • In some embodiments, detecting the use of the assets includes applying a threat map based on the use of the assets, and detecting the anomalous utilization pattern includes receiving an event associated with one of the assets, and associating the event with the threat map with a likelihood that is greater than a predetermined threshold. Typically, the threat map relates to a first event, and associating the event with the threat map may include receiving a second event that is not in the first threat map, and associating the second event with the threat map by a process of semantic inquiry. The method may include updating the threat map responsively to the semantic inquiry by identifying a plurality of candidate threat maps, computing a respective hypothetical likelihood that the second event is associated with each of the candidate threat maps, and selecting one of the candidate threat maps for update based on the hypothetical likelihood.
  • Typically, running the network game program includes learning the pattern of the normal utilization using the monitoring program autonomously, independently of any identification of the assets by the user.
  • In a disclosed embodiment, detecting the anomalous utilization pattern includes receiving an event indicative of a deviation from the pattern of normal utilization in the use of at least one asset selected from a group of the assets consisting of CPU utilization, network utilization, files and directories.
  • Additionally or alternatively, running the network game program includes calculating a normal centralism of an executable file during the normal utilization of the assets, and wherein detecting the anomalous utilization pattern includes detecting a deviation from the normal centralism.
  • There is additionally provided, in accordance with an embodiment of the present invention, a computer software product for preventing cheating by users of client computers running a network game program, the product including a computer-readable medium in which program instructions are stored, the instructions including a monitoring program for installation on a group of the client computers independently of the network game program, wherein the instructions cause the client computers to detect, using the monitoring program, an anomalous use of an asset of at least one of the client computers that is indicative of an attempt to cheat in the game program, and
  • wherein the instructions cause the client computers to convey over a network to a server a message from each of at least some of the client computers in the group, the message from each such client computer indicating that the monitoring program has been actuated on the client computer, and responsively to the message, to receive from the server at the client computers a communication indicating which ones of the client computers have actuated the monitoring program.
  • There is further provided, in accordance with an embodiment of the present invention, a computer software product for preventing cheating by users of computers running a network game program, the product including a computer-readable medium in which program instructions are stored, the instructions including a monitoring program for installation on a computer independently of the network game program, wherein the instructions cause the computer, while running the network game program, to detect use of assets using the monitoring program so as to learn a pattern of normal utilization of the assets, and to detect, during a session of the network game program, an anomalous utilization pattern of the assets, which is indicative of a threat of cheating in the network game program, and to output a notification of the threat to a user of the computer.
  • There is moreover provided, in accordance with an embodiment of the present invention, computing apparatus, including an output device and a processor, which is configured to run a network game program, and to receive installation of a monitoring program independently of the network game program, wherein the monitoring program causes the processor to detect an anomalous use of an asset of the computing apparatus that is indicative of an attempt to cheat in the game program, and further causes the processor to convey over a network to a server a message indicating that the monitoring program has been actuated on the computing apparatus, and responsively to the message, to receive from the server a communication identifying other computers that have actuated the monitoring program, and to provide to a user of the computing apparatus, via the output device, a list of users of the other computers identified by the communication.
  • There is furthermore provided, in accordance with an embodiment of the present invention, computing apparatus, including an output device and a processor, which is configured to run a network game program, and to receive installation of a monitoring program independently of the network game program, wherein the monitoring program causes the processor, while running the network game program, to detect use of assets using the monitoring program so as to learn a pattern of normal utilization of the assets, and to detect, during a session of the network game program, an anomalous utilization pattern of the assets, which is indicative of a threat of cheating in the network game program, and to output a notification of the threat via the output device to a user of the computing apparatus.
  • The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic, pictorial illustration of a system for on-line gaming, in accordance with an embodiment of the present invention;
  • FIG. 2 is a block diagram that schematically illustrates elements of computer software for detection of cheating, in accordance with an embodiment of the present invention;
  • FIG. 3 is a flow chart that schematically illustrates a method for learning patterns of asset use by a computer game, in accordance with an embodiment of the present invention;
  • FIG. 4 is a flow chart that schematically illustrates a method for assessing threat potentials, in accordance with an embodiment of the present invention;
  • FIG. 5 is a flow chart that schematically illustrates a method for ranking special assets, in accordance with an embodiment of the present invention;
  • FIG. 6 is a flow chart that schematically illustrates a method for game user learning, in accordance with an embodiment of the present invention;
  • FIG. 7 is a flow chart that schematically illustrates a method for adjusting asset threat potentials, in accordance with an embodiment of the present invention;
  • FIG. 8 is a flow chart that schematically illustrates a method for updating statistical results in game user learning, in accordance with an embodiment of the present invention;
  • FIG. 9 is a flow chart that schematically illustrates a method for computation of centralism of files, in accordance with an embodiment of the present invention;
  • FIG. 10 is a flow chart that schematically illustrates a method for inquiry management, in accordance with an embodiment of the present invention;
  • FIG. 11 is a flow chart that schematically illustrates a method for threat identification, in accordance with an embodiment of the present invention;
  • FIG. 12 is a flow chart that schematically illustrates a method for evaluating threat lines, in accordance with an embodiment of the present invention; and
  • FIG. 13 is a flow chart that schematically illustrates a method for pseudo-semantic inquiry, in accordance with an embodiment of the present invention.
  • DETAILED DESCRIPTION OF EMBODIMENTS System Overview
  • FIG. 1 is a schematic, pictorial illustration of a system 20 for on-line gaming, in accordance with an embodiment of the present invention. Multiple participants 24 play a game together using respective client computers 22, which are connected to communicate during the game via a network 26, such as the Internet. Each computer 22 comprises a processor 28 with suitable input and output devices, such as a video monitor 30 and a joystick 32, as well as an interface to network 26. The game in question may be server-based or peer-to-peer: The principles of the present invention, as presented in detail hereinbelow, are not tied to a specific game or architecture. In fact, the anti-cheating program that is described hereinbelow is capable of learning and monitoring multiple games, of various different types, that may be played using a given computer. Although computers 22 are illustrated in FIG. 1 as personal desktop computers, the architecture and methods described hereinbelow are equally applicable to computing devices of other types, such as servers, as well as dedicated game consoles and mobile computing and communication devices.
  • At least some of client computers 22 are linked to a “trust net,” which is coordinated by a server 34. A client program running on each of these computers, as described hereinbelow, communicates with the server before and during the game. The client program informs the server of the identity of the participant who is using the computer by means of a unique identifier (such as a digital signature), and also informs the server of the game that the participant wishes to play.
  • Prior to the game, the client program learns how the game in question uses the assets of the client computer, such as files, computational power, and communication resources. During the game, the client program monitors the use of these assets. Upon detecting an anomalous event, which may be indicative of an attempt to cheat during the game, the client program typically informs both participant 24 and server 34. Such anomalous events may be indicative of either an attempt by another player to cheat against the participant or an attempt to cheat by the participant himself. The server may keep records of anomalous events and the participants who were involved in them in order to assemble a list of known or suspected cheaters.
  • Typically, the client program on computers 22 is itself secured against tampering. For example, the program may be digitally signed, and server 34 may check the digital signature as part of the authentication process before the game. Participant 24 may choose to inactivate the client program at certain times, but in such cases, server 34 will be informed that the client computer in question is not being monitored and is therefore susceptible to cheating.
  • Server 34 may give participants 24 information regarding which other players are currently members of the trust net, i.e., which players have the client program installed and active on their own computers. For example, as shown in FIG. 1, the server may generate a window 38 on a display 36 listing players who are participating in or wish to participate in the game in question. A secure indicator 40, controlled by the server, marks the names of players who are part of the trust net. If a given player has not installed the client program or has turned it off, the secure indicator will not appear next to his or her name. (Players with a history of cheating may also be marked by the server.) Based on the information in window 38, participant 24 may choose to play only with trust net members.
  • Alternatively, participants have the option of playing with players who are not approved by server 34. In this case, the client program will still monitor the client computer and will alert the participant to anomalous events, which will protect the participant against some types of cheating, but without the more comprehensive protection afforded by the trust net.
  • Software Architecture
  • FIG. 2 is a block diagram that schematically illustrates elements of a program 50 for detection of cheating, in accordance with an embodiment of the present invention. Program 50 includes software modules and data structures that are used in learning and monitoring computer 22. The components of program 50 may be downloaded to computer 22 in electronic form, over a network, for example. Alternatively or additionally, these program components may be furnished and/or stored on tangible computer-readable media, such as optical, magnetic, or electronic storage media.
  • Program 50 implements a cognitive engineering architecture, based on the following principles, inter alia:
      • Autonomous solution—The program generally operates without the need for intervention by operators or system engineers in ongoing operation. A sieve module 100, as described further hereinbelow, is capable of dynamically changing the data collection profile and adaptively building the set of assets to be protected.
      • Self-learning—The program learns both new threats and normal behavior of new games. A rule base module 52 manages self-learning that is carried out by a software game learning (SWL) module 76 and by a game user learning (GUL) module 78, which learns normal user behavior.
      • Self-expansion—A threat map semantic inquiry (TMSI) module 74 recognizes variations on known threats and activates a threat map update (TMU) module 80, which builds a new threat pattern. The program also supports distribution of known threats among the members in the trust net via server 34 (FIG. 1), using a trust net module 54 and a threats warden module 56.
  • Rule base module 52 activates backward and forward chain reasoning algorithms to populate and enrich a full knowledge base 66, including preliminary and conclusive information. The rule base module may continually analyze the knowledge base in order to generate one of the following generic decisions with respect to each detected event:
      • Ignore because there is no threat indication;
      • Request additional information (including input from the user) or data processing (backward chain reasoning), in order to reach a conclusion concerning the significance of the event; or
      • Identify the type of threat(s) and react accordingly.
  • Knowledge base 66 is typically divided between private and public knowledge information. The distinction between those two categories of information reflects the access and retrieval permissions for each category: The private part of the knowledge base contains information that was gathered from a specific computational node, while the public part of the knowledge base contains common information provided and maintained centrally, by server 34, for example. Both private and public knowledge bases can share the same concept domain. Consolidation of the information from both the private and public knowledge bases generates the full concept domain.
  • Program 50 supports the following main session types:
      • Game installation session—Provides the program with knowledge about the set of assets of computer 22 that are to be protected and also about the main executable file of the game. The name of this executable file is needed for recognition of the game in future protected game sessions.
      • Game/user learning session—This chain of sessions provides the program with knowledge about the game and user normal profiles. Program 50 typically monitors several sessions of this type in order to be capable of differentiating between normal and abnormal activity and overall state inside the computer.
      • Protection session—Regular game session, processed under observation and protection by program 50.
  • Top-level modules of program 50 include rule base module 52, sieve module 100, a reasoning module 60 and a learning module 62, which interact with knowledge base 66 and a number of subsidiary modules. The components and functions of the program modules are described below:
  • Rule base module 52 manages the overall program state and the other modules. Functions of module 52 include:
      • Communicating with the user via a user interface (UI) module 58.
      • Starting and initial handshake with sieve module 100.
      • Initiating sessions of different types.
      • Communicating with sieve module 100 in order to change its profile according to the current session type.
      • Communicating with sieve module 100 in order to get updates about the current session state.
      • Declaring detection of anomalies in user activity.
      • Declaring detection of anomalies in overall machine state.
      • Declaring recognition of threats on the basis of previously known activity, corruption of protected game assets, or hampering of normal user activity within a game session.
      • Activation of checking of current activity for semantic proximity to existing threat patterns.
      • Activation of adaptively building the normal activity profile for each user and each game.
      • Activation of adaptively building the mechanisms for recognition of specific (known) threats.
      • Auditing knowledge base 66.
  • Rule base module 52 manages the following main processes: sieve module 100; reasoning module 60 (including an inquiry manager 70, a threat map-based identification (TMBI) module 72, and TMSI module 74); and learning module 62 (including SWL module 76, GUL module 78 and TMU module 80). Module 52 uses information that was gathered during the activity sessions, which is stored in a metadata table 88.
  • Sieve module 100 manages data collection processes using sensor modules 104, 106, 108, 110. Functions of the sieve module include:
      • Monitoring sessions concerned with protected games.
      • Collecting data and activating procedures for data storage in knowledge base 66.
      • Dynamically changing data collection profiles according to process information and sets of assets to be protected.
      • Transferring event data to a Threat Potential Table (TPT) 90 for rough filtering in order to recognize suspicious events.
      • Communicating with rule base 52 in order to synchronize data collection and routing of process information.
  • The sieve module serves as database feeder, configuration manager and session manager. As database feeder, the sieve module converts information from a string representation that is obtained from sensors 104, 106, 108, 110 to a database representation. The database feeder may use a flexible algorithm, based on external scripts, to enable the knowledge base architecture to be updated. The database feeder typically receives input in the form of strings, containing name-value pairs separated by commas. The scripts translate input fields or expressions based on input fields into database rows. The configuration manager drops irrelevant sensor input. It may also use a flexible algorithm, based on external predicate scripts, which may be specified in an XML file. The session manager separates sessions and may divide sessions into clusters. The session manager encapsulates session-related information and provides this information to other modules. The database feeder uses this session information in order to fill in corresponding fields in log data records.
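As a rough illustration of the database-feeder, configuration-manager and session-manager roles described above, the following Python sketch parses comma-separated name-value strings into records, drops records matching an assumed irrelevance predicate, and stamps each surviving record with session context. The field names and the filtering rule are hypothetical; the actual feeder is script-driven as described.

```python
# Sketch of the database-feeder / configuration-manager / session-manager
# roles. Field names, the irrelevance rule, and the session fields are
# assumptions for illustration; the real feeder is driven by external scripts.

def parse_sensor_string(raw: str) -> dict:
    """Database feeder: turn 'name=value,name=value' sensor output into a record."""
    record = {}
    for pair in raw.split(","):
        name, _, value = pair.partition("=")
        record[name.strip()] = value.strip()
    return record

def is_relevant(record: dict) -> bool:
    """Configuration manager: drop 'uninteresting' asset types (assumed rule)."""
    return record.get("asset_type") not in {"video", "audio"}

def feed(raw_lines, session_num: int, cluster_num: int):
    """Session manager: stamp surviving records with session context."""
    for raw in raw_lines:
        record = parse_sensor_string(raw)
        if not is_relevant(record):
            continue
        record["session_num"] = session_num
        record["cluster_num"] = cluster_num
        yield record  # in program 50 this row would be written to the logs

sensor_output = [
    "command=open_file,asset_type=file,asset=X1.dat,process=game.exe",
    "command=open_file,asset_type=audio,asset=theme.mp3,process=game.exe",
]
for row in feed(sensor_output, session_num=3, cluster_num=1):
    print(row)
```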
  • Trust net module 54 supports communication with server 34, as noted above. This module performs the following functions:
      • Reporting to the server on the status of monitoring activities on the computer.
      • Receiving information regarding the other players in the network, particularly those who have activated the anti-cheat program on their computers, as shown above in FIG. 1.
      • Sending information collected by threats warden 56 to inform server 34 of threat activities.
      • Receiving on-line updates of program 50.
  • Threats warden module 56, as noted above, collects information on computer 22 regarding local activities in order to inform server 34 of possible cheating.
  • User interface module 58 permits interaction between participant 24 and program 50. The main functions of this module include:
      • Handling user inputs.
      • Presenting the activity status of the anti-cheat system, including window 38 and notification of potential threats. (In some cases, the participant may be asked to classify a new situation, hitherto unknown to program 50, as normal or abnormal.)
      • Communicating with rule base module 52 in order to support user requests.
  • Learning module 62 contains modules 76-80, as mentioned above, which implement the main learning functionalities of program 50:
      • Software learning (SWL) module 76 collects information about game assets and processes to be monitored and protected within subsequent protected-mode sessions. It builds lists of game assets and processes these lists, including rough filtering, for further threat recognition. (The functions of module 76 are described further hereinbelow with reference to FIGS. 3-5.)
      • Game-user learning (GUL) module 78 collects information about normal user activity during protected sessions and fills in statistical data tables that are used for recognition of abnormal machine states. (The functions of module 78 are described further hereinbelow with reference to FIGS. 6-8.)
      • Threat map update (TMU) module 80 updates parameters of existing threat patterns, based on recent user activity, and outputs the updated threat patterns to knowledge base 66.
  • A service algorithms module 64 performs major mathematical computations used by program 50.
  • Reasoning module 60 divides input data by type and activates modules 70-74 in order to apply the appropriate processing:
      • Inquiry manager 70 coordinates the activity of modules 72 and 74, as well as initiating activity of TMU module 80. (The functions of module 70 are described further hereinbelow with reference to FIG. 10.)
      • Threat map-based identification (TMBI) module 72 checks current input data against known threat patterns. Module 72 uses a self-learning algorithm in order to recognize events and situations that are unknown but suspicious. In certain cases it calls module 74. (The functions of module 72 are described further hereinbelow with reference to FIG. 12.)
      • Threat map semantic inquiry (TMSI) module 74 recognizes variations on known threats. Module 74 uses pseudo-semantic analysis in order to detect semantic proximity of the current situation to known threat patterns. (The functions of module 74 are described further hereinbelow with reference to FIG. 13.)
  • Knowledge base 66 serves as the repository of the relevant data enriched by semantic-type meta-information (data-objects-concepts) collected by the modules of program 50, including relations between the objects and concepts. The knowledge base serves the program modules and enables the program to continually learn the features of operation of the protected game software. The adaptive learning properties of program 50 enable the same backbone software to be used to protect both games for which partial prior knowledge exists and games for which no prior knowledge exists at all.
  • The knowledge base contains the following groups of classes:
      • Logs
      • Threat knowledge
      • Environment (reference) knowledge.
        The knowledge base is built on a reference knowledge group, which contains basic knowledge that is available a priori, learned at the vendor labs, and learned on-site. It relates to protected software assets knowledge classes, which describe all types of assets (components) of the protected system. These assets may include files, directories, devices, registry entries and registry keys, inter alia. The asset classes also describe groupings of these assets, such as file types, file extensions, etc.
  • The logs of the knowledge base contain all incoming information, including information generated both by the computer itself and by components of program 50. Information generated by the computer may include, for example, operating system events “as is.” Logs generated by program 50 may include, for example, program parameters or a log of events specific to a particular protected game. The logs typically include user-level and system-level event logs regarding protected software, as well as overall system information. The logs typically use the following knowledge classes:
      • A game log (GL) 82, which contains game-specific event logs. This log is applicable only when the user has configured program 50 to protect against cheating in a specific game.
      • An event table 84, which contains the overall activity log of the computer system.
      • A task manager information (TaskMan Info) table 86, which contains a log of the machine state.
  • GL 82 may include a protected software user-level events log, which contains information on the events that are specific and unique for the software that is being protected. If the protected software is a multi-user online game, for example, then the events can be of the type: “The user N0001 has entered player group G0001,” or “The user N0002 has left the chat room,” or “My current shots-per-second rate is 26.7.” The ontology frame of this class includes:
      • time_stamp (datetime format)
      • event_sequential_number (long integer format)
      • cluster_number (integer)
      • event_code (integer)
      • event_parameters (multiple, indexed)
  • GL 82 may also include a protected software system-level events log class, which contains a detailed journal of system events based on API commands. Examples of such events may include “change process priority,” “delete directory,” “edit file permissions,” and “start process.” The ontology frame of this class includes:
      • time_stamp (date/time format)—the time point at which the event occurred.
      • sequential_number_global (long integer format)—a sequential index throughout a game session.
      • cluster_num (integer)—an index that classifies the event according to stages in the game session.
      • session_num (integer)—a counter of the number of game sessions.
      • seq_num_within_cluster (integer)—a sequential index throughout a played session that is reset at every cluster start.
      • command (reference to the class Commands)—a link to the referenced command.
      • command_parameter1 (reference to the class OpSysInfo)—may refer to a predefined class, such as Files, Processes, Directories, RegistryKeys, etc., or another subject of a command event.
      • command_parameter2 (reference to the class OpSysInfo)—similar to command_parameter1, but defined only for special operating system operations that require two parameters as subjects. For instance, the operation “Rename” requires two command parameters: one holding the old subject name and the second for holding the new subject name.
      • invoking_process (reference to the class Processes)—the process that executes the operation.
      • origin_by_protected_soft (Boolean)—an indication of whether or not the invoking process originated in the game software.
      • OS_operation_name (reference to the class OS_Operations)—enumerated value indicating the nature of the operation.
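For illustration, the system-level event frame listed above can be represented as a simple record type, as in the Python sketch below. The field names follow the list above; the class references (Commands, Processes, OS_Operations and the like) are simplified here to plain strings, an assumption made only for the sketch.

```python
# Sketch of the protected-software system-level event frame listed above.
# The class references (Commands, Processes, OS_Operations, etc.) are
# simplified to plain strings; that simplification is an assumption.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class SystemLevelEvent:
    time_stamp: datetime
    sequential_number_global: int
    cluster_num: int
    session_num: int
    seq_num_within_cluster: int
    command: str                       # reference to the Commands class
    command_parameter1: str            # file, process, directory, registry key...
    command_parameter2: Optional[str]  # only for operations such as "Rename"
    invoking_process: str              # reference to the Processes class
    origin_by_protected_soft: bool     # did the game software invoke it?
    OS_operation_name: str             # enumerated operation type

event = SystemLevelEvent(
    time_stamp=datetime.now(),
    sequential_number_global=1024,
    cluster_num=1,                     # e.g. the startup cluster
    session_num=7,
    seq_num_within_cluster=12,
    command="delete directory",
    command_parameter1="C:/a/b/d",
    command_parameter2=None,
    invoking_process="unknown.exe",
    origin_by_protected_soft=False,
    OS_operation_name="DeleteDirectory",
)
print(event)
```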
  • Event table 84 may include an overall system information log knowledge class, which contains a detailed journal of system events based on API commands, similar to those in the GL table. The ontology frame of this class includes:
      • time stamp
      • networking data—continuous data related to the network stream.
      • performance data—continuous data related to the resources of the device (such as CPU utilization, memory cache use, etc.)
      • process description—the above data related to each and every process that is running.
  • The threat knowledge group of classes in the knowledge base typically includes the following classes:
      • Threat potential table (TPT) 90 contains the threat potential of specific system asset uses or situations. It provides a rough filter of suspicious activity.
      • The system normal state map (SM) 92 serves as an input table for a rough identification of anomalies.
      • Threat maps (TM) 94 contain all the patterns of threat events, including threat lines and threat elements, which are components of the threat patterns.
        The threat knowledge is used together with a stati normali class, which contains knowledge learned on-site of the behavior of the user and software that is characteristic of clean (threat-free) situations. The combination of these knowledge classes also makes it possible for learning module 62 to automatically learn new threat patterns, acquire new knowledge and enrich the threat knowledge dynamically.
  • TPT 90 contains knowledge about the measure of threat potential of specific elements (structures) of objects or groups of objects or specific situations or ranges of situations. For example, it may contain the threat potential value of an image (executable file) of a process or of a group of APIs, or the threat potential value of a situation in which a specific API is applied to any file in a specific directory. Each instance of this class is a set (collection, un-indexed sequence) of any number of instances of the threat lines class. Since the threat lines that build up the TPT class also build up the threat map (TM) class, the TPT class is a subspace of the TM domain. The TPT class provides a rough representation of the TM class in order to reduce computational cost.
  • The threat lines class defines elemental test conditions. It includes:
      • threat_element (reference to the threat elements class, as explained below).
      • test_value (reference to any relevant value against which the threat element is tested).
      • test_weight (a numerical measure of the significance of a given map line as compared to the rest of the lines of the same map).
      • higher_threat_line (a reference to another line of the same map that is precedent to the current threat line in logical hierarchy).
        The threat lines class is the central tool for defining threats. The data structure of the threat lines class can be used in assembling logical predicates (statements or conditions) in a generic manner, wherein the predicates may refer to any variable in the knowledge base. For example, one threat line could state that the condition x2>y indicates a partial fulfillment of a certain threat, or alternatively, it might indicate the opposite, i.e. that the satisfaction of the condition refutes another predefined threat.
  • The threat elements knowledge class contains the main part of each threat line. A number of different threat lines may contain the same threat element. The frame of a threat element includes:
      • observed_parameter (the parameter tested to define the threat).
      • test_model (describes the test).
  • Threat maps 94 use the threat maps and distances knowledge class, which contains:
      • each map (pattern) of a threat divided into elemental threat lines;
      • the semantic distance (measure of similarity) between any pair of such maps.
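The following Python sketch illustrates one possible composition of the threat-element, threat-line and threat-map classes described above, including a pairwise distance table. The field names follow the text; the evaluation logic, the example predicate and the distance values are assumptions made for illustration.

```python
# Sketch of one possible composition of the threat-element, threat-line and
# threat-map classes. Field names follow the text; the evaluation logic, the
# example predicate, and the distance values are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Optional, Tuple

@dataclass
class ThreatElement:
    observed_parameter: str                  # the parameter tested to define the threat
    test_model: Callable[[Any, Any], bool]   # describes the test (e.g. "greater than")

@dataclass
class ThreatLine:
    threat_element: ThreatElement
    test_value: Any                          # value the element is tested against
    test_weight: float                       # significance relative to the map's other lines
    higher_threat_line: Optional["ThreatLine"] = None  # logical-hierarchy precedent

    def satisfied(self, event: Dict[str, Any]) -> bool:
        observed = event.get(self.threat_element.observed_parameter)
        return self.threat_element.test_model(observed, self.test_value)

@dataclass
class ThreatMap:
    name: str
    lines: List[ThreatLine] = field(default_factory=list)

# "Threat maps and distances": semantic distance between pairs of maps
# (placeholder values).
distances: Dict[Tuple[str, str], float] = {
    ("delete game file", "rename game file"): 0.1,
    ("delete game file", "change registry key"): 0.6,
}

# Example threat line corresponding to a condition such as "x2 > y":
greater_than = ThreatElement("x2", lambda observed, ref: observed > ref)
line = ThreatLine(threat_element=greater_than, test_value=5, test_weight=1.0)
print(line.satisfied({"x2": 9}))   # True: partial fulfillment of the threat
```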
  • Sensor modules 104, 106, 108, 110 gather information regarding the current activity and overall machine state of computer 22. In general the sensor modules are small program modules, which perform the following sorts of functions:
      • System-wide operation sensors collect information about all operating system operations, such as file opening, writing to file, process starting and terminating, registry updating, and so on.
      • Machine state sensors collect information about utilization of machine resources, such as CPU, paging file, and so on.
      • Networking sensors collect information about activity on network 26 and network resource utilization.
        The sensors receive as input internal information from the computer operating system and output data in string representation to the database feeder function of sieve module 100. String representation may also contain metadata, as additional input for the database feeder. A TPT feeder 102 in FIG. 2 represents the operation of TPT 90 in loading information from the sensors into knowledge base 66.
  • The specific sensor modules shown in FIG. 2 include the following:
      • Plug sensor modules 104 gather information about specific activity relevant to a specific game.
      • A commands sensor 106 collects information about all operating system operations.
      • A dashboard sensor module 108 collects information about machine resource utilization.
      • A network sensor module 110 collects information about activity on the network and network resources utilization.
    Detailed Operation of Program Modules
  • FIG. 3 is a flow chart that schematically illustrates a method for learning patterns of asset use by a computer game, in accordance with an embodiment of the present invention. The user submits a request, via UI module 58, for program 50 to learn a new game, at a new game selection step 120. In response to this request, the UI module opens a dialog window asking the user to specify the installation file of the game in question, at a file request step 122. The user provides (or browses for) the full path of the game program, at a file provision step 124. The UI module now retrieves the installation file, at a file retrieval step 126, and transfers control to rule base module 52.
  • The rule base module sets up the required configuration and then invokes sieve module 100, at a configuration step 128. The configuration data indicate the operations, processes and parameters to be used by the sieve module in including or excluding data provided by sensor modules 104-110 during installation of the game. For instance, upon installation, there may be “uninteresting” types of assets, which are unlikely to be used in a cheating scheme (such as video and audio files). Events involving these assets can be sieved before storage.
  • Sieve module 100 then logs the data transmitted by the sensors during installation of the game, at a logging step 130. The logged data are typically stored in a temporary memory. Upon completion of the installation, the sieve module returns to the rule base module with either a success or a failure indication. When the logging was successful, the rule base module invokes SWL module 76 to process the logged data, at a SWL invocation step 132. Based on this processing, the SWL module adds new instances of game assets to TPT 90, at a table addition step 134.
  • FIG. 4 is a flow chart that schematically shows details of a method used in assessing threat potentials at step 134, in accordance with an embodiment of the present invention. SWL module 76 loops over various types of assets that have been predefined within the threat model, at an asset type review step 140. For each type of assets, the SWL module ranks each asset found in the log that was generated at step 130. A subroutine implementing an algorithm that may be used at step 140 (written in Visual Basic for Applications (VBA)) is listed below in Appendix A. The ranking function at step 140 is typically determined by a single variable. For example, if C_A represents a measure of the number of event occurrences in which a given asset was involved, the ranking function at step 140 is simply given by the value of C_A in descending order: Rank(C_A) = C_A.
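The per-type ranking of step 140 can be illustrated by the short Python sketch below, which orders the assets of each type by C_A, the number of logged event occurrences in which the asset was involved. This sketch is an illustration only and is not the VBA subroutine of Appendix A; the asset names in the example are hypothetical.

```python
# Sketch of the per-type asset ranking at step 140: within each asset type,
# assets are ordered by C_A, the number of logged event occurrences in which
# the asset was involved (Rank(C_A) = C_A, descending). Asset names here are
# hypothetical; this is not the Appendix A subroutine.
from collections import Counter

def rank_assets_by_type(event_log):
    """event_log: iterable of (asset_type, asset_name) pairs from the install log."""
    counts_by_type = {}
    for asset_type, asset in event_log:
        counts_by_type.setdefault(asset_type, Counter())[asset] += 1
    return {
        asset_type: counts.most_common()   # [(asset, C_A), ...] in descending order
        for asset_type, counts in counts_by_type.items()
    }

log = [("file", "X1.dat"), ("file", "X1.dat"), ("file", "config.ini"),
       ("registry_value", "HKCU/Software/Game/InstallPath")]
print(rank_assets_by_type(log))
# {'file': [('X1.dat', 2), ('config.ini', 1)],
#  'registry_value': [('HKCU/Software/Game/InstallPath', 1)]}
```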
  • For special asset types, the SWL module performs an additional ranking process, at a special ranking step 142. (Special assets are those asset types that require special treatment in the ranking process. Examples of special assets include registry keys and folders, having corresponding asset type registry values and files, respectively, which are taken into account in the ranking process.) Details of this step are presented below in FIG. 5. For these special asset types, the ranking function may also be determined by a single variable, but not simply by the number of event occurrences in which a given special asset was involved. Typically, each type of special assets has a corresponding type of assets. The ranking of the special assets is determined in part by the corresponding game assets that they hold.
  • One type of special asset is a file directory. Assume, for example, that a given directory, C:/a/b/d, is at the bottom of a directory tree. In this case, its importance may be determined simply by the number (say X) of the assets that it holds, such as directories and files. For the sake of illustration, assume that the parent directory C:/a/b holds the same number of files as C:/a/b/d (apart from the files that are held in C:/a/b/d), and that C:/a/b does not have any other descendants besides C:/a/b/d. Therefore, C:/a/b holds a total of 2X files. In such a case, the SWL module will rank C:/a/b/d and C:/a/b as having the same importance, because each one of them “is responsible for” holding the same amount of files (X).
  • As an alternative example, assume now that C:/a/b has another subdirectory besides C:/a/b/d, i.e., C:/a/b/d has a sibling C:/a/b/e, which holds 5X files. In this case, the SWL module will assign C:/a/b/e a measure of importance that is five times higher than that of C:/a/b/d. In other words, if the importance of C:/a/b/d is Y, then the importance of C:/a/b/e is 5Y. The cumulative number of files held in C:/a/b is now 7X (X+X+5X), but its importance should still be lower than that of C:/a/b/e. Although the cumulative number of files in a directory (including all files in subdirectories) will always be greater or equal to the number of files in any of its subdirectories, the ranking of the directory takes into account the subdirectory with the maximal number of files over all subdirectories. As a result, the ranking of C:/a/b is 2Y (based on the difference 7X−5X). In other words, because another sibling directory has been added to C:/a/b/d, the ranking of the parent directory C:/a/b is raised in comparison to C:/a/b/d. Appendix B hereinbelow presents a subroutine, written in Visual Basic for Applications (VBA), that implements a ranking algorithm that may be used at step 142.
  • SWL module 76 computes the threat potential of each asset (including special assets) at a threat potential computation step 144. Various formulas may be used to determine the threat potential as a function of rank, as long as the formula returns a valid value, i.e., a probability. Initially, in the absence of prior knowledge about the assets (apart from their existence and possibly their distribution within directories), the SWL module may set the threat potential for each asset to 1 (one), but these threat potentials may subsequently be reduced by GUL module 78 (as described below with reference to FIG. 7). The SWL module then writes the instances of the assets (including special assets) and their respective threat potentials to TPT 90.
  • FIG. 5 is a flow chart that schematically shows details of the method for ranking special assets carried out at step 142, in accordance with an embodiment of the present invention. SWL module 76 loops over all of the special asset types, at a type review step 150. For each type of special assets, the SWL module counts the total number of the corresponding assets, at an asset counting step 152. In relation to directories, for example, as described above, the SWL module counts the total number of files in each directory, down to the bottom of the directory tree. Based on the counts made at step 152, the SWL module then ranks each special asset found in the log, at a ranking step 154. The ranking formula used at this step for a given special asset d is:

  • Rank(d) = α × (d.count − MAX(SUB(d).count))
  • wherein α is a fixed coefficient, MAX(*) is a function that returns the maximum out of a set of numbers, and SUB(*) returns all the descendants at the next generation of a given special asset.
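The directory-ranking formula above can be illustrated with the following Python sketch, which reproduces the C:/a/b example from the text (taking X = 10 files and α = 1). Taking the maximum over an empty set of subdirectories as zero is an assumption, and this sketch is not the VBA subroutine of Appendix B.

```python
# Sketch of the special-asset (directory) ranking at step 142, using
# Rank(d) = alpha * (d.count - MAX(SUB(d).count)), where d.count is the
# cumulative file count and SUB(d) are the immediate subdirectories. MAX over
# an empty set of subdirectories is taken as 0 (an assumption). The tree
# reproduces the C:/a/b example from the text with X = 10 files and alpha = 1.
ALPHA = 1.0

children = {
    "C:/a/b":   ["C:/a/b/d", "C:/a/b/e"],
    "C:/a/b/d": [],
    "C:/a/b/e": [],
}
own_files = {"C:/a/b": 10, "C:/a/b/d": 10, "C:/a/b/e": 50}

def cumulative_count(d):
    """Total number of files held in d, including all subdirectories."""
    return own_files[d] + sum(cumulative_count(c) for c in children[d])

def rank(d):
    sub_counts = [cumulative_count(c) for c in children[d]]
    return ALPHA * (cumulative_count(d) - (max(sub_counts) if sub_counts else 0))

for d in children:
    print(d, rank(d))
# C:/a/b   20.0  (7X - 5X = 2X, i.e. 2Y)
# C:/a/b/d 10.0  (X, i.e. Y)
# C:/a/b/e 50.0  (5X, i.e. 5Y)
```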
  • FIG. 6 is a flow chart that schematically illustrates a method for game user learning, in accordance with an embodiment of the present invention. A user of computer 22 uses UI module 58 to request that program 50 learn a game, at a learning selection step 160. The user specifies that the learning is to take place while the game is being played, without protection against cheating. The UI module presents a dialog window offering the existing game-user profiles for selection by the user, at a profile presentation step 162. The user selects the desired profile from the list, at a profile selection step 164.
  • Once the user has selected the profile for the desired game, the UI module retrieves the profile and transfers control to rule base module 52, at a profile retrieval step 166. The rule base module sets up the required configuration and then invokes sieve module 100, at a sieve invocation step 168. The configuration indicates what events the sieve should monitor (as transmitted by sensor modules 104-110) and the processes and parameters the transmitted data should include or exclude. The sieve module transfers the data from the sensor modules to knowledge base 66 until the game ends, or until the user quits the learning process, at a data transfer step 170.
  • When step 170 is completed, the sieve module returns control to rule base module 52, which then invokes GUL module 78, at a GUL invocation step 172. The GUL adds new asset instances and modifies existing instances with respect to the game in question in knowledge base 66. Details of step 172 are shown below in FIG. 7. As part of this step, the GUL module measures metric distances between each pair of assets within each type.
  • FIG. 7 is a flow chart that schematically illustrates a method for adjusting asset threat potentials, carried out by GUL module 78 at step 172, in accordance with an embodiment of the present invention. Based on the data transferred at step 170, the GUL module adds new instances of threatened assets to TPT table 90 and/or modifies existing instances, in a table modification step 180. The algorithm used at step 180 is similar to that presented in FIG. 4, except that the counter C_A is now configuration-dependent.
  • For example, if the configuration set by rule base 52 instructs sieve module 100 to transmit command events invoked by any process, then C_A becomes two-fold, wherein C_A^with and C_A^without respectively represent the number of event occurrences in which a given asset was involved with the game being played and without it. (In cases in which the configuration instructs the sieve module to transmit only command events invoked by the game process itself, C_A^without will accumulate a null value.) The rank is then given by:

  • Rank(C_A) = C_A^with + C_A^without − C_A^with × C_A^without.
  • When C_A^without = 0, this expression simply gives the rank as C_A^with.
  • GUL module 78 creates and modifies statistical results in accordance with statistical requests defined in the knowledge base, at a statistics calculation step 182. Details of this step are shown below in FIG. 8. As part of this step, the GUL module may adjust the threat potentials of the assets from their initial value of 1 to a new value according to the frequency of use of the assets and the stage (cluster) in which each asset is used.
  • The GUL module calculates centralism for each process image (executable) file, at a centralism computation step 184. Details of this step are shown below in FIG. 9. The centralism is determined for each image and each user and provides information on how central the game is to the user and the computer while it is being played. In other words, for each known image file in the system, the centralism indicates how often the file operates while the game is running and what is the time proportion between the image processes and the game process overall. Centralism may be defined separately for the launch phase of the game (“launch centralism”), as opposed to the centralism throughout the game.
  • GUL module 78 returns to rule base 52 all the similar pairs of assets, along with the distances between the assets in the pair, at an asset pairing step 186. The distance is given by the formula:
  • Dist = Mean(ω/Ω, D)
  • wherein
  • Mean(k, S) = Σ_i k_i × S_i,
  • and wherein each element ω_i of the weight vector ω is a predefined weight (scalar) and Ω ≡ Σ_i ω_i. Each element D_i of the vector D is given by:
  • D_i = (f_i²(X_1) − f_i²(X_2)) / Max(f_i(X_1), f_i(X_2))
  • wherein f_i(X_j) is a numerical value assigned to a given asset. For instance, f_i(X_j) could be the size of a given file X_j, the priority of a given process X_j, or any other predefined arithmetic manipulation of numeric attributes associated with the asset, which is either stored in the knowledge base or calculated based on stored values. If the metric distance between two given assets is lower than a predefined threshold, then the assets are considered to be similar for the purposes of analyzing events and assessing threats.
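  • As a non-authoritative sketch of the distance computation above, the following C# fragment evaluates the weighted mean of the per-attribute distances D_i for two assets. The class, method and parameter names are illustrative, and the guard against a zero denominator is an added assumption.
    using System;
    using System.Linq;

    static class AssetDistanceExample
    {
        // f1[i], f2[i]: the numeric attribute values f_i(X1) and f_i(X2);
        // weights[i]: the predefined scalar weights.
        public static double Distance(double[] f1, double[] f2, double[] weights)
        {
            double omega = weights.Sum();                      // Ω = Σ ω_i
            double dist = 0.0;
            for (int i = 0; i < weights.Length; i++)
            {
                double denom = Math.Max(f1[i], f2[i]);
                double d = denom == 0 ? 0 : (f1[i] * f1[i] - f2[i] * f2[i]) / denom;   // D_i
                dist += (weights[i] / omega) * d;              // Mean(ω/Ω, D)
            }
            return dist;
        }
    }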
  • FIG. 8 is a flow chart that schematically shows details of the method used for updating statistical results at step 182 in game user learning, in accordance with an embodiment of the present invention. Statistical requests refer to variables with stochastic behavior (for example, CPU utilization, network utilization, in-game variables, etc.). Statistical requests may also apply to some variables that are not stochastic in nature, such as the order of events, which allows for learning patterns in the game software. GUL module 78 processes the statistical requests, as noted above, and returns statistical results to knowledge base 66.
  • To generate the statistical results, GUL module 78 computes and updates the average value of the relevant variable, as well as the corresponding standard deviation and a histogram of the variable. The GUL module updates the average, at an average computation step 190, using the formula:
  • y_n = y_{n−1} + (x_n − y_{n−1}) / n
  • wherein y_0 ≡ 0, y_{n−1} is the previous sample average, x_n is the new sample, and n is the number of samples. The GUL module computes the standard deviation, at a deviation computation step 192, using the formula:
  • σ_n = sqrt( [ (n − 1) × σ_{n−1}² + (x_n − y_n)(x_n − y_{n−1}) ] / n )
  • wherein σ_0 ≡ 0.
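  • A minimal C# sketch of these incremental updates is given below; the class and member names are illustrative, and the code simply applies the two formulas above rather than reproducing the patented implementation.
    using System;

    sealed class RunningStats
    {
        private double _mean;   // y_n, with y_0 = 0
        private double _var;    // variance, with σ_0 = 0
        private int _n;         // number of samples seen so far

        public void Add(double x)
        {
            _n++;
            double prevMean = _mean;
            _mean = prevMean + (x - prevMean) / _n;                         // y_n
            _var = ((_n - 1) * _var + (x - _mean) * (x - prevMean)) / _n;   // σ_n²
        }

        public double Mean => _mean;
        public double StdDev => Math.Sqrt(_var);
    }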
  • GUL module 78 computes the histogram of the variable in question, at a histogram computation step 194. The histogram is defined as having a fixed number of bins, but the GUL module may add new extrema (i.e., a new minimum or a new maximum), which will result in changes to the ranges of the bins and thus in recalculation of the bin values.
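  • One possible reading of this fixed-bin, rebinning histogram is sketched below in C#. Retaining all samples and recomputing the bins on demand is a simplifying assumption made here for clarity, not necessarily how step 194 is implemented.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    sealed class RebinningHistogram
    {
        private readonly int _bins;
        private readonly List<double> _samples = new List<double>();

        public RebinningHistogram(int bins) { _bins = bins; }

        public void Add(double x) { _samples.Add(x); }

        // Bin counts are recomputed on demand; a new minimum or maximum simply
        // widens the bin ranges, so all bin values change, as described above.
        public int[] Counts()
        {
            var counts = new int[_bins];
            if (_samples.Count == 0) return counts;
            double min = _samples.Min(), max = _samples.Max();
            double width = (max - min) / _bins;
            foreach (double x in _samples)
            {
                int i = width == 0 ? 0 : Math.Min((int)((x - min) / width), _bins - 1);
                counts[i]++;
            }
            return counts;
        }
    }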
  • FIG. 9 is a flow chart that schematically shows details of the method used in computation of file centralism at step 184, in accordance with an embodiment of the present invention. GUL module 78 determines the “launch centralism” for each executable file, at a launch centralism computation step 200. The launch centralism depends on the number of processes that are running at the start of a new cluster in the course of running the game program. The GUL module determines the “throughout centralism” of the file, at a throughout centralism computation step 202. This type of centralism is based on the processing time of the executable file in question in comparison with the overall game processing time.
  • The centralism characteristics of the executable files that are learned by the GUL module are subsequently used in detecting exceptions to the user's habits. Anomalous deviations from normal centralism at both the launch and game processing phases have been found to be a good indicator that cheating may be going on.
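  • The two centralism measures may be illustrated roughly as follows; since the exact normalization is not specified in the description above, the formulas in this C# sketch are assumptions.
    using System;

    static class CentralismExample
    {
        // Launch centralism: fewer concurrent processes at the start of a new
        // cluster suggests the game is more central at launch (assumed form).
        public static double LaunchCentralism(int processesAtClusterStart) =>
            1.0 / Math.Max(1, processesAtClusterStart);

        // Throughout centralism: share of processing time used by a given image
        // relative to the overall game processing time (assumed form).
        public static double ThroughoutCentralism(TimeSpan imageCpuTime, TimeSpan gameCpuTime) =>
            gameCpuTime.TotalSeconds > 0
                ? imageCpuTime.TotalSeconds / gameCpuTime.TotalSeconds
                : 0.0;
    }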
  • FIG. 10 is a flow chart that schematically illustrates the operation of inquiry manager 70, in accordance with an embodiment of the present invention. As noted earlier, the inquiry manager manages the process of testing an event against threat maps. Upon receiving an event, the inquiry manager actuates TMBI module 72, which loops over all relevant threat maps i and computes the likelihood that the event is relevant to each of the maps, at a likelihood computation step 210. Based on the individual likelihoods, the TMBI module finds the overall likelihood that the current event is a threat, at an overall assessment step 212. These steps may be expressed in pseudocode form as follows:
  • Start routine
      /* step 210 */
      For (i = 1, ..., Nmaps)
          Likelihood[i] = TMBI(CurrentEvent, ThreatMap[i],
                               Normometer(TaskmanagerInfo));
      Endfor i;
      /* step 212 */
      LikelihoodOverall = MAX(Likelihood[i], i = 1, ..., Nmaps);

    In the above code, “Normometer” is a predefined function that computes the average distance to the norm of each of the numeric variables in the knowledge base for which a statistical measurement has been obtained.
  • The inquiry manager compares the overall likelihood to predetermined threat and safety thresholds, at a threat classification step 214. If the overall likelihood is above the threat threshold (“red”), the inquiry manager returns a threat identification to the rule base module, at a reporting step 218. By the same token, if the overall likelihood is below the safety threshold (“green”), meaning that none of the threat maps has anything in common with the current event, the inquiry manager marks the event as “clean” and returns control to the rule base module at step 218.
  • On the other hand, if TMBI module 72 finds at least one threat map at step 210 that is close enough to the current event to raise suspicion, but not close enough to assign the event to a threat map, the inquiry manager calls TMSI module 74, at a pseudo-semantic inquiry step 216. The TMSI module performs a semantic analysis of the event in order to decide whether it actually is a threat. Details of this step are shown below in FIG. 13. After the TMSI module finishes this analysis, the inquiry manager returns control (along with the TMSI output) to the rule base module at step 218.
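  • The three-way decision made at steps 214-218 can be summarized in the following C# sketch; the enum, the method name, and the threshold parameters are illustrative stand-ins rather than elements of the embodiment itself.
    enum Verdict { Threat, Clean, SemanticInquiry }

    static class InquiryManagerExample
    {
        // Step 214: compare the overall likelihood to the threat ("red") and
        // safety ("green") thresholds; the gray zone goes to the TMSI (step 216).
        public static Verdict Classify(double overallLikelihood,
                                       double thresholdRed,
                                       double thresholdGreen)
        {
            if (overallLikelihood >= thresholdRed) return Verdict.Threat;      // step 218
            if (overallLikelihood <= thresholdGreen) return Verdict.Clean;     // step 218
            return Verdict.SemanticInquiry;                                    // step 216
        }
    }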
  • FIG. 11 is a flow chart that schematically illustrates a method for threat identification carried out by TMBI module 72 at step 210, in accordance with an embodiment of the present invention. For each threat map, the TMBI module loops over all the threat lines for which the higher threat line is null, at a threat map looping step 220. In other words, the TMBI module loops over all of the top threat lines in the threat-line hierarchy. For each such threat line in turn, the TMBI module calls the “test row” function, which returns a test result for the current threat line, as well as the test results of all descendant threat lines of that threat line. For this purpose, the test row function is invoked top-down recursively. Details of this step are shown below in FIG. 12.
  • Thus, at the conclusion of step 220, the TMBI module has the test results of all the threat lines in the current threat map. Based on these test results, the TMBI module calculates the likelihood that the present event constitutes a threat in a given threat map, at a likelihood computation step 222, using the formula:
  • Likelihood = Σ_{j=1}^{N} (w_j × x_j) / Σ_{j=1}^{N} w_j.
  • Here w_j is a predefined weight coefficient for threat line j, and x_j is the test result for this threat line. Each weight coefficient is determined according to the significance of the corresponding threat line. Since not all conditions, upon their fulfillment, contribute equally to the likelihood that a threat exists, the weighting makes it possible to build a “balanced” threat map rather than just a binary network of predicates. The TMBI module also outputs a list of “lacks” for the tested threat map, at a lack listing step 224. This list contains the threat lines having negative test results for the current event. A pseudo-code implementation of steps 220-224 is listed in Appendix C.
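  • The weighted likelihood of step 222 may be sketched as follows; the ThreatLine record is an illustrative stand-in type, and the method simply evaluates the formula displayed above.
    using System.Collections.Generic;

    record ThreatLine(double TestWeight, double TestResult);

    static class LikelihoodExample
    {
        // Likelihood = Σ (w_j × x_j) / Σ w_j over the threat lines of one map.
        public static double MapLikelihood(IEnumerable<ThreatLine> lines)
        {
            double sumWeights = 0, sumWeighted = 0;
            foreach (var l in lines)
            {
                sumWeights += l.TestWeight;
                sumWeighted += l.TestWeight * l.TestResult;
            }
            return sumWeights > 0 ? sumWeighted / sumWeights : 0.0;
        }
    }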
  • FIG. 12 is a flow chart that schematically shows details of a method for evaluating threat lines that is carried out by TMBI module 72 at step 220, in accordance with an embodiment of the present invention. Any threat line describes only a partial realization of an event. Therefore, the TMBI module tests each given threat line against the current input event, at an event testing step 230. If there is no relation between the event and the threat line, then the test result of that threat line is set to nil (zero), at a zero setting step 232. Pseudocode implementing steps 230 and 232 is listed in Appendix D.
  • If the current input event is related to the threat line, then the TMBI module checks the test model of the threat line in question, at a model testing step 234. Not only does the test model define how to test the event against the given threat line, but it also defines the meaning of the test result. If, for example, the test model is “NE”, it means that the test result is positive (TRUE) if the two compared arguments are not equal. It also means that if the result is FALSE, i.e., the two compared arguments are equal, then this outcome refutes the entire branch of the threat map. In other words, during a threat inquiry, the realization of a refuting threat line dismisses its entire sub-tree of threat lines. Thus, step 234 determines whether the test model can refute the relationship between the input event and the threat line. If the relation cannot be refuted, then the TMBI module assigns a test result to this threat line that is equal to the test weight of the threat line, at a weight assigning step 236.
  • Otherwise (i.e., if the tested threat line has been realized, meaning that the threat line has been observed on the basis of an input event in the course of attempting to refute the relation), the TMBI module assigns a nil value to the test results of all threat lines in the tree below the tested threat line, at a tree setting step 238. Note that the hierarchy of the threat lines is designed for the purpose of handling refutation of threat lines: As long as a given threat line is not refuted, the test result of that threat line will not have the effect of dismissing its sub-tree. Dismissal occurs only when the threat line is refuted. Pseudocode implementing this step is listed below in Appendix E.
  • On the other hand, after determining at step 234 that the test model cannot refute the relation between the current input event and the threat line under test, and assigning the test weight to the test result at step 236, the TMBI module repeats the test and refutation routine described above over all child threat lines, at a child looping step 240. This step provides test results for all threat lines in the tree, for use in the likelihood calculation at step 222.
  • FIG. 13 is a flow chart that schematically shows details of a method for pseudo-semantic inquiry carried out by TMSI module 74 at step 216, in accordance with an embodiment of the present invention. A pseudocode implementation of this method is presented below in Appendix F. Initially, the TMSI module searches for maps that are semantically similar to the event under investigation, at a map finding step 250. Specifically, the TMSI module collects “rogue maps,” i.e., all the maps for which the computed likelihood of the current event exceeds the above-mentioned safety threshold (referred to as Threshold_GREEN). In general, the map with the highest likelihood is considered to be the best candidate to serve as the basis for building a new map.
  • The TMSI module next computes a hypothetical likelihood for each of the rogue maps using initial parameters, at a likelihood computation step 252. The hypothetical likelihood is a measure of semantic similarity between an event and a threat map. (A formula for the computation of the semantic similarity is given in Appendix F.) The TMSI module then chooses the rogue maps whose hypothetical likelihoods exceed the threat threshold (Threshold_RED) as candidate maps, at a candidate selection step 254. According to this criterion, the event that is the subject of the semantic inquiry is classified as a threat on these candidate maps.
  • The TMSI module tests the number of candidate maps that were found, at a candidate checking step 256. If no such maps were found, the TMSI module updates the parameters of the hypothetical likelihood formula, at a parameter update step 258. The TMSI module then returns to step 252 in order to re-compute the hypothetical likelihoods, as long as such a parameter update is still possible. Computation step 252 is carried out using the formula:
  • L_hyp(i) = L(i) + [ Σ_{j≠i} L(j) / d_{i,j} ] / [ Σ_j L(j) ] × (1 − L(i))
  • Here L(i) is the calculated likelihood of threat map i, and d_{i,j} is the distance between threat map i and threat map j. The distance between two maps is between zero and one, wherein zero is the closest (semantically similar) and one is the most distant. If no further parameter update is possible, it means that no threat has been detected, either directly or semantically. The TMSI module then returns control to the inquiry manager (IM), at a termination step 262.
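  • The hypothetical-likelihood computation of step 252 may be sketched in C# as follows, using the formula as reconstructed above; the inverse-distance weighting and the guard against zero distances reflect one reading of the text rather than a verbatim implementation.
    static class SemanticInquiryExample
    {
        // likelihood[j] corresponds to L(j); dist[i, j] corresponds to d_{i,j} in (0, 1].
        public static double HypotheticalLikelihood(int i, double[] likelihood, double[,] dist)
        {
            double total = 0, weighted = 0;
            for (int j = 0; j < likelihood.Length; j++)
            {
                total += likelihood[j];                        // Σ_j L(j)
                if (j != i && dist[i, j] > 0)
                    weighted += likelihood[j] / dist[i, j];    // Σ_{j≠i} L(j) / d_{i,j}
            }
            if (total == 0) return likelihood[i];
            return likelihood[i] + (weighted / total) * (1.0 - likelihood[i]);
        }
    }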
  • Alternatively, if one or more candidate maps are found for the current event at step 256, the TMSI module chooses the best candidate map among them, at a map selection step 260. As noted earlier, the best candidate map is the one that has the largest value of hypothetical likelihood for the current event. The TMSI module then returns control to the inquiry manager at step 262.
  • OPERATIONAL EXAMPLE
  • The following scenarios provide an example of the operation of program 50 in on-line game protection. These scenarios deal with a common type of cheating, which is classified as “Cheating by Exploiting Lack of Secrecy” in the above-mentioned article by Yan and Randell. This method of cheating involves exchange of packets between peers, wherein a participant cheats by inserting, deleting or modifying game events, commands or files that are transmitted over the network. The example is described with reference to an on-line game known as “GunZ—The Duel” (MAIET Entertainment, Korea), but the characteristics of this example are equally applicable to many other games.
  • As explained above, the main session types of program 50 include a game installation session, a game/user learning session, and a protection session for the on-line game. These sessions follow three sub-scenarios of protection: Scenario A, in which complete previous knowledge exists; Scenario B, in which partial previous knowledge exists; and Scenario C, in which no previous knowledge exists.
  • The learning process includes game installation and game/user learning sessions. A game installation session may take place during an installation or an update of the game using a mechanism (such as a daemon) that identifies the installation or update, or by program activation, such as by the installation software itself. The learning session is managed by rule base module 52. SWL module 76 learns the installation-derived system assets.
  • After installation, game/user learning sessions start with the activation of the online game. These sessions are managed by rule base 52 and carried out by GUL module 78. Program 50 loads and starts close monitoring of the commands performed by the game and by other processes that are performed on the assets learned during the installation and recorded in TPT 90. It is possible to filter out in advance certain types of assets that have a low likelihood of being used for cheating (such as video files). Data collection by sensors 104-110, which is activated by rule base module 52 and managed by sieve module 100, includes:
      • a. The image name (i.e., the executable file that performs a given process).
      • b. The action name (such as deletion, renaming, loading, process creation, attribute changing, etc.)
      • c. The parameter (file, directory, registry key, process, etc.), for instance: “C:\Program Files\Gunz\v74\X1.dat.”
      • d. The command sequence number, i.e., the order of the command in relation to other commands in the context of the game. (A hypothetical record for these four fields is sketched after this list.)
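  • The four collected items map naturally onto a simple record; the C# below is purely illustrative, and the field names are placeholders rather than names used in the description above.
    record SensedCommand(
        string ImageName,       // (a) executable file that performs the process
        string ActionName,      // (b) deletion, renaming, loading, process creation, ...
        string Parameter,       // (c) file, directory, registry key, process, ...
        long SequenceNumber);   // (d) order of the command relative to other commands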
  • GUL module 78 performs a statistical analysis on the data collected over the game sessions and stores the results in status map (SM) 92. The statistical data collection and analysis are performed on particular environmental variables during the run of the game, such as network utilization, memory performance, and CPU utilization. The GUL module also learns the user's behavior during the run time of the game, including centralism of the game programs (as defined above) with respect to other programs that are normally operated in parallel with the game. The statistical analysis may use methodologies such as histograms, averages and deviations, as explained above.
  • The following are examples of the types of data collected for a given variable (such as variable X1):
      • Access events belonging to clusters of events.
      • Average of X1 access events per cluster.
      • Sequence number of commands containing X1 access events.
      • Concurrently running programs.
        In addition, the GUL module calculates and updates the aforementioned environmental variables in relation to “normal” environmental variables.
  • The GUL module divides the learned data into clusters and deals with each cluster separately. The clustering may relate, for example, to stages during the game, such as “Startup,” “Shutdown,” “Session Load,” and “Session Unload.” The number of clusters is thus defined for each game. For instance, a game may have only one startup cluster and one shutdown cluster, but any number of other clusters in between.
  • As part of the learning process carried out by the GUL module, new values of variables will be added and existing ones will be updated based on the behavior of the game program. Updates may be applied to various parts of knowledge base 66, such as version update in GL 82, metadata 88, SM 92, TPT 90 and TM 94.
  • The first protection scenario (Scenario A, as mentioned above) deals with a situation in which program 50 has complete previous knowledge concerning the assets of the on-line game and their utilization. In this scenario, in other words, SWL module 76 and GUL module 78 have completed creation of the appropriate metadata and have populated SM, TPT and TM in knowledge base 66. The TPT and TM include variables, such as variable X1 (as in the above example), that are regarded as assets that should be protected against foreign access during the game.
  • During the game, sieve module 100 transfers events from sensors 104-110. We assume, for example, that one of the events is an access to variable X1, which is used in the startup of the game by a WIN32 process (and is thus listed in the “startup cluster”). Such an event triggers an investigation by inquiry manager 70, which then invokes a test by TMBI module 72. At least one of the threat maps in TM 94 defines a threat comprising initiation of the WIN32 process by a process that is foreign to the game. The TMBI module computes a reasoning score, indicating the likelihood that the threat is real. If at least one likelihood in all the tested threat maps is greater than the danger threshold, rule base 52 determines that a threat has occurred and takes appropriate action, such as notifying the user of computer 22, and possibly also server 34 and other game participants.
  • Another protection scenario (Scenario B) deals with a situation in which program 50 has only partial previous knowledge concerning the assets of the on-line game and their utilization. This sort of scenario may occur when the performance of the SWL and/or GUL module has not been completed or when there is a lack of appropriate metadata or entries in the SM or TPT or a sufficiently reliable TM for the game. In this case, let us assume, for example, that the missing information is the replacement of variable X1 (such as data file name X1) with variable X2 (also a data file, such as “X2.dat”), wherein X2 is used for startup of the game but is not included in the original TM or TPT.
  • Again, during the game, sieve module 100 transfers events from the sensors. The events include, in this case, an access to variable X2 by the game program during startup. This occurrence may be repeated over a number of sessions. In one of the sessions, another access to X2 is also identified, several minutes into the session, but in this case the accessing process is a WIN32 process foreign to the game.
  • From session to session, GUL module 78 records the access to variable X2 and starts creating a norm for X2 as part of the learning process. At a certain point the GUL module determines that the metric distance between X1 and X2 (as measured by the differences between their locations, names, attributes, process hierarchy, etc.) is small enough to “adopt” X2 as a legitimate asset of the game. If both X1 and X2 are metrically close to each other, and if X1 is defined as belonging to Cluster A (the cluster of the game startup process), then the GUL module will attribute X2 to Cluster A (with a certain probability). TMU module 80 will also expand all the relevant threat maps of X1 to include X2 as well.
  • Prior to the above learning process, access to X2 would not have been classified as a threat. Subsequently, however, the recorded access to variable X2 in place of variable X1 is attributed to the startup of the game, and access to X2 will trigger an investigation by inquiry manager 70. Thus, if X2 is accessed by a foreign WIN32 process while the game is in progress, after X2 has been classified as an asset of the game, the inquiry manager will begin an investigation. If X2 replaced X1 identically, then the situation is the same as in Scenario A. On the other hand, if some of the attributes of X2 differ from those of X1, but the related threat map is still partly fulfilled, then TMSI module 74 will complete the threat map for X2 by a pseudo-semantic inquiry.
  • In this inquiry, it is first assumed that there is more than one threat map in TM 94 or alternatively, that the map describing the threat to X1 has several variants, such as additional files in the same “hit zone” or additional actions beyond the one defining the threat. For instance, the threat map may define a threat as deleting the file, changing its name, its contents or its security definitions. Also, additional maps in TM 94 may indicate that a change in the registry or file security attributes would constitute a threat from the same threat space. When there is a partial match of an event to the map describing the attack, but at the same time certain (generally lesser) matches to other maps in the same threat space, TMSI module 74 suggests a hypothetical addition to each map, based on the density of the threat maps around it.
  • If no threat has been identified after invoking TMBI module 72, TMSI module 74 goes through map by map, adding a hypothetical value to each based on the spread of the neighboring maps, as given by the distances between threat maps. (Each threat map in knowledge base 66 has a well-defined distance from all other threat maps, determined by a known quantification formula.) The denser the neighboring map spread, the higher will be the likelihood associated with the hypothetical addition to the threat maps.
  • TMBI module 72 may declare a threat when the likelihood value of at least one threat map has crossed a certain threshold. Alternatively, program 50 may be configured so that a threat will be declared only upon satisfaction of a more complex condition, which takes into account the entirety of the new array of hypothetical likelihoods. This sort of condition can be defined heuristically. For example, assume an event has “passed” the filter of TPT 90 and that there are several maps {M_1, . . . , M_n} in knowledge base 66. For each threat map, TMBI module 72 calculates the likelihood that a given event constitutes a threat. Even if none of the individual threat map likelihoods has passed the applicable threshold, it may be that some of the likelihoods have crossed the safety threshold, meaning that the possibility of a threat due to the event in question cannot be entirely ruled out.
  • For example, two or more threat maps that resemble one another may belong to the same threat space. The pseudo-semantic distance between these maps (which does not necessarily adhere to the definition of Cartesian distance) may be small enough that they, together with other similar maps, are considered to belong to the same threat space. In such “gray” situations, inquiry manager module 70 may activate TMSI module 74 to allow for threat mutations and generation of adaptive solutions to developing threats.
  • The third protection scenario (Scenario C) deals with a situation in which program 50 has no previous knowledge whatsoever concerning the assets of the online game and their utilization. In this case, it is assumed that there has been no valid run of SWL module 76 or GUL module 78, and there are no usable entries in metadata 88, SM 92, TPT 90 or TM 94. Again, GUL module 78 records actions by the game program and the use of assets during the game. After several sessions, the GUL module creates normal statistical data for each cluster, including parameters populating the modules of knowledge base 66.
  • At a certain point, program 50 may identify anomalies of types that have been predefined within the existing threat space. Examples of such anomalies could include three consecutive deviations from average CPU utilization, each for 30 seconds or more, or network utilization at its maximum value for 90 seconds straight, possibly with 30 seconds of deviant CPU utilization occurring within the 90 seconds. TMBI module 72 then draws upon all the maps that belong to this threat space to perform an investigation of the events that took place around the times of the anomalies. The reasoning and learning functions are performed as in Scenario B. Since the decision thresholds are sensitive to the overall distance to the norm of the variables stored in SM 92, the same investigation by inquiry manager 70 may give different decisions under different environmental conditions for the same event.
  • Although the embodiments described hereinabove relate specifically to cheating in on-line games, the principles of the present invention may similarly be applied in prevention of other types of cheating. For example, the techniques described above may be used, mutatis mutandis, in detection of click fraud, in which a person, automated script, or computer program imitates a legitimate user of a Web browser by clicking on a link on a Web page for the purpose of generating a charge per click without having actual interest in the target of the link. For this purpose, a computer learns normal and abnormal patterns of clicks and generates an alert upon detecting a large volume of abnormal behavior.
  • It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
  • APPENDIX A C# Subroutine
  • The following code implements ranking of assets within a type, at step 140 (FIG. 4):
  • // Assumes the enclosing class declares: DataSet _DBSet; string TableName;
    // double power, FractionEffect, temp; int j; TextWriter Wr.
    foreach (DataRow row in _DBSet.Tables[TableName].Rows)
    {
        if (!(row["name"] is DBNull))
        {
            if (row["CountWithout"] is DBNull)
                row["CountWithout"] = 0;

            // Fraction of occurrences without the game, damped by the "with game" count.
            row["Fraction"] = (double)(int)row["CountWithout"] /
                              Math.Pow((double)(int)row["CountWith"], power);

            // Scale factor derived from the fraction.
            temp = Math.Exp(FractionEffect * (double)(float)row["Fraction"]);

            // Rank: log of the "with game" count, scaled by the fraction-derived factor.
            row["Rank"] = Math.Log((double)(int)row["CountWith"]) * temp;
        }
    }

    // Write out the table sorted by descending rank.
    DataView View = _DBSet.Tables[TableName].DefaultView;
    View.Sort = "Rank desc";
    foreach (DataRowView tempRowView in View)
    {
        for (j = 0; j < _DBSet.Tables[TableName].Columns.Count; j++)
        {
            if (!(tempRowView.Row[j] is DBNull))
                Wr.Write("\t " + tempRowView.Row[j].ToString());
            else
                Wr.Write("\t ");
        }
        Wr.WriteLine();
    }
  • APPENDIX B C# Subroutine
  • The following code implements ranking of assets of a special type, at step 142 (FIG. 4). Each special asset type corresponds to a type of assets that was handled at step 140.
  • // Assumes the enclosing scope declares: DataView View; TextWriter Wr;
    // string[,] UniqueFolder; int[,] Occ_Count; float[,] Dir_Rank;
    // int j, k, l, m, tabs.
    if (TableName == "FileNames")
    {
        string Path1;
        string FolderName;
        string RootDirectory;
        m = 0;
        foreach (DataRowView tempRowView in View)
        {
            if (!(tempRowView.Row["name"] is DBNull) &&
                ((string)tempRowView.Row["name"] != ""))
            {
                Path1 = (string)tempRowView.Row["name"];
                FolderName = Path.GetDirectoryName(Path1);
                RootDirectory = Path.GetPathRoot(Path1);
                j = 0;
                // Walk up the directory tree, counting how often each folder occurs.
                while ((FolderName != null) && (FolderName != RootDirectory))
                {
                    for (l = 0; l < m; l++)
                    {
                        for (k = 0; k < 10; k++)
                        {
                            if (UniqueFolder[l, k] == FolderName)
                            {
                                Occ_Count[l, k]++;
                                goto next_folder;
                            }
                        }
                    }
                    UniqueFolder[m, j] = FolderName;
                    Occ_Count[m, j] = 1;
                next_folder:
                    Path1 = FolderName;
                    FolderName = Path.GetDirectoryName(Path1);
                    j++;
                }
            }
            m++;
        }

        // Write out the folder names and occurrence counts.
        m = 0;
        while (UniqueFolder[m, 9] != null)
        {
            Wr.WriteLine();
            tabs = 0;
            for (j = 9; j >= 0; --j)
                if (UniqueFolder[m, j] != null)
                    Wr.Write(UniqueFolder[m, j] + "\t");
                else
                    tabs++;
            for (j = 1; j <= tabs; j++)
                Wr.Write("\t");
            Wr.Write("\t");
            for (j = 9; j >= 0; --j)
                if (Occ_Count[m, j] > 0)
                    Wr.Write(Occ_Count[m, j] + "\t");
            m++;
        }
        Wr.WriteLine();
    }

    // Compute and write a per-directory rank based on the occurrence counts.
    m = 0;
    foreach (DataRowView tempRowView in View)
    {
        for (j = 0; j < 10; j++)
        {
            if (j == 0)
                Dir_Rank[m, j] = (1.0F - (float)tempRowView["Fraction"]) *
                                 (float)(int)tempRowView["CountWith"] *
                                 (float)(int)Occ_Count[m, j];
            else
                Dir_Rank[m, j] = (1.0F - (float)tempRowView["Fraction"]) *
                                 (float)(int)tempRowView["CountWith"] *
                                 (float)(int)(Occ_Count[m, j] - Occ_Count[m, j - 1]);
            if (Dir_Rank[m, j] > 0.0F)
            {
                Wr.Write(UniqueFolder[m, j] + "\t" + Dir_Rank[m, j]);
                Wr.WriteLine();
            }
        }
        m++;
    }
  • APPENDIX C Pseudo-Code for TMBI
  • The following is a pseudocode listing that implements the method shown in steps 220-224 (FIG. 11):
  • { /********* Start TMBI **********/
        For (x = each ThreatLine in a given ThreatMap)
        {
            if (higher_threat_element equals Null)
            {
                test_result = TestRow(x, true);
            } /* endif */
        } /* endfor */

        SumOfResults = 0.00
        SumOfWeights = 0.00
        Likelihood = 0.00

        For (x = each ThreatLine)
        {
            if (x.test_result is Null)
            {
                report error;
                exit( );
            }
            else if (x.test_model ≠ "NOT_EQUAL")
            {
                SumOfWeights = SumOfWeights + x.test_weight;
            }
            if (x.test_result == 0.0)
            {
                /* LoL is an array of objects that holds the List of Lacks */
                append x to LoL;
            }
            else if (x.test_result > 0.0)
            {
                if (x.test_model == "NOT_EQUAL")
                {
                    mark "x is refuting"
                }
                else
                {
                    SumOfResults = SumOfResults + x.test_result;
                }
            }
        } /* endfor x */

        if (SumOfWeights > 0.0)
        {
            Likelihood = SumOfResults / SumOfWeights;
        }
        else
        {
            Write "Internal Error: Sum of Weights equals zero";
            Exit("Internal Error");
        }
    } /********** Finish TMBI ***********/
  • APPENDIX D Pseudocode for Threat Line Evaluation
  • The following is a pseudocode implementation of the process depicted in steps 230-232:
  • For (each element of the Input table)
    { /* start for-loop 1 */
        if (Comparison1(input_observed_param, observed_param)
            && Comparison2(input_value, test_value))
        {
            test_result = test_weight;
            if (test_result == 0)
            {
                test_result = -2.0
            }
            break;
        }
    } /* end for-loop 1 */
  • APPENDIX E Pseudocode for Zeroing Test Results
  • The following pseudocode implements the process depicted in step 238:
  • For (each threat_line in
    current_row.lower_threat_elements)
    { /* start for-loop 1 */
    if (threat_line.test_result > 0 &&
    threat_line.test_model == “NOT_EQUAL” )
    { CollapseTree(next_threat_line); }
    else
    { threat_line.test_result = TestRow(threat_line,
    TRUE); }
    } /* end for-loop 1 */
  • APPENDIX F Pseudo-Semantic Inquiry
  • The following pseudocode implements the method shown in FIG. 13:
  • /* STEP 250 */
    {
        λ = [initial parameter depending on the project]
        Iter = 0
        Do While (Iter < MaximalNumberOfIterations)
        {
            For i = 1, . . . , number-of-maps
            { /* start loop 1 */
                If likelihood[i] > threshold_GREEN
                {
                    /* STEP 252 */
                    likelihyp[i] = likelihood[i] +
                        ( Σ_{k≠i} likelihood[k] × exp(−dist²(i, k)) ) /
                        ( Σ_{k≠i} likelihood[k] ) × (λ − likelihood[i])

                    /* STEP 254 */
                    If likelihyp[i] >= threshold_RED
                    {
                        Append {i, likelihyp[i]} to {cand_list}
                    } /* end if */
                } /* end if */
            } /* End loop 1 */

            /* STEP 256 */
            If length(cand_list) == 0
            {
                /* STEP 258 */
                λ = [Iteration formula depending on the project]
                Iter++
            }
            Else
            {
                /* STEP 260 */
                imax = −1
                MaximalLikelihood = 0
                For i = 1, . . . , length(cand_list)
                { /* start loop 2 */
                    If (cand_list.likelihyp[i] > MaximalLikelihood)
                    {
                        imax = i
                        MaximalLikelihood = cand_list.likelihyp[i]
                    }
                } /* end loop 2 */
                Store {imax, MaximalLikelihood} to pass to RuleBase
                for building a new ThreatMap.
                break Do-loop
            }
        } /* end do-loop */
        /* STEP 262 */
        Return control to the Inquiry Manager (IM).
    } /* end module */

Claims (19)

1. A method for preventing cheating by users of client computers running a network game program, the method comprising:
installing a monitoring program, independent of the network game program, on a group of the client computers so as to detect, using the monitoring program, an anomalous use of an asset of at least one of the client computers that is indicative of an attempt to cheat in the game program;
conveying over a network to a server a message from each of at least some of the client computers in the group, the message from each such client computer indicating that the monitoring program has been actuated on the client computer; and
responsively to the message, receiving from the server at the client computer a communication indicating which ones of the client computers have actuated the monitoring program.
2. The method according to claim 1, and comprising displaying on the client computer a list of the client computers that have actuated the monitoring program, and receiving from a user of the client computer a selection, based on the list, of participants with whom to join in playing the game program.
3. The method according to claim 1, wherein the monitoring program is configured so as to permit a user of the client computer to deactuate the monitoring program with respect to the game program, and wherein conveying the message comprises informing the server when the monitoring program is deactuated.
4. The method according to claim 1, and comprising running the monitoring program while playing the game program on the client computer so as to detect an anomalous pattern of utilization of assets on the client computer, which is indicative of a threat of cheating in the network game program, and notifying a user of the client computer of the threat.
5. The method according to claim 4, and comprising sending a notification of the threat over the network to at least one of the server and others of the client computers.
6. The method according to claim 4, wherein running the monitoring program comprises running the network game program on the client computer while detecting use of assets using the monitoring program so as to learn a pattern of normal utilization of the assets, and then detecting the anomalous pattern as a deviation from the normal utilization.
7. A method for preventing cheating by users of computers running a network game program, the method comprising:
installing a monitoring program, independent of the network game program, on the computer;
running the network game program on the computer while detecting use of assets using the monitoring program so as to learn a pattern of normal utilization of the assets;
during a session of the network game program, detecting an anomalous utilization pattern of the assets, which is indicative of a threat of cheating in the network game program; and
outputting a notification of the threat to a user of the computer.
8. The method according to claim 7, wherein detecting the use of the assets comprises learning the pattern during at least one of installation of the game program and playing of the game program by the user.
9. The method according to claim 7, wherein detecting the use of the assets comprises applying a threat map based on the use of the assets, and wherein detecting the anomalous utilization pattern comprises receiving an event associated with one of the assets, and associating the event with the threat map with a likelihood that is greater than a predetermined threshold.
10. The method according to claim 9, wherein the threat map relates to a first event, and wherein associating the event with the threat map comprises receiving a second event that is not in the first threat map, and associating the second event with the threat map by a process of semantic inquiry.
11. The method according to claim 10, and comprising updating the threat map responsively to the semantic inquiry.
12. The method according to claim 11, wherein updating the threat map comprises identifying a plurality of candidate threat maps, computing a respective hypothetical likelihood that the second event is associated with each of the candidate threat maps, and selecting one of the candidate threat maps for update based on the hypothetical likelihood.
13. The method according to claim 7, wherein running the network game program comprises learning the pattern of the normal utilization using the monitoring program autonomously, independently of any identification of the assets by the user.
14. The method according to claim 7, wherein detecting the anomalous utilization pattern comprises receiving an event indicative of a deviation from the pattern of normal utilization in the use of at least one asset selected from a group of the assets consisting of CPU utilization, network utilization, files and directories.
15. The method according to claim 7, wherein running the network game program comprises calculating a normal centralism of an executable file during the normal utilization of the assets, and wherein detecting the anomalous utilization pattern comprises detecting a deviation from the normal centralism.
16. A computer software product for preventing cheating by users of client computers running a network game program, the product comprising a computer-readable medium in which program instructions are stored, the instructions comprising a monitoring program for installation on a group of the client computers independently of the network game program, wherein the instructions cause the client computers to detect, using the monitoring program, an anomalous use of an asset of at least one of the client computers that is indicative of an attempt to cheat in the game program, and
wherein the instructions cause the client computers to convey over a network to a server a message from each of at least some of the client computers in the group, the message from each such client computer indicating that the monitoring program has been actuated on the client computer, and responsively to the message, to receive from the server at the client computers a communication indicating which ones of the client computers have actuated the monitoring program.
17. A computer software product for preventing cheating by users of computers running a network game program, the product comprising a computer-readable medium in which program instructions are stored, the instructions comprising a monitoring program for installation on a computer independently of the network game program, wherein the instructions cause the computer, while running the network game program, to detect use of assets using the monitoring program so as to learn a pattern of normal utilization of the assets, and to detect, during a session of the network game program, an anomalous utilization pattern of the assets, which is indicative of a threat of cheating in the network game program, and to output a notification of the threat to a user of the computer.
18. Computing apparatus, comprising:
an output device; and
a processor, which is configured to run a network game program, and to receive installation of a monitoring program independently of the network game program, wherein the monitoring program causes the processor to detect an anomalous use of an asset of the computing apparatus that is indicative of an attempt to cheat in the game program, and further causes the processor to convey over a network to a server a message indicating that the monitoring program has been actuated on the computing apparatus, and responsively to the message, to receive from the server a communication identifying other computers that have actuated the monitoring program, and to provide to a user of the computing apparatus, via the output device, a list of users of the other computers identified by the communication.
19. Computing apparatus, comprising:
an output device; and
a processor, which is configured to run a network game program, and to receive installation of a monitoring program independently of the network game program, wherein the monitoring program causes the processor, while running the network game program, to detect use of assets using the monitoring program so as to learn a pattern of normal utilization of the assets, and to detect, during a session of the network game program, an anomalous utilization pattern of the assets, which is indicative of a threat of cheating in the network game program, and to output a notification of the threat via the output device to a user of the computing apparatus.
US12/103,522 2006-09-05 2008-04-15 Prevention of cheating in on-line interaction Abandoned US20080305869A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/103,522 US20080305869A1 (en) 2006-09-05 2008-04-15 Prevention of cheating in on-line interaction

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US84265306P 2006-09-05 2006-09-05
US85022307A 2007-09-05 2007-09-05
US12/103,522 US20080305869A1 (en) 2006-09-05 2008-04-15 Prevention of cheating in on-line interaction

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US85022307A Continuation-In-Part 2006-09-05 2007-09-05

Publications (1)

Publication Number Publication Date
US20080305869A1 true US20080305869A1 (en) 2008-12-11

Family

ID=40096380

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/103,522 Abandoned US20080305869A1 (en) 2006-09-05 2008-04-15 Prevention of cheating in on-line interaction

Country Status (1)

Country Link
US (1) US20080305869A1 (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6264557B1 (en) * 1996-12-31 2001-07-24 Walker Digital, Llc Method and apparatus for securing electronic games
US7162036B2 (en) * 2001-08-06 2007-01-09 Igt Digital identification of unique game characteristics
US6965886B2 (en) * 2001-11-01 2005-11-15 Actimize Ltd. System and method for analyzing and utilizing data, by executing complex analytical models in real time
US20030226007A1 (en) * 2002-05-30 2003-12-04 Microsoft Corporation Prevention of software tampering
US7169050B1 (en) * 2002-08-28 2007-01-30 Matthew George Tyler Online gaming cheating prevention system and method
US7287052B2 (en) * 2002-11-09 2007-10-23 Microsoft Corporation Challenge and response interaction between client and server computing devices
US20040242321A1 (en) * 2003-05-28 2004-12-02 Microsoft Corporation Cheater detection in a multi-player gaming environment
US7288027B2 (en) * 2003-05-28 2007-10-30 Microsoft Corporation Cheater detection in a multi-player gaming environment
US20050288103A1 (en) * 2004-06-23 2005-12-29 Takuji Konuma Online game irregularity detection method
US20060247038A1 (en) * 2005-04-06 2006-11-02 Valve Corporation Anti-cheat facility for use in a networked game environment
US20070129123A1 (en) * 2005-12-02 2007-06-07 Robert Eryou System and method for game creation
US20070149279A1 (en) * 2005-12-22 2007-06-28 Lucent Technologies Inc. Acorn: providing network-level security in P2P overlay architectures
US20070276521A1 (en) * 2006-03-20 2007-11-29 Harris Adam P Maintaining community integrity

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Christoph et al. "PunkBuster For Players." Published to the Internet on 03/11/2004. Retrieved online on 06/15/2011. *
Christoph et al. "PunkBuster For Server Administraors." Published to the Internet on 03/11/2004. Retrieved online on 06/15/2011. *
Christoph et al. "Updating PunkBuster with PBSetup." Published to the Internet on 02/07/2006. Retrieved online on 11/29/2012. *

Cited By (87)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11463578B1 (en) 2003-12-15 2022-10-04 Overstock.Com, Inc. Method, system and program product for communicating e-commerce content over-the-air to mobile devices
US10853891B2 (en) 2004-06-02 2020-12-01 Overstock.Com, Inc. System and methods for electronic commerce using personal and business networks
US8771061B2 (en) 2006-03-20 2014-07-08 Sony Computer Entertainment America Llc Invalidating network devices with illicit peripherals
US20070238528A1 (en) * 2006-03-20 2007-10-11 Harris Adam P Game metrics
US8972364B2 (en) 2006-03-20 2015-03-03 Sony Computer Entertainment America Llc Defining new rules for validation of network devices
US9526990B2 (en) 2006-03-20 2016-12-27 Sony Interactive Entertainment America Llc Managing game metrics and authorizations
US10124260B2 (en) 2006-03-20 2018-11-13 Sony Interactive Entertainment America Llc Invalidating network devices with illicit peripherals
US11077376B2 (en) 2006-03-20 2021-08-03 Sony Interactive Entertainment LLC Managing game metrics and authorizations
US9717992B2 (en) 2006-03-20 2017-08-01 Sony Interactive Entertainment America Llc Invalidating network devices with illicit peripherals
US20070218996A1 (en) * 2006-03-20 2007-09-20 Harris Adam P Passive validation of network devices
US10293262B2 (en) 2006-03-20 2019-05-21 Sony Interactive Entertainment America Llc Managing game metrics and authorizations
US8622837B2 (en) 2006-03-20 2014-01-07 Sony Computer Entertainment America Llc Managing game metrics and authorizations
US8626710B2 (en) 2006-03-20 2014-01-07 Sony Computer Entertainment America Llc Defining new rules for validation of network devices
US8715072B2 (en) 2006-03-20 2014-05-06 Sony Computer Entertainment America Llc Generating rules for maintaining community integrity
US20090111583A1 (en) * 2007-10-31 2009-04-30 Gary Zalewski Systems and method for improving application integrity
US10896451B1 (en) 2009-03-24 2021-01-19 Overstock.Com, Inc. Point-and-shoot product lister
WO2011041566A1 (en) * 2009-09-30 2011-04-07 Zynga Game Network Inc. Apparatuses, methods and systems for an engagement-tracking game modifier
US20120028713A1 (en) * 2009-09-30 2012-02-02 Justin Driemeyer Apparatuses, Methods and Systems for an Engagement-Tracking Game Modifier
US9486708B2 (en) * 2009-09-30 2016-11-08 Zynga Inc. Apparatuses, methods and systems for an engagement-tracking game modifier
US8388441B2 (en) * 2010-05-25 2013-03-05 Inca Internet Co., Ltd. Method for displaying information about use of hack tool in online game
US20110294572A1 (en) * 2010-05-25 2011-12-01 Inca Internet Co., Ltd. Method for displaying information about use of hack tool in online game
CN103429302A (en) * 2010-11-02 2013-12-04 美国索尼电脑娱乐有限责任公司 Detecting lag switch cheating in game
WO2012060900A1 (en) * 2010-11-02 2012-05-10 Sony Computer Entertainment America Llc Detecting lag switch cheating in game
US10092845B2 (en) 2010-11-02 2018-10-09 Sony Interactive Entertainment America Llc Detecting lag switch cheating in game
US9636589B2 (en) 2010-11-02 2017-05-02 Sony Interactive Entertainment America Llc Detecting lag switch cheating in game
US9928752B2 (en) * 2011-03-24 2018-03-27 Overstock.Com, Inc. Social choice engine
US8529343B2 (en) 2011-07-27 2013-09-10 Cyber Holdings, Inc. Method for monitoring computer programs
EP2800309A4 (en) * 2011-12-30 2015-09-02 Intellectual Discovery Co Ltd Method, server, and recording medium for providing lag occurrence abusing prevention service using relay server
US20140359120A1 (en) * 2011-12-30 2014-12-04 Intellectual Discovery Co., Ltd. Method, server, and recording medium for providing lag occurrence abusing prevention service using relay server
CN102724182A (en) * 2012-05-30 2012-10-10 北京像素软件科技股份有限公司 Recognition method of abnormal client side
US9744465B2 (en) * 2012-07-06 2017-08-29 Tencent Technology (Shenzhen) Company Limited Identify plug-in of EMU class internet game
US20150119148A1 (en) * 2012-07-06 2015-04-30 Tencent Technology (Shenzhen) Company Limited Identify plug-in of emu class internet game
US9174118B1 (en) * 2012-08-20 2015-11-03 Kabum, Inc. System and method for detecting game client modification through script injection
US10546262B2 (en) 2012-10-19 2020-01-28 Overstock.Com, Inc. Supply chain management system
US9958863B2 (en) 2012-10-31 2018-05-01 General Electric Company Method, system, and device for monitoring operations of a system asset
US10303162B2 (en) 2012-10-31 2019-05-28 General Electric Company Method, system, and device for monitoring operations of a system asset
US20150182863A1 (en) * 2012-11-21 2015-07-02 Cbs Interactive Inc. Automated statistics content preparation
WO2014079351A1 (en) * 2012-11-22 2014-05-30 Tencent Technology (Shenzhen) Company Limited Data processing device and method for interaction detection
US9626829B2 (en) 2012-11-22 2017-04-18 Tencent Technology (Shenzhen) Company Limited Data processing device and method for interaction detection
US20140215392A1 (en) * 2013-01-30 2014-07-31 International Business Machines Corporation Connections identification
US20140274304A1 (en) * 2013-03-13 2014-09-18 Ignite Game Technologies, Inc. Method and apparatus for evaluation of skill level progression and matching of participants in a multi-media interactive environment
US11023947B1 (en) 2013-03-15 2021-06-01 Overstock.Com, Inc. Generating product recommendations using a blend of collaborative and content-based data
US11676192B1 (en) 2013-03-15 2023-06-13 Overstock.Com, Inc. Localized sort of ranked product recommendations based on predicted user intent
US10810654B1 (en) 2013-05-06 2020-10-20 Overstock.Com, Inc. System and method of mapping product attributes between different schemas
US11631124B1 (en) 2013-05-06 2023-04-18 Overstock.Com, Inc. System and method of mapping product attributes between different schemas
US9901831B2 (en) 2013-05-14 2018-02-27 Take-Two Interactive Software, Inc. System and method for online community management
US9839838B1 (en) * 2013-05-14 2017-12-12 Take-Two Interactive Software, Inc. System and method for online community management
US10769219B1 (en) 2013-06-25 2020-09-08 Overstock.Com, Inc. System and method for graphically building weighted search queries
US11475484B1 (en) 2013-08-15 2022-10-18 Overstock.Com, Inc. System and method of personalizing online marketing campaigns
US11694228B1 (en) 2013-12-06 2023-07-04 Overstock.Com, Inc. System and method for optimizing online marketing based upon relative advertisement placement
US10872350B1 (en) 2013-12-06 2020-12-22 Overstock.Com, Inc. System and method for optimizing online marketing based upon relative advertisement placement
US9694286B2 (en) * 2014-02-27 2017-07-04 Mohammad Iman Khabazian Anomaly detection for rules-based system
US20150238866A1 (en) * 2014-02-27 2015-08-27 Mohammad Iman Khabazian Anomaly detection for rules-based system
US10832262B2 (en) * 2014-04-25 2020-11-10 Mohammad Iman Khabazian Modeling consumer activity
US20180240136A1 (en) * 2014-04-25 2018-08-23 Mohammad Iman Khabazian Modeling consumer activity
US9923911B2 (en) * 2015-10-08 2018-03-20 Cisco Technology, Inc. Anomaly detection supporting new application deployments
US11526653B1 (en) 2016-05-11 2022-12-13 Overstock.Com, Inc. System and method for optimizing electronic document layouts
US10970463B2 (en) 2016-05-11 2021-04-06 Overstock.Com, Inc. System and method for optimizing electronic document layouts
US11334602B2 (en) * 2016-07-20 2022-05-17 LogsHero Ltd. Methods and systems for alerting based on event classification and for automatic event classification
US10279266B2 (en) * 2017-06-19 2019-05-07 International Business Machines Corporation Monitoring game activity to detect a surrogate computer program
US10279267B2 (en) * 2017-06-19 2019-05-07 International Business Machines Corporation Monitoring game activity to detect a surrogate computer program
US10983602B2 (en) * 2017-09-05 2021-04-20 Microsoft Technology Licensing, Llc Identifying an input device
US20190073046A1 (en) * 2017-09-05 2019-03-07 Microsoft Technology Licensing, Llc Identifying an input device
US11117055B2 (en) 2017-12-06 2021-09-14 Activision Publishing, Inc. Systems and methods for validating leaderboard gaming data
US10463971B2 (en) * 2017-12-06 2019-11-05 Activision Publishing, Inc. System and method for validating video gaming data
US10537809B2 (en) 2017-12-06 2020-01-21 Activision Publishing, Inc. System and method for validating video gaming data
US10603593B2 (en) * 2018-03-21 2020-03-31 Valve Corporation Automatically reducing use of cheat software in an online game environment
US20190291008A1 (en) * 2018-03-21 2019-09-26 Valve Corporation Automatically reducing use of cheat software in an online game environment
US11213755B2 (en) 2018-03-21 2022-01-04 Valve Corporation Automatically reducing use of cheat software in an online game environment
US20200129864A1 (en) * 2018-10-31 2020-04-30 International Business Machines Corporation Detecting and identifying improper online game usage
US10896574B2 (en) 2018-12-31 2021-01-19 Playtika Ltd System and method for outlier detection in gaming
EP4242888A2 (en) 2019-01-19 2023-09-13 AnyBrain, S.A System and method for fraud prevention in esports
WO2020148448A1 (en) 2019-01-19 2020-07-23 Anybrain, S.A System and method for fraud prevention in esports
US20220072430A1 (en) * 2019-01-19 2022-03-10 Anybrain, S.A System and method for fraud prevention in esports
US11017631B2 (en) 2019-02-28 2021-05-25 At&T Intellectual Property I, L.P. Method to detect and counteract suspicious activity in an application environment
US11532207B2 (en) 2019-02-28 2022-12-20 At&T Intellectual Property I, L.P. Method to detect and counteract suspicious activity in an application environment
US11514493B1 (en) 2019-03-25 2022-11-29 Overstock.Com, Inc. System and method for conversational commerce online
US11205179B1 (en) 2019-04-26 2021-12-21 Overstock.Com, Inc. System, method, and program product for recognizing and rejecting fraudulent purchase attempts in e-commerce
US11928685B1 (en) 2019-04-26 2024-03-12 Overstock.Com, Inc. System, method, and program product for recognizing and rejecting fraudulent purchase attempts in e-commerce
US11734368B1 (en) 2019-09-26 2023-08-22 Overstock.Com, Inc. System and method for creating a consistent personalized web experience across multiple platforms and channels
WO2021207407A1 (en) * 2020-04-07 2021-10-14 Riot Games, Inc. Systems and methods for anti-cheat detection
US11806628B2 (en) 2020-04-07 2023-11-07 Riot Games, Inc. Systems and methods for anti-cheat detection
US11439911B2 (en) 2020-04-07 2022-09-13 Riot Games, Inc. Systems and methods for anti-cheat detection
US20230020765A1 (en) * 2020-10-09 2023-01-19 Sony Interactive Entertainment LLC Systems and methods for verifying activity associated with a play of a game
US11458404B2 (en) * 2020-10-09 2022-10-04 Sony Interactive Entertainment LLC Systems and methods for verifying activity associated with a play of a game
US11779847B2 (en) * 2020-10-09 2023-10-10 Sony Interactive Entertainment LLC Systems and methods for verifying activity associated with a play of a game
CN113438250A (en) * 2021-07-06 2021-09-24 上海渠杰信息科技有限公司 Abnormal event processing method and equipment

Similar Documents

Publication Publication Date Title
US20080305869A1 (en) Prevention of cheating in on-line interaction
US11089034B2 (en) Systems and methods for behavioral threat detection
US7721336B1 (en) Systems and methods for dynamic detection and prevention of electronic fraud
Su et al. Evil under the sun: Understanding and discovering attacks on ethereum decentralized applications
US8370389B1 (en) Techniques for authenticating users of massive multiplayer online role playing games using adaptive authentication
Benaicha et al. Intrusion detection system using genetic algorithm
CN103065088B (en) Based on the system and method for the ruling detection computations machine security threat of computer user
CN109271780A (en) Method, system and the computer-readable medium of machine learning malware detection model
Xu et al. Deep entity classification: Abusive account detection for online social networks
Ullah et al. A filter-based feature selection model for anomaly-based intrusion detection systems
Ting et al. On the trust and trust modeling for the future fully-connected digital world: A comprehensive study
US20230370491A1 (en) System and method for cyber exploitation path analysis and response using federated networks
Xie et al. You can promote, but you can't hide: large-scale abused app detection in mobile app stores
CN114553596B (en) Multi-dimensional security condition real-time display method and system suitable for network security
Om Kumar et al. Intrusion detection model for IoT using recurrent kernel convolutional neural network
US11153332B2 (en) Systems and methods for behavioral threat detection
Petersen Data mining for network intrusion detection: A comparison of data mining algorithms and an analysis of relevant features for detecting cyber-attacks
Srivastava et al. An effective computational technique for taxonomic position of security vulnerability in software development
Sabhnani et al. KDD Feature Set Complaint Heuristic Rules for R2L Attack Detection.
Kuang et al. DNIDS: A dependable network intrusion detection system using the CSI-KNN algorithm
Han et al. Cheating and detection method in massively multiplayer online role-playing game: systematic literature review
Zhang et al. A multi-criteria detection scheme of collusive fraud organization for reputation aggregation in social networks
KR20190043923A (en) Service server, method and computer for monitoring a data packet by a suspicious user
Manandhar A practical approach to anomaly-based intrusion detection system by outlier mining in network traffic
Yampolskiy Indirect human computer interaction-based biometrics for intrusion detection systems

Legal Events

Date Code Title Description
AS Assignment

Owner name: COGNISAFE LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KONFORTY, SHMUEL;SHIMON, YITZHAK;REEL/FRAME:021374/0533

Effective date: 20080512

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION