CN112206541B - Game plug-in identification method and device, storage medium and computer equipment


Info

Publication number
CN112206541B
Authority
CN
China
Prior art keywords
image
target
game
training sample
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011166918.5A
Other languages
Chinese (zh)
Other versions
CN112206541A (en)
Inventor
陈汉群
梁兆豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202011166918.5A
Publication of CN112206541A
Application granted
Publication of CN112206541B
Legal status: Active


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/70 Game security or game management aspects
    • A63F13/75 Enforcing rules, e.g. detecting foul play or generating lists of cheating players
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/55 Details of game data or player data management
    • A63F2300/5586 Details of game data or player data management for enforcing rights or rules, e.g. to prevent foul play
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8076 Shooting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses a game plug-in identification method and device, a storage medium and computer equipment. The method comprises the following steps: acquiring a game picture image containing a target object; labeling the target object in the game picture image; generating a training sample set from the labeled game picture images; performing feature learning training on a target network model based on the training sample set to obtain a trained target network model; and identifying, based on the trained target network model, whether an image to be detected belongs to a plug-in image. By learning and training on the features of the target object, the trained target network model has receptive fields of multiple granularities, which improves the recognition capability of the model; plug-in images can thus be effectively recognized with the trained target network model, improving the accuracy of plug-in image recognition.

Description

Game plug-in identification method and device, storage medium and computer equipment
Technical Field
The application relates to the technical field of computers, in particular to the technical field of games, and specifically to a game plug-in identification method and device, a storage medium and computer equipment.
Background
A game plug-in is a cheating program that gains benefits by spoofing or modifying the game. The existence of game plug-ins seriously damages the balance of the game ecosystem and reduces the fairness and playability of the game. How to effectively identify plug-in behavior has become an important research topic in the industry.
Disclosure of Invention
The embodiment of the application provides a game plug-in identification method and device, a storage medium and computer equipment. By learning and training on the features of a target object, the trained target network model has receptive fields of multiple granularities, which improves the recognition capability of the model; plug-in images can thus be effectively recognized with the trained target network model, improving the accuracy of plug-in image recognition.
The embodiment of the application provides a game plug-in identification method, which comprises the following steps:
Acquiring a game picture image containing a target object;
Labeling the target object in the game picture image;
Generating a training sample set according to the labeled game picture image;
performing feature learning training on the target network model based on the training sample set to obtain a trained target network model;
and identifying whether the image to be detected belongs to the plug-in image or not based on the trained target network model.
The embodiment of the application also provides a game plug-in identification device, which comprises:
the acquisition module is used for acquiring game picture images containing target objects;
The preprocessing module is used for labeling the target object in the game picture image;
The generation module is used for generating a training sample set according to the labeled game picture image;
the training module is used for carrying out feature learning training on the target network model based on the training sample set so as to obtain a trained target network model;
And the identification module is used for identifying whether the image to be detected belongs to the plug-in image or not based on the trained target network model.
Embodiments of the present application also provide a computer readable storage medium storing a computer program adapted to be loaded by a processor to perform the steps in the method for identifying a game plug-in according to any of the embodiments above.
The embodiment of the application also provides a computer device, which comprises a processor and a memory, wherein the memory stores a computer program, and the processor is used for executing the steps in the game plug-in identification method according to any embodiment by calling the computer program stored in the memory.
According to the embodiment of the application, a game picture image containing a target object is obtained, the target object in the game picture image is labeled, a training sample set is generated from the labeled game picture images, feature learning training is performed on the target network model based on the training sample set to obtain a trained target network model, and finally whether an image to be detected belongs to a plug-in image is identified based on the trained target network model. By learning and training on the features of the target object, the trained target network model has receptive fields of multiple granularities, which improves the recognition capability of the model; plug-in images can thus be effectively recognized with the trained target network model, improving the accuracy of plug-in image recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an application scenario schematic diagram of a game plug-in identification method provided in an application embodiment.
Fig. 2 is a flowchart of a method for identifying a plug-in game according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a target network model according to an embodiment of the present application.
Fig. 4 is another flow chart of a method for identifying a plug-in game according to an embodiment of the present application.
FIG. 5 is a flow chart of a method for identifying a plug-in game according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a game plug-in identification device according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
The embodiment of the application provides a game plug-in identification method and device, computer equipment and a storage medium. Specifically, the game plug-in identification method of the embodiment of the application can be executed by computer equipment, where the computer equipment may be a terminal, a server or similar device. The terminal may be a device such as a smart phone, a tablet computer, a notebook computer, a touch screen, a game console, a personal computer (Personal Computer, PC) or a personal digital assistant (Personal Digital Assistant, PDA), and the terminal may further include a client, which may be a game application client, a browser client carrying a game program, an instant messaging client, or the like. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content distribution services, big data and artificial intelligence platforms.
Machine learning (Machine Learning, ML) is a multi-domain interdiscipline involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and the like. It specially studies how a computer simulates or implements human learning behavior to acquire new knowledge or skills, and how it reorganizes existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied throughout all areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstrations.
Referring to fig. 1, fig. 1 is a schematic diagram of an application scenario of the game plug-in identification method according to an embodiment of the present application. The method is described taking execution by a computer device as an example, where the computer device may be a terminal or a server. When executed by the computer device, the game plug-in identification method comprises a training stage and a recognition stage of the target network model. In the training stage, a game picture image containing a target object is obtained, the target object in the game picture image is labeled, a training sample set is generated from the labeled game picture images, and feature learning training is performed on the target network model based on the training sample set to obtain a trained target network model. In the recognition stage, an image to be detected is obtained, the image is processed by the trained target network model, the model outputs a recognition result, and whether the image to be detected belongs to a plug-in image is then judged according to the output recognition result. By learning and training on the features of the target object, the trained target network model has receptive fields of multiple granularities, which improves the recognition capability of the model; plug-in images can thus be effectively recognized with the trained target network model, improving the accuracy of plug-in image recognition.
It should be noted that, the training process of the target network model may be completed at the terminal or at the server. When the training process and the recognition process of the target network model are completed in the server and the trained target network model is required to be used, the image to be detected can be input into the server, after the actual recognition of the server is completed, the recognition result is sent to the terminal, and the terminal judges whether the image to be detected belongs to the plug-in image or not according to the recognition result.
When the training process and the recognition process of the target network model are completed at the terminal and the trained target network model is required to be used, the image to be detected can be input into the terminal, and after the terminal is actually recognized, the terminal judges whether the image to be detected belongs to the plug-in image or not according to the recognition result.
When the training process of the target network model is completed in the server and the identification process of the target network model is completed in the terminal, the image to be detected can be input into the terminal when the trained target network model is needed, and after the terminal is actually identified, the terminal judges whether the image to be detected belongs to the plug-in image or not according to the identification result. Optionally, the trained target network model file (model file) may be transplanted to the terminal, and if the input image to be detected needs to be identified, the image to be detected is input to the trained target network model file (model file), and the identification result may be obtained through calculation.
The following detailed description will be given respectively, and the following description sequence of the embodiments does not limit the specific implementation sequence.
The embodiment of the application provides a game plug-in identification method, which can be executed by a terminal or a server or can be executed by the terminal and the server together; the embodiment of the application is illustrated by taking the game plug-in identification method as an example executed by the terminal.
A method of identifying a game plug-in, comprising: acquiring a game picture image containing a target object; labeling a target object in the game picture image; generating a training sample set according to the marked game picture image; performing feature learning training on the target network model based on the training sample set to obtain a trained target network model; and identifying whether the image to be detected belongs to the plug-in image or not based on the trained target network model.
Referring to fig. 2 to 5, fig. 2, fig. 4 and fig. 5 are schematic flow diagrams of a method for identifying a plug-in game according to an embodiment of the present application, and fig. 3 is a schematic structure diagram of a target network model according to an embodiment of the present application. The specific flow of the game plug-in identification method can be as follows:
Step 101, a game screen image containing a target object is acquired.
For example, a certain amount of image data needs to be acquired before training, where the acquired image data is game picture images each including a target object. The target object is a plug-in object in the game picture image, such as a box, a ray, an auxiliary box, a distance identifier or a plug-in icon in the game picture image. The game picture images containing target objects can be obtained from a preset database storing plug-in images, whose image data is derived from images with plug-in manifestations collected over a historical time period.
And 102, labeling the target object in the game picture image.
Optionally, labeling the target object in the game picture image includes: labeling the real position of the target object, and labeling the plug-in expression category of the target object.
For example, the plug-in information displayed by the target object in the game picture image may include the real position of the target object in the game picture image, the plug-in expression category of the target object, the outline of the target object, and the like, and this plug-in information may be determined as the feature points of interest to be labeled. The plug-in expression categories may include a box, a ray, an auxiliary box, a function option box, a distance, a plug-in icon, a plug-in scene, and the like. For example, take the box-type perspective plug-in of a first-person shooter (First-Person Shooter, FPS) game as an example: the game logic records the battle-situation coordinate information of all players, and once the enemy's coordinates are read by a specified method, the enemy can be marked on the screen with a box, so that the enemy's location is known in advance for pre-judgment; this can further be made into an automatic aiming aid. Likewise, distance is one of the manifestations of a perspective plug-in; displaying "123m" indicates that other virtual characters (other players) or items are present 123 m away.
Before training, the feature points of interest need to be labeled in each game picture image of the acquired image data, i.e., the real position of the target object is labeled and the plug-in expression category of the target object is labeled, so that each labeled game picture image becomes a training sample. A labeled game picture image carries real-position labeling information of the target object and plug-in expression category labeling information of the target object. For example, the real-position labeling information and the plug-in expression category labeling information can be displayed on the game picture image, and the relations between the target object and this labeling information, as well as the specific information content, can be stored in a data table.
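As an illustration, a labeled sample might be stored as a record like the following minimal sketch; the field names and format are hypothetical, not prescribed by the application:

```python
# A hypothetical annotation record for one labeled game picture image.
# Field names are illustrative only; the application does not fix a format.
annotation = {
    "image": "frame_000123.png",
    "objects": [
        {
            "category": "box",            # plug-in expression category
            "bbox": [412, 178, 96, 210],  # real position: x, y, width, height
        },
        {
            "category": "distance",
            "bbox": [455, 150, 40, 18],   # e.g. a rendered "123m" distance label
        },
    ],
}
```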
And 103, generating a training sample set according to the marked game picture image.
For example, the feature points of interest are labeled in each game picture image of the acquired image data, i.e., the real position of the target object is labeled and the plug-in expression category of the target object is labeled; all labeled game picture images then form the training sample set.
Optionally, before generating the training sample set according to the labeled game picture image, the method further includes:
performing data enhancement preprocessing on the labeled game picture image based on a data enhancement strategy to generate an extended image;
in this case, generating the training sample set according to the labeled game picture image includes:
generating the training sample set according to the labeled game picture image and the extended image.
Optionally, the data enhancement policy includes at least one of:
the method comprises the steps of carrying out random size adjustment on an image, carrying out random position clipping on the image, carrying out random color matching on the image, carrying out Gaussian filtering smoothing on the image and carrying out filling of a specified size on the image.
For example, since the number of game picture images available as training samples is limited, in order to expand the number of training samples and enhance data expression, data enhancement preprocessing may be performed on the labeled game picture images based on a data enhancement strategy before training to generate extended images, and the training sample set may then be generated from the labeled game picture images and the extended images.
For example, the random size adjustment of the image is to adjust the random size of the marked game picture image, so that images with different sizes can be obtained.
For example, the random position clipping of the image is to clip the random position of the marked game picture image, so as to obtain images of the target object displayed at different positions in the game picture image.
For example, the random color matching of the image is to randomly adjust HSV values of the marked game picture image, so that images with different HSV values can be obtained, wherein H represents hue, S represents saturation, and V represents brightness.
For example, the Gaussian filtering smoothing of the image is to perform Gaussian filtering smoothing on the marked game picture image, so that images with different blurring degrees can be obtained.
For example, filling the image to a specified size pads the labeled game picture image, so that images meeting different size requirements can be obtained.
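A minimal sketch of such a data enhancement pipeline, assuming PyTorch/torchvision (the application does not mandate any particular library), with one transform per strategy listed above; the sizes and jitter strengths are assumed values:

```python
import random
import torchvision.transforms as T
import torchvision.transforms.functional as TF

class RandomResize:
    """Random size adjustment: resize to a randomly chosen size each call."""
    def __init__(self, sizes=(480, 576, 640, 736)):  # assumed candidate sizes
        self.sizes = sizes
    def __call__(self, img):
        return TF.resize(img, random.choice(self.sizes))

augment = T.Compose([
    RandomResize(),                                           # random size adjustment
    T.RandomCrop(448, pad_if_needed=True),                    # random position cropping
    T.ColorJitter(brightness=0.3, saturation=0.3, hue=0.1),   # random color (HSV) matching
    T.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),          # Gaussian filtering smoothing
    T.CenterCrop(640),                                        # pads, then crops to the specified size
])
# Note: for detection data, the bounding-box annotations must be transformed
# together with the image; this sketch shows the image side only.
```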
And 104, performing feature learning training on the target network model based on the training sample set to obtain a trained target network model.
Alternatively, as shown in fig. 4, step 104 may be implemented by steps 1041 to 1045, which specifically includes:
Step 1041, selecting any one of the labeled game picture images in the training sample set as a training sample.
Specifically, in the training process, the marked game picture images are sequentially selected from the training sample set to serve as training samples to be input into the target network model for feature extraction.
For example, a training sample may include only one target object; e.g., the training sample is a sample image containing a box plug-in, and the real position of the box plug-in within the sample image may differ between training samples. A training sample may also include a plurality of target objects, whose plug-in expression categories may be the same or different. When the plug-in expression categories of the plurality of target objects differ, e.g., the training sample is a sample image simultaneously containing a box plug-in, a ray plug-in and a distance plug-in, the real positions of the target objects of the different plug-in categories may differ between training samples. In each case, the training sample is a positive sample exhibiting a plug-in representation.
Step 1042, inputting the training sample into the target network model for feature extraction to obtain a target feature map of the training sample.
Optionally, inputting the training sample into the target network model for feature extraction to obtain a target feature map of the training sample, including:
Inputting a training sample into a target network model for feature extraction so as to extract a plurality of scale features of different receptive fields in the training sample;
And performing up-sampling treatment on a plurality of scale features of different receptive fields in the training sample, and then performing feature fusion to obtain a target feature map of the training sample.
The training sample is input into the target network model for feature extraction, so as to extract multiple scale features of different receptive fields in the training sample. For example, as shown in fig. 3, taking a convolutional neural network (Convolutional Neural Network, CNN) as the target network model, the target network model may include a plurality of convolutional layers (CONV) and pooling layers (POOL). Specifically, a training sample is input into the CNN to obtain an original feature map (Feature Map); the feature map is then processed by a pooling layer, and feature extraction is performed with a plurality of convolution kernels of different sizes, so that scale features under different receptive fields can be obtained. In a convolutional neural network, the receptive field (Receptive Field) is the size of the region on the input image to which a pixel of the feature map output by each layer maps.
The multiple scale features of different receptive fields in the training sample are up-sampled and then feature-fused to obtain the target feature map of the training sample. As shown in fig. 3, after the scale features under different receptive fields are obtained, the scale features output by multiple stages in the network are up-sampled (UpSampling), so that their sizes are adjusted to be consistent with the size of the original feature map; feature fusion is then performed on the up-sampled scale features to obtain the feature-fused target feature map of the training sample.
The purpose of up-sampling is to enlarge the feature maps corresponding to the different scales, so that the feature maps extracted under different receptive fields can be overlaid on the original feature map.
In the process of constructing the target network model, the fusion of the scale features can enable the model to have receptive fields with multiple granularities, so that the model has better identification capability on large and small target objects.
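The following is a minimal PyTorch sketch of this extract-upsample-fuse pattern; the channel counts and kernel sizes are assumptions, since the text and fig. 3 do not fix them:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Extract scale features with different receptive fields, upsample them
    to the original feature-map size, and fuse them by concatenation."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        # convolution kernels of different sizes give different receptive fields
        self.branch3 = nn.Conv2d(in_ch, 128, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, 128, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(in_ch, 128, kernel_size=7, padding=3)

    def forward(self, feat):                      # feat: original feature map
        pooled = self.pool(feat)                  # POOL layer
        scales = [b(pooled) for b in (self.branch3, self.branch5, self.branch7)]
        # upsample each scale feature back to the original feature-map size
        scales = [F.interpolate(s, size=feat.shape[-2:], mode="nearest")
                  for s in scales]
        # feature fusion: overlay the upsampled scale features on the original
        return torch.cat([feat] + scales, dim=1)  # target feature map
```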
Step 1043, based on the target feature map of the training sample, obtaining a position offset corresponding to the target position of the target object in the training sample, a support rate of the target position, and a class probability of the plug-in expression class to which the target object belongs.
As shown in fig. 3, after the target feature map is obtained, the target feature map is further subjected to convolution processing, and finally the network outputs 4 position offsets (x, y, w, h), support rates and class probabilities corresponding to each target object in the training sample. Wherein the sum of the class probabilities of all the target objects in each training sample is 1. For example, each class probability represents a probability value of the externally hung representation class to which the corresponding target object detected by the model belongs. For example, the probability that the model detects that the external expression class is a square frame is 0.6, the probability that the external expression class is a ray is 0.3, and the probability that the external expression class is a function option frame is 0.1.
For example, the number of convolution output channels of the target network model may be set to 4+1+num_classes, and the 4 position offsets, the support rate and the class probabilities for each target object are obtained by convolving the target feature map. The position offsets are used to calculate the target position of the target object; the support rate represents the degree of support for the detected target position of the target object and is used to judge whether the detected target position is accurate; the num_classes channels represent the class probabilities, and num_classes is consistent with the total number of plug-in expression categories.
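A hedged sketch of such an output head follows, reusing the fused feature map of the previous sketch; the 640 input channels and 6 categories are assumptions:

```python
import torch
import torch.nn as nn

num_classes = 6  # assumption: box, ray, auxiliary box, function option box, distance, icon

# per spatial cell: 4 position offsets + 1 support rate + num_classes class probs
head = nn.Conv2d(in_channels=640, out_channels=4 + 1 + num_classes, kernel_size=1)

out = head(torch.randn(1, 640, 13, 13))       # fused target feature map
offsets = out[:, 0:4]                         # (o_x, o_y, o_w, o_h)
support = torch.sigmoid(out[:, 4:5])          # support rate of the target position
class_p = torch.softmax(out[:, 5:], dim=1)    # per-cell class probabilities summing to 1
```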
Step 1044, generating a recognition result of the training sample based on the position offset corresponding to the target position of the target object in the training sample, the support rate of the target position, and the class probability of the plug-in expression class to which the target object belongs.
For example, the recognition result of training sample a may be: the target position of the target object is a, and the plug-in expression category of the target object is a square frame.
Step 1045, optimizing the target network model based on the difference between the recognition result of the training sample and the real result of the training sample, so as to obtain the trained target network model.
Optionally, optimizing the target network model based on a difference between the recognition result of the training sample and the real result of the training sample to obtain a trained target network model, including:
calculating the target position of the target object based on the position offset corresponding to the target position of the target object in the training sample;
and optimizing the target network model according to the mean square error between the target position of the target object and the real position of the target object.
Optionally, calculating the target position of the target object based on the position offset corresponding to the target position of the target object in the training sample includes:
calculating the x value of the central coordinate point of the target object in the training sample based on the position offset corresponding to the x value of the central coordinate point of the target position of the target object;
calculating the y value of the central coordinate point of the target object in the training sample based on the position offset corresponding to the y value of the central coordinate point of the target position of the target object;
calculating the width of the target object in the training sample based on the position offset corresponding to the width of the target position of the target object;
Calculating the height of the target object in the training sample based on the position offset corresponding to the height of the target position of the target object;
and determining the target position of the target object according to the x value of the central coordinate point of the target object in the training sample, the y value of the central coordinate point, the width and the height.
For example, based on prior learning of the position offsets (offsets) of the target position of the target object, the feature learning task for the target object is optimized, so that the loss of the loss function of the target network model converges faster and to a better value. The conversion formulas between the target position and the position offsets are:

$target_x = \sigma(o_x) + cell_x$

$target_y = \sigma(o_y) + cell_y$

$target_w = predict_w \cdot e^{o_w}$

$target_h = predict_h \cdot e^{o_h}$

wherein $target_x$ represents the x value of the center coordinate point of the target object in the original game picture image; $target_y$ represents the y value of the center coordinate point of the target object in the original game picture image; $target_w$ represents the width of the target object in the original game picture image; $target_h$ represents the height of the target object in the original game picture image; $o_x$ represents the position offset, output by the inference of the target network model, corresponding to the x value of the center coordinate point; $o_y$ represents the position offset corresponding to the y value of the center coordinate point; $o_w$ represents the position offset corresponding to the width; $o_h$ represents the position offset corresponding to the height; $predict_w$ and $predict_h$ represent the width and height of the prior box set at training; $cell_x$ and $cell_y$ represent the top-left corner coordinates of the grid cell; $e$ is the natural base; and $\sigma$ is the Sigmoid function used to map the position offsets output by the target network model to the target position of the target object in the original image.
By optimizing the offset values during training, the loss function of the model is optimized; iterative training reduces the total loss and improves the recognition result, and the target position of the target object is then obtained through the conversion formulas above.
In the training stage, the output position offsets are converted and mapped to a target position; the mean square error between the target position and the real position is then calculated as the loss of the prediction function, and training samples are input in batches for iterative training of the target network model, so as to reduce the loss and bring the network's recognition result closest to the real result, thereby obtaining an optimal target network model.
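The conversion is simple enough to transcribe directly; the following is a minimal Python sketch of the decoding step, implementing exactly the formulas above (function and variable names are illustrative):

```python
import math

def sigmoid(t: float) -> float:
    return 1.0 / (1.0 + math.exp(-t))

def decode_position(o_x, o_y, o_w, o_h, cell_x, cell_y, predict_w, predict_h):
    """Map network position offsets to the target position in the original image."""
    target_x = sigmoid(o_x) + cell_x        # center x within the grid
    target_y = sigmoid(o_y) + cell_y        # center y within the grid
    target_w = predict_w * math.exp(o_w)    # width scaled from the prior box
    target_h = predict_h * math.exp(o_h)    # height scaled from the prior box
    return target_x, target_y, target_w, target_h
```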
Optionally, optimizing the target network model based on a difference between the recognition result of the training sample and the real result of the training sample to obtain a trained target network model, further comprising:
comparing the class probability of the plug-in expression category to which the target object is predicted to belong with the plug-in expression category to which the target object truly belongs;
and optimizing the target network model according to the comparison result.
For example, in training sample A the model detects the plug-in expression category "box" with probability 0.6 and the category "function option box" with probability 0.4, while the category to which the target object truly belongs is "box". The model has then not converged: the weight of each plug-in expression category in the model needs to be adjusted according to the comparison result, and training samples are input again to iteratively train the target network model with the modified weight values so as to optimize it.
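A hedged sketch of combining the two training signals described above, position mean square error plus category comparison, assuming a PyTorch-style setup (names and the unweighted sum are illustrative):

```python
import torch.nn.functional as F

def detection_loss(pred_pos, true_pos, class_logits, true_class):
    """pred_pos: decoded target positions; true_class: annotated category indices."""
    # mean square error between the decoded target position and the real position
    loss_pos = F.mse_loss(pred_pos, true_pos)
    # compare predicted category probabilities against the true category
    loss_cls = F.cross_entropy(class_logits, true_class)
    return loss_pos + loss_cls  # total loss reduced through iterative training
```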
Step 105, based on the trained target network model, identifying whether the image to be detected belongs to the plug-in image.
Alternatively, as shown in fig. 5, step 105 may be implemented through steps 1051 to 1056, and specifically includes:
in step 1051, an image to be measured is acquired.
Optionally, acquiring the image to be measured includes:
acquiring an image to be detected when a triggering condition is met, wherein the triggering condition comprises at least one of the following:
When the grade upgrading speed of the first game player reaches the preset upgrading speed within the preset time, obtaining a game picture image generated by a game client side when the first game player operates a game;
when the second game player is reported, obtaining a game picture image generated by a game client side when the second game player runs a game;
When the starting path of the game client side started by the third game player is detected to be started by a third party program, obtaining a game picture image generated by the game client side when the third game player runs a game;
And when detecting that the target code segment of the fourth game player running the game client is different from the preset standard code segment, acquiring a game picture image generated by the game client when the fourth game player running the game.
For example, in order to monitor plug-in behavior, a game manufacturer may actively acquire the image to be detected, or the image may be acquired automatically when a trigger condition is satisfied, for example by analyzing the game log and triggering the acquisition when the information in the log satisfies the trigger condition. For instance, if progressing from level 5 to level 10 in game A normally takes a week but a player advanced from level 5 to level 10 within one day, the game picture image generated by the game client while that player runs game A is acquired as the image to be detected.
For example, when the second game player is reported by other game players, a game picture image generated by the game client side when the second game player runs the game is acquired as an image to be measured.
For example, when it is detected that the game client is started by the third party program, the game screen image generated by the game client when the game player plays the game is acquired. For example, when it is detected that the starting path of the game client started by the game player is started by the third party program and the game level upgrading speed meets the preset upgrading speed, a game picture image generated by the game client when the game player runs the game is obtained as the image to be detected.
When it is detected that the target code segment of the game client run by a game player differs from the preset standard code segment, a game picture image generated by that game client is acquired as the image to be detected. This trigger condition is directed at plug-ins that tamper with the game by modifying game information.
The above listed trigger conditions are not limiting on the embodiments of the present application.
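A minimal sketch of such a trigger check is given below; every attribute name, path and threshold is a hypothetical stand-in, since the application does not fix them:

```python
# Illustrative trigger-condition check; all names and thresholds are hypothetical.
PRESET_UPGRADE_SPEED = 5                    # levels per day (assumed threshold)
OFFICIAL_LAUNCHER_PATH = "/games/launcher"  # assumed official start path
STANDARD_CODE_HASH = "a1b2c3"               # assumed hash of the standard code segment

def should_capture_frame(player) -> bool:
    """Return True if a game picture image should be captured for this player."""
    # level upgrading speed reaches the preset speed within the preset time
    if player.levels_gained(hours=24) >= PRESET_UPGRADE_SPEED:
        return True
    # the player has been reported by other players
    if player.report_count > 0:
        return True
    # the game client was started by a third-party program
    if player.client_launch_path != OFFICIAL_LAUNCHER_PATH:
        return True
    # the client's code segment differs from the preset standard code segment
    if player.client_code_hash != STANDARD_CODE_HASH:
        return True
    return False
```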
Step 1052, inputting the image to be tested into the trained target network model for feature extraction, so as to extract a plurality of scale features of different receptive fields in the image to be tested.
For example, as shown in fig. 3, the image to be detected is input into the CNN to obtain an original feature map; the original feature map is then processed by a pooling layer, and feature extraction is performed with a plurality of convolution kernels of different sizes, so that scale features under different receptive fields can be obtained.
Step 1053, up-sampling the plurality of scale features of different receptive fields in the image to be detected, and then performing feature fusion to obtain a target feature map of the image to be detected.
As shown in fig. 3, after the scale features under different receptive fields are obtained, the scale features output by multiple stages in the network are up-sampled, so that their sizes are adjusted to be consistent with the size of the original feature map; feature fusion is then performed on the up-sampled scale features to obtain the feature-fused target feature map of the image to be detected.
The fusion of the scale features gives the target network model receptive fields of multiple granularities, so that the trained target network model has good recognition capability on both large and small target objects.
Step 1054, based on the target feature map of the image to be detected, obtaining the position offset corresponding to the target position of the target object in the image to be detected, the support rate of the target position and the class probability of the plug-in expression class to which the target object belongs.
As shown in fig. 3, after the target feature map is obtained, the target feature map is further subjected to convolution processing, and finally the network outputs 4 position offsets, support rates and class probabilities corresponding to the target object in the image to be detected.
Step 1055, generating a recognition result of the image to be detected based on the position offset corresponding to the target position of the target object in the image to be detected, the support rate of the target position and the class probability of the external expression class to which the target object belongs.
For example, the target position of the target object in the image to be detected is determined based on the detected position offsets, i.e., the specific position of the plug-in representation in the image to be detected is found. The conversion formulas between the target position and the position offsets are:

$target_x = \sigma(o_x) + cell_x$

$target_y = \sigma(o_y) + cell_y$

$target_w = predict_w \cdot e^{o_w}$

$target_h = predict_h \cdot e^{o_h}$

wherein $target_x$ and $target_y$ represent the x and y values of the center coordinate point of the target object in the image to be detected; $target_w$ and $target_h$ represent the width and height of the target object in the image to be detected; $o_x$, $o_y$, $o_w$ and $o_h$ represent the position offsets, output by the inference of the target network model, corresponding to the x value, the y value, the width and the height respectively; $predict_w$ and $predict_h$ represent the width and height of the prior box set at training; $cell_x$ and $cell_y$ represent the top-left corner coordinates of the grid cell; $e$ is the natural base; and $\sigma$ is the Sigmoid function used to map the position offsets output by the target network model to the target position of the target object in the image to be detected.
In addition, based on the class probability of the plug-in expression class to which the target object belongs, whether the target object belongs to the plug-in and which plug-in expression class specifically belongs to are judged.
Step 1056, based on the identification result of the image to be detected, it is determined whether the image to be detected belongs to the plug-in image.
When it is detected that the image to be detected contains a plurality of target plug-in representations, i.e., a plurality of target objects, various representation coefficients can be balanced according to the actual application, and a dual-strategy output with high accuracy and high recall rate can be calculated using a policy function.
In the policy function, $w_i$ represents the weight of each plug-in expression category; $i$ indexes the plug-in expression categories; $Z$ represents the space of all plug-in expression categories and their corresponding weights; $p_i$ represents the class probability of detecting the i-th plug-in expression category; $p_i^{w_i}$ represents the weighted class probability of the detected i-th plug-in expression category; and $P(X \mid P)$ represents the total support rate that the image to be detected is a plug-in image, where $X$ is the image to be detected and $P$ is the inference output of the target network model. The concrete strategy functions are given below.
Applying the high-accuracy function and the high-recall function together improves accuracy for automatic punishment and improves recall for more comprehensive auditing and crackdown on plug-ins.
The specific strategy function corresponding to the high-accuracy function can be expressed by the following formulas:

(1) $P(x) = \lambda + \mathrm{sigmoid}\left(\sum_j w_j \cdot p_j\right) \cdot (1 - \lambda)$

(2) $L(x) = I(P(x) > Threshold_1)$

wherein $P(x)$ represents the probability of judging the image as a plug-in image; $L(x)$ represents the judgment of whether the image is a plug-in image; $\lambda$ is a base value, $\lambda \in (0, 1)$; $j$ in $w_j \cdot p_j$ ranges over the target objects for which $p_i \geq \lambda$ holds, i.e., target objects whose probability is lower than $\lambda$ are rejected; $I(P(x) > Threshold_1)$ can be abbreviated as $I(t)$, an indicator function with $I(t) = 1$ when $t$ is True and $I(t) = 0$ when $t$ is False; and $Threshold_1$ is an adjustable decision threshold, e.g., $P(x) > 0.5$ indicates a plug-in.
The specific strategy function corresponding to the high-recall function can be expressed by the following formulas:

(3) $P(x) = \max_i (w_i \cdot p_i)$

(4) $L(x) = I(P(x) > Threshold_2)$

wherein $P(x)$ represents the probability of judging the image as a plug-in image; $L(x)$ represents the judgment of whether the image is a plug-in image; $w_i$ represents the weight of each plug-in expression category; $p_i$ is the class probability of the i-th plug-in expression category output by the model; $w_i \cdot p_i$ represents the weighted class probability of the detected i-th plug-in expression category; $I(P(x) > Threshold_2)$ can be abbreviated as $I(t)$, an indicator function with $I(t) = 1$ when $t$ is True and $I(t) = 0$ when $t$ is False; and $Threshold_2$ is an adjustable decision threshold, e.g., $P(x) > 0.5$ indicates a plug-in.
The high-recall function finds the maximum value of $w_i \cdot p_i$ and uses it to judge whether the image to be detected belongs to a plug-in image.
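Formulas (1) to (4) can be transcribed directly; the following Python sketch implements both decision strategies (the default base value and thresholds are assumptions, with 0.5 taken from the example above):

```python
import math

def sigmoid(t: float) -> float:
    return 1.0 / (1.0 + math.exp(-t))

def high_accuracy_decision(w, p, lam=0.5, threshold1=0.5):
    """Formulas (1)-(2): keep only categories with p_j >= lam, then squash."""
    s = sum(wj * pj for wj, pj in zip(w, p) if pj >= lam)
    P_x = lam + sigmoid(s) * (1.0 - lam)
    return P_x > threshold1          # indicator I(P(x) > Threshold_1)

def high_recall_decision(w, p, threshold2=0.5):
    """Formulas (3)-(4): maximum weighted class probability."""
    P_x = max(wi * pi for wi, pi in zip(w, p))
    return P_x > threshold2          # indicator I(P(x) > Threshold_2)
```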
Optionally, if the image to be measured is identified to belong to the plug-in image, a penalty instruction for penalizing the game account corresponding to the image to be measured is generated. The penalty instruction is used for indicating the game server to penalty the game account corresponding to the image to be tested.
For example, in an FPS game, for an image to be detected whose recognition result reaches the preset probability threshold, e.g. the image contains a plug-in box and a ray and the target network model recognizes them with a very high support rate, the player corresponding to the image is automatically penalized, for example by banning the account.
For example, when the probability of the identification result is lower than a preset probability threshold or the given support rate is not high, the image to be detected can be marked as a suspected plug-in image, then the image to be detected marked as the suspected plug-in image is pushed to a manual review window, so that whether the suspected plug-in image is plug-in is further identified through manual review, and if the suspected plug-in image is manually reviewed as the plug-in image, punishment is performed.
During use of the target network model, images identified as having plug-in representations, together with images acquired from other channels that exhibit new plug-in representations, are used as extended training samples and input into the model for training, so as to optimize the model's loss function and iterate the model continuously. For example, the target network model is deployed on a model-serving (service) framework; the deployment service monitors the latest model state on the server at regular intervals, and once a model version higher than the current one is detected, it is automatically pulled and deployed as a replacement, improving deployment efficiency.
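A minimal sketch of such a timed watcher follows; the serving framework's API (`server` and its methods) is a hypothetical stand-in, since the text does not name one:

```python
import time

def watch_and_redeploy(server, current_version: str, interval_s: int = 300):
    """Poll the model server periodically and hot-swap newer model versions."""
    while True:
        latest = server.latest_model_version()   # query the latest model state
        if latest > current_version:
            path = server.pull_model(latest)     # pull the newer model file
            server.deploy(path)                  # replace the deployed model
            current_version = latest
        time.sleep(interval_s)                   # timed monitoring interval
```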
According to the embodiment of the application, differentiated feature extraction is performed on the feature points of interest of the plug-in representation, feature learning is performed on them, i.e., on the position of the target object and the plug-in expression category, and classification recognition is performed by merging all feature points of interest, thereby achieving plug-in detection. The embodiment of the application can be applied to games such as FPS and massively multiplayer online (Massively Multiplayer Online, MMO) games, and can accurately detect and recognize plug-in representations that are rendered and presented in some form; specifically, it can detect and recognize various plug-in representations in a targeted manner, comprehensively judge whether a plug-in is used, and, with the aid of automatic punishment, crack down on plug-in users and workshops.
All the above technical solutions may be combined to form an optional embodiment of the present application, and will not be described in detail herein.
According to the embodiment of the application, a game picture image containing a target object is obtained, the target object in the game picture image is labeled, a training sample set is generated from the labeled game picture images, feature learning training is performed on the target network model based on the training sample set to obtain a trained target network model, and finally whether an image to be detected belongs to a plug-in image is identified based on the trained target network model. By learning and training on the features of the target object, the trained target network model has receptive fields of multiple granularities, which improves the recognition capability of the model; plug-in images can thus be effectively recognized with the trained target network model, improving the accuracy of plug-in image recognition.
In order to facilitate better implementation of the game plug-in identification method of the embodiment of the application, the embodiment of the application also provides a game plug-in identification device. Referring to fig. 6, fig. 6 is a schematic structural diagram of a game plug-in identification device according to an embodiment of the application. Wherein, the game plug-in identification device 300 may include:
an acquisition module 301 for acquiring a game screen image containing a target object;
the preprocessing module 302 is used for labeling a target object in the game picture image;
A generating module 303, configured to generate a training sample set according to the annotated game picture image;
The training module 304 is configured to perform feature learning training on the target network model based on the training sample set, so as to obtain a trained target network model;
The identifying module 305 is configured to identify whether the image to be tested belongs to the plug-in image based on the trained target network model.
Optionally, the preprocessing module 302 is configured to label the real position of the target object and label the plug-in expression category of the target object.
Optionally, the training module 304 includes:
A selecting unit 3041, configured to select any one of the labeled game picture images in the training sample set as a training sample;
The first extraction unit 3042 is configured to input a training sample into the target network model for feature extraction, so as to obtain a target feature map of the training sample;
the first obtaining unit 3043 is configured to obtain, based on a target feature map of a training sample, a position offset corresponding to a target position of a target object in the training sample, a support rate of the target position, and a class probability of an externally hung expression class to which the target object belongs;
The first generating unit 3044 is configured to generate an identification result of the training sample based on a position offset corresponding to a target position of the target object in the training sample, a support rate of the target position, and a class probability of an externally hung expression class to which the target object belongs;
And an optimizing unit 3045, configured to optimize the target network model based on a difference between the recognition result of the training sample and the real result of the training sample, so as to obtain a trained target network model.
Optionally, the optimizing unit 3045 is configured to:
calculating the target position of the target object based on the position offset corresponding to the target position of the target object in the training sample;
and optimizing the target network model according to the mean square error between the target position of the target object and the real position of the target object.
The first extraction unit 3042 specifically is configured to:
Inputting a training sample into a target network model for feature extraction so as to extract a plurality of scale features of different receptive fields in the training sample;
And performing up-sampling treatment on a plurality of scale features of different receptive fields in the training sample, and then performing feature fusion to obtain a target feature map of the training sample.
Optionally, the optimizing unit 3045 is configured to calculate, based on a position offset corresponding to a target position of the target object in the training sample, the target position of the target object, and specifically includes:
calculating the x value of the central coordinate point of the target object in the training sample based on the position offset corresponding to the x value of the central coordinate point of the target position of the target object;
calculating the y value of the central coordinate point of the target object in the training sample based on the position offset corresponding to the y value of the central coordinate point of the target position of the target object;
calculating the width of the target object in the training sample based on the position offset corresponding to the width of the target position of the target object;
Calculating the height of the target object in the training sample based on the position offset corresponding to the height of the target position of the target object;
and determining the target position of the target object according to the x value of the central coordinate point of the target object in the training sample, the y value of the central coordinate point, the width and the height.
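The patent does not fix the decoding formula, so the sketch below uses the common YOLO-style parameterization as an assumed concrete form: the grid-cell coordinates plus a sigmoid of the x/y offsets give the center, and anchor sizes scaled by the exponential of the w/h offsets give the width and height. All numeric inputs are illustrative.

```python
# Minimal sketch of one plausible decoding of position offsets into a box,
# using an assumed YOLO-style parameterization; anchors and offsets are made up.
import math

def decode_box(cell_x, cell_y, dx, dy, dw, dh, anchor_w, anchor_h, grid_size):
    """Turn predicted position offsets into an (x, y, w, h) box in [0, 1]."""
    x = (cell_x + 1.0 / (1.0 + math.exp(-dx))) / grid_size  # center x value
    y = (cell_y + 1.0 / (1.0 + math.exp(-dy))) / grid_size  # center y value
    w = anchor_w * math.exp(dw)                             # width
    h = anchor_h * math.exp(dh)                             # height
    return x, y, w, h

print(decode_box(6, 6, 0.1, -0.2, 0.05, 0.1, 0.2, 0.3, 13))
```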
Optionally, the optimizing unit 3045 is further configured to:
comparing the class probability of the plug-in appearance class to which the target object belongs with the plug-in appearance class to which the target object truly belongs;
and optimizing the target network model according to the comparison result.
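A minimal sketch of this classification comparison, assuming PyTorch and cross-entropy as the concrete measure of the difference (the embodiment only says the model is optimized according to the comparison result; the class count and logits below are made up):

```python
# Minimal sketch, assuming PyTorch and cross-entropy as the comparison measure.
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.1, 0.3, -1.0]], requires_grad=True)  # one detection, 3 classes
true_class = torch.tensor([0])  # plug-in appearance class the object truly belongs to

class_loss = F.cross_entropy(logits, true_class)
class_loss.backward()  # in real training, gradients flow back into the network
print(class_loss.item())
```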
Optionally, the preprocessing module 302 is configured to:
annotating the real position of the target object in the game picture image;
and classifying the plug-in appearance of the target object to determine the plug-in appearance class of the target object.
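For illustration, one annotated game picture image might be recorded as below; the field names, the normalized box format, and the example class are hypothetical stand-ins, not a format defined by this embodiment.

```python
# Minimal sketch of a single annotation record; all field names and the
# example appearance class are illustrative assumptions.
annotation = {
    "image": "frame_000123.png",
    "objects": [
        {
            # real position as normalized center-x, center-y, width, height
            "box": [0.47, 0.52, 0.08, 0.21],
            # plug-in appearance class, e.g. an aim-assist marker drawn on screen
            "appearance_class": "aim_marker",
        },
    ],
}
print(annotation)
```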
The preprocessing module 302 is further configured to perform data enhancement preprocessing on the annotated game picture image based on a data enhancement strategy, so as to generate an extended image;
and the generating module 303 is configured to generate the training sample set according to the annotated game picture image and the extended image.
Optionally, the data enhancement strategy includes at least one of the following:
random resizing of the image, random-position cropping of the image, random color adjustment of the image, Gaussian-filter smoothing of the image, and padding of the image to a specified size.
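Each listed enhancement maps naturally onto a standard torchvision transform, as in the following sketch; the parameters are illustrative assumptions, and for detection data the annotated boxes would have to be transformed consistently with the image.

```python
# Minimal sketch, assuming torchvision; every parameter value is illustrative.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(416, scale=(0.8, 1.0)),  # random resize + random-position crop
    transforms.ColorJitter(brightness=0.2, contrast=0.2,
                           saturation=0.2, hue=0.05),      # random color adjustment
    transforms.GaussianBlur(kernel_size=5),                # Gaussian-filter smoothing
    transforms.Pad(16, fill=0),                            # padding to a specified size
])
# extended_image = augment(game_picture_image)  # PIL.Image in, PIL.Image out;
# note: box annotations must be adjusted to match the geometric transforms.
```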
Optionally, the identification module 305 includes:
a second acquiring unit 3051, configured to acquire an image to be detected;
a second extraction unit 3052, configured to input the image to be detected into the trained target network model for feature extraction, so as to extract a plurality of scale features of different receptive fields in the image to be detected;
a feature fusion unit 3053, configured to perform up-sampling processing on the plurality of scale features of different receptive fields in the image to be detected and then perform feature fusion, so as to obtain a target feature map of the image to be detected;
a third obtaining unit 3054, configured to obtain, based on the target feature map of the image to be detected, a position offset corresponding to a target position of a target object in the image to be detected, a support rate of the target position, and a class probability of the plug-in appearance class to which the target object belongs;
a second generating unit 3055, configured to generate a recognition result of the image to be detected based on the position offset corresponding to the target position of the target object in the image to be detected, the support rate of the target position, and the class probability of the plug-in appearance class to which the target object belongs;
and a judging unit 3056, configured to judge whether the image to be detected belongs to a plug-in image based on the recognition result of the image to be detected.
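A minimal sketch of the judging step follows, assuming PyTorch, a hypothetical support-rate threshold, and a stand-in model whose output format (a list of box / support rate / class triples) is an assumption of the sketch rather than an interface defined by the embodiment.

```python
# Minimal sketch, assuming PyTorch; threshold and output format are illustrative.
import torch
import torch.nn as nn

SUPPORT_THRESHOLD = 0.5  # hypothetical cut-off, not from the patent

class StubModel(nn.Module):
    """Stand-in for the trained target network model; returns fixed detections."""
    def forward(self, image_tensor):
        # assumed output format: list of (box, support rate, class_id)
        return [((0.5, 0.5, 0.1, 0.2), 0.87, 1)]

def is_plugin_image(model, image_tensor):
    """Judge whether the image to be detected belongs to a plug-in image."""
    model.eval()
    with torch.no_grad():
        detections = model(image_tensor)
    return any(support >= SUPPORT_THRESHOLD for _, support, _ in detections)

print(is_plugin_image(StubModel(), torch.zeros(1, 3, 416, 416)))  # True
```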
Optionally, the second acquiring unit 3051 is configured to acquire the image to be detected when a trigger condition is satisfied, where the trigger condition includes at least one of the following:
when the level-up speed of a first game player reaches a preset upgrade speed within a preset time, acquiring a game picture image generated by the game client while the first game player runs the game;
when a second game player is reported, acquiring a game picture image generated by the game client while the second game player runs the game;
when it is detected that the game client started by a third game player was launched by a third-party program, acquiring a game picture image generated by the game client while the third game player runs the game;
and when it is detected that the target code segment of the game client run by a fourth game player differs from a preset standard code segment, acquiring a game picture image generated by the game client while the fourth game player runs the game.
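The four trigger conditions can be checked with a simple predicate, as sketched below; every field name and threshold is a hypothetical stand-in, and the code-segment comparison is reduced to a hash check for brevity.

```python
# Minimal sketch of the trigger check; all fields and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class PlayerState:
    levels_gained: int            # levels gained within the observation window
    was_reported: bool            # reported by other players
    launched_by_third_party: bool # client launched via a third-party program
    code_segment_hash: str        # hash of the running client's target code segment

MAX_LEVELS_PER_WINDOW = 10       # hypothetical preset upgrade speed
STANDARD_CODE_HASH = "ab12cd34"  # hypothetical preset standard code segment hash

def should_capture(state: PlayerState) -> bool:
    """Return True when a game picture image should be captured for inspection."""
    return (state.levels_gained >= MAX_LEVELS_PER_WINDOW
            or state.was_reported
            or state.launched_by_third_party
            or state.code_segment_hash != STANDARD_CODE_HASH)

print(should_capture(PlayerState(12, False, False, STANDARD_CODE_HASH)))  # True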
Optionally, the identification module 305 is further configured to generate a penalty instruction for penalizing the game account corresponding to the image to be detected if the image to be detected is identified as belonging to a plug-in image.
All of the above technical solutions may be combined to form optional embodiments of the present application, which are not described in detail here.
In the game plug-in identification device 300 provided by the embodiment of the present application, the acquisition module 301 acquires a game picture image containing a target object, the preprocessing module 302 annotates the target object in the game picture image, the generation module 303 generates a training sample set from the annotated game picture image, the training module 304 performs feature learning training on the target network model based on the training sample set to obtain a trained target network model, and finally the identification module 305 identifies, based on the trained target network model, whether an image to be detected belongs to a plug-in image. Because the features of the target object are learned during training, the trained target network model has receptive fields of multiple granularities, which improves its recognition capability; the trained model can therefore identify plug-in images effectively, improving the accuracy of plug-in image recognition.
Correspondingly, an embodiment of the present application further provides a computer device, which may be a terminal or a server; the terminal may be a smart phone, a tablet computer, a notebook computer, a touch-screen device, a game console, a personal computer, a personal digital assistant, or the like. Fig. 7 is a schematic structural diagram of the computer device according to an embodiment of the present application. The computer device 400 includes a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, and a computer program stored in the memory 402 and executable on the processor. The processor 401 is electrically connected to the memory 402. Those skilled in the art will appreciate that the structure shown in the figure does not limit the computer device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The processor 401 is the control center of the computer device 400. It connects the various parts of the computer device 400 using various interfaces and lines, and performs the various functions of the computer device 400 and processes its data by running or loading the software programs and/or modules stored in the memory 402 and invoking the data stored in the memory 402, thereby monitoring the computer device 400 as a whole.
In the embodiment of the present application, the processor 401 in the computer device 400 loads the instructions corresponding to the processes of one or more application programs into the memory 402, and then executes the application programs stored in the memory 402 so as to implement the following functions:
acquiring a game picture image containing a target object; annotating the target object in the game picture image; generating a training sample set from the annotated game picture image; performing feature learning training on the target network model based on the training sample set to obtain a trained target network model; and identifying, based on the trained target network model, whether an image to be detected belongs to a plug-in image.
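Tying the five functions together, the following skeleton shows the overall flow; every helper is a trivial stub standing in for the components sketched earlier, and none of the names comes from the embodiment itself.

```python
# Minimal end-to-end sketch; all helpers are hypothetical stubs.
def acquire_game_picture_images():
    return ["frame_001.png"]  # stub: game picture images containing target objects

def annotate_target_objects(image):
    return {"image": image, "objects": []}  # stub: annotation step

def build_training_sample_set(annotated):
    return annotated  # stub: training sample set generation

def train_target_network(samples):
    return lambda image: False  # stub "trained target network model"

def identify_plugin_image(model, image):
    return model(image)  # stub: identification step

samples = build_training_sample_set(
    [annotate_target_objects(i) for i in acquire_game_picture_images()])
model = train_target_network(samples)
print(identify_plugin_image(model, "image_to_detect.png"))  # False
```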
For the specific implementation of each of the above operations, reference may be made to the previous embodiments; details are not repeated here.
Optionally, as shown in Fig. 7, the computer device 400 further includes a touch display 403, a radio frequency circuit 404, an audio circuit 405, an input unit 406, and a power supply 407. The processor 401 is electrically connected to the touch display 403, the radio frequency circuit 404, the audio circuit 405, the input unit 406, and the power supply 407, respectively. Those skilled in the art will appreciate that the computer device structure shown in Fig. 7 does not limit the computer device, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The touch display 403 may be used to display a graphical user interface and to receive operation instructions generated by a user acting on the graphical user interface. The touch display 403 may include a display panel and a touch panel. The display panel may be used to display information entered by or provided to the user, as well as the various graphical user interfaces of the computer device, which may be composed of graphics, text, icons, video, and any combination thereof. Optionally, the display panel may be configured as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch panel may be used to collect touch operations on or near it (such as operations performed on or near the touch panel by the user with a finger, stylus, or any other suitable object or accessory) and to generate corresponding operation instructions that trigger the corresponding programs. Optionally, the touch panel may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, and sends the coordinates to the processor 401; it can also receive and execute commands sent by the processor 401. The touch panel may overlay the display panel; when the touch panel detects a touch operation on or near it, it passes the operation to the processor 401 to determine the category of the touch event, and the processor 401 then provides the corresponding visual output on the display panel according to that category. In the embodiment of the present application, the touch panel and the display panel may be integrated into the touch display 403 to realize the input and output functions; in some embodiments, however, the touch panel and the display panel may be implemented as two separate components to perform the input and output functions. That is, the touch display 403 may also implement an input function as part of the input unit 406.
The radio frequency circuit 404 may be used to transmit and receive radio frequency signals so as to establish wireless communication with a network device or another computer device.
The audio circuit 405 may be used to provide an audio interface between the user and the computer device through a speaker and a microphone. The audio circuit 405 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; conversely, the microphone converts collected sound signals into electrical signals, which the audio circuit 405 receives and converts into audio data. The audio data is processed by the processor 401 and then sent, for example via the radio frequency circuit 404, to another computer device, or output to the memory 402 for further processing. The audio circuit 405 may also include an earphone jack to provide communication between a peripheral earphone and the computer device.
The input unit 406 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint, iris, facial information, etc.), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The power supply 407 is used to power the various components of the computer device 400. Optionally, the power supply 407 may be logically connected to the processor 401 through a power management system, so that charging, discharging, and power consumption are managed through the power management system. The power supply 407 may also include one or more direct-current or alternating-current power supplies, recharging systems, power failure detection circuits, power converters or inverters, power status indicators, and other such components.
Although not shown in Fig. 7, the computer device 400 may further include a camera, a sensor, a wireless fidelity module, a Bluetooth module, and the like, which are not described here.
Each of the foregoing embodiments has its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of the other embodiments.
As can be seen from the above, the computer device provided in this embodiment acquires a game picture image containing a target object, annotates the target object in the game picture image, generates a training sample set from the annotated game picture image, performs feature learning training on the target network model based on the training sample set to obtain a trained target network model, and finally identifies, based on the trained target network model, whether an image to be detected belongs to a plug-in image. Because the features of the target object are learned during training, the trained target network model has receptive fields of multiple granularities, which improves its recognition capability; the trained model can therefore identify plug-in images effectively, improving the accuracy of plug-in image recognition.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods in the above embodiments may be performed by instructions, or by instructions controlling the associated hardware, where the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium storing a plurality of computer programs that can be loaded by a processor to perform the steps of any game plug-in identification method provided by the embodiments of the present application. For example, the computer program may perform the following steps:
acquiring a game picture image containing a target object; annotating the target object in the game picture image; generating a training sample set from the annotated game picture image; performing feature learning training on the target network model based on the training sample set to obtain a trained target network model; and identifying, based on the trained target network model, whether an image to be detected belongs to a plug-in image.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments; details are not repeated here.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
Since the computer program stored in the storage medium can perform the steps of any game plug-in identification method provided by the embodiments of the present application, it can achieve the beneficial effects of any such method; for details, see the foregoing embodiments, which are not repeated here.
The above has described in detail the game plug-in identification method, device, storage medium, and computer device provided by the embodiments of the present application, using specific examples to explain the principles and implementations of the present application; the description of the above embodiments is intended only to help understand the method and its core ideas. Meanwhile, those skilled in the art may make changes to the specific embodiments and the scope of application in light of the ideas of the present application. In summary, the content of this description should not be construed as limiting the present application.

Claims (13)

1. A method for identifying a game plug-in, the method comprising:
Acquiring a game picture image containing a target object;
Annotating the target object in the game picture image, wherein the number of annotated target objects in at least one game picture image is at least one;
Generating a training sample set according to the annotated game picture image;
Selecting any one of the annotated game picture images in the training sample set as a training sample;
Inputting the training sample into a target network model, and extracting an original feature map;
Performing feature extraction on the original feature map through a convolution kernel to obtain a plurality of scale features of different receptive fields in the training sample, performing up-sampling processing on the plurality of scale features of different receptive fields in the training sample, and then performing feature fusion to obtain a target feature map of the training sample; the up-sampling processing is used for adjusting the sizes of the scale features under the different receptive fields to be consistent with the size of the original feature map;
Based on the target feature map of the training sample, acquiring a position offset corresponding to a target position of each target object in the training sample, a support rate of the target position, and a class probability of the plug-in appearance class to which the target object belongs;
Generating a recognition result of each training sample based on the position offset corresponding to the target position of each target object in the training sample, the support rate of the target position, and the class probability of the plug-in appearance class to which the target object belongs;
Optimizing the target network model based on the difference between the recognition result of the training sample and the real result of the training sample to obtain a trained target network model;
and identifying whether the image to be detected belongs to the plug-in image or not based on the trained target network model.
2. The method for identifying a game plug-in of claim 1, wherein the annotating the target object in the game picture image comprises:
annotating the real position of the target object and annotating the plug-in appearance class of the target object.
3. The method of claim 1, wherein optimizing the target network model based on a difference between the recognition result of the training sample and the real result of the training sample to obtain the trained target network model comprises:
calculating the target position of the target object based on the position offset corresponding to the target position of the target object in the training sample;
And optimizing the target network model according to the mean square error between the target position of the target object and the real position of the target object.
4. The method for identifying a game plug-in as claimed in claim 3, wherein calculating the target position of the target object based on the position offset corresponding to the target position of the target object in the training sample comprises:
Calculating the x value of the central coordinate point of the target object in the training sample based on the position offset corresponding to the x value of the central coordinate point of the target position of the target object;
calculating the y value of the central coordinate point of the target object in the training sample based on the position offset corresponding to the y value of the central coordinate point of the target position of the target object;
calculating the width of the target object in the training sample based on the position offset corresponding to the width of the target position of the target object;
Calculating the height of the target object in the training sample based on the position offset corresponding to the height of the target position of the target object;
and determining the target position of the target object according to the x value of the central coordinate point of the target object in the training sample, the y value of the central coordinate point, the width and the height.
5. The method for identifying a game plug-in of claim 3, wherein optimizing the target network model based on a difference between the identification result of the training sample and the actual result of the training sample to obtain a trained target network model further comprises:
comparing the class probability of the plug-in appearance class to which the target object belongs with the plug-in appearance class to which the target object truly belongs;
and optimizing the target network model according to the comparison result.
6. The method of claim 1, further comprising, prior to the generating a training sample set according to the annotated game picture image:
performing data enhancement preprocessing on the annotated game picture image based on a data enhancement strategy to generate an extended image;
wherein the generating a training sample set according to the annotated game picture image comprises:
generating the training sample set according to the annotated game picture image and the extended image.
7. The game plug-in identification method of claim 6, wherein the data enhancement strategy comprises at least one of:
random resizing of the image, random-position cropping of the image, random color adjustment of the image, Gaussian-filter smoothing of the image, and padding of the image to a specified size.
8. The method for identifying a game plug-in according to claim 1, wherein the identifying whether the image to be detected belongs to the plug-in image based on the trained target network model comprises:
Acquiring an image to be detected;
Inputting the image to be detected into the trained target network model for feature extraction, so as to extract a plurality of scale features of different receptive fields in the image to be detected;
performing up-sampling processing on the plurality of scale features of different receptive fields in the image to be detected, and then performing feature fusion to obtain a target feature map of the image to be detected;
based on the target feature map of the image to be detected, acquiring a position offset corresponding to a target position of a target object in the image to be detected, a support rate of the target position, and a class probability of the plug-in appearance class to which the target object belongs;
generating a recognition result of the image to be detected based on the position offset corresponding to the target position of the target object in the image to be detected, the support rate of the target position, and the class probability of the plug-in appearance class to which the target object belongs;
and judging whether the image to be detected belongs to a plug-in image based on the recognition result of the image to be detected.
9. The method for identifying a game plug-in of claim 8, wherein the acquiring an image to be detected comprises:
acquiring the image to be detected when a trigger condition is met, wherein the trigger condition comprises at least one of the following:
when the level-up speed of a first game player reaches a preset upgrade speed within a preset time, acquiring a game picture image generated by the game client while the first game player runs the game;
when a second game player is reported, acquiring a game picture image generated by the game client while the second game player runs the game;
when it is detected that the game client started by a third game player was launched by a third-party program, acquiring a game picture image generated by the game client while the third game player runs the game;
and when it is detected that the target code segment of the game client run by a fourth game player differs from a preset standard code segment, acquiring a game picture image generated by the game client while the fourth game player runs the game.
10. The method of game plug-in identification of claim 1, further comprising:
if the image to be detected is identified as belonging to the plug-in image, generating a penalty instruction for penalizing the game account corresponding to the image to be detected.
11. A game plug-in identification device, the device comprising:
the acquisition module, configured to acquire a game picture image containing a target object;
the preprocessing module, configured to annotate the target object in the game picture image, wherein the number of annotated target objects in at least one game picture image is at least one;
the generation module, configured to generate a training sample set according to the annotated game picture image;
the training module, configured to select any one of the annotated game picture images in the training sample set as a training sample;
the training module is further configured to input the training sample into a target network model and extract an original feature map;
the training module is further configured to perform feature extraction on the original feature map through a convolution kernel to obtain a plurality of scale features of different receptive fields in the training sample, perform up-sampling processing on the plurality of scale features of different receptive fields in the training sample, and then perform feature fusion to obtain a target feature map of the training sample; the up-sampling processing is used for adjusting the sizes of the scale features under the different receptive fields to be consistent with the size of the original feature map;
the training module is further configured to acquire, based on the target feature map of the training sample, a position offset corresponding to a target position of each target object in the training sample, a support rate of the target position, and a class probability of the plug-in appearance class to which the target object belongs;
the training module is further configured to generate a recognition result of each training sample based on the position offset corresponding to the target position of each target object in the training sample, the support rate of the target position, and the class probability of the plug-in appearance class to which the target object belongs;
The training module is further configured to optimize the target network model based on a difference between the recognition result of the training sample and the real result of the training sample, so as to obtain a trained target network model;
and the identification module, configured to identify whether an image to be detected belongs to a plug-in image based on the trained target network model.
12. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which is adapted to be loaded by a processor for performing the steps of the game plug-in identification method according to any of claims 1-10.
13. A computer device, characterized in that it comprises a processor and a memory, in which a computer program is stored, the processor being arranged to perform the steps of the game plug-in identification method according to any of claims 1-10 by invoking the computer program stored in the memory.
CN202011166918.5A 2020-10-27 2020-10-27 Game plug-in identification method and device, storage medium and computer equipment Active CN112206541B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011166918.5A CN112206541B (en) 2020-10-27 2020-10-27 Game plug-in identification method and device, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN112206541A CN112206541A (en) 2021-01-12
CN112206541B true CN112206541B (en) 2024-06-14

Family

ID=74057123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011166918.5A Active CN112206541B (en) 2020-10-27 2020-10-27 Game plug-in identification method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN112206541B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113289346A (en) * 2021-05-21 2021-08-24 网易(杭州)网络有限公司 Task model training method and device, electronic equipment and storage medium
CN113546398A (en) * 2021-07-30 2021-10-26 重庆五诶科技有限公司 Chess and card game method and system based on artificial intelligence algorithm
CN115661585B (en) * 2022-12-07 2023-03-10 腾讯科技(深圳)有限公司 Image recognition method and related device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110812845A (en) * 2019-10-31 2020-02-21 腾讯科技(深圳)有限公司 Plug-in detection method, plug-in recognition model training method and related device
CN111368712A (en) * 2020-03-02 2020-07-03 四川九洲电器集团有限责任公司 Hyperspectral image disguised target detection method based on deep learning
CN111803956A (en) * 2020-07-22 2020-10-23 网易(杭州)网络有限公司 Method and device for determining game plug-in behavior, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102173592B1 (en) * 2018-10-16 2020-11-03 주식회사 카카오게임즈 Method for detecting abnormal game play
CN109886286B (en) * 2019-01-03 2021-07-23 武汉精测电子集团股份有限公司 Target detection method based on cascade detector, target detection model and system
CN110414432B (en) * 2019-07-29 2023-05-16 腾讯科技(深圳)有限公司 Training method of object recognition model, object recognition method and corresponding device
CN111228821B (en) * 2020-01-15 2022-02-01 腾讯科技(深圳)有限公司 Method, device and equipment for intelligently detecting wall-penetrating plug-in and storage medium thereof
CN111389013B (en) * 2020-03-19 2023-08-22 网易(杭州)网络有限公司 Automatic hanging detection method, device, equipment and storage medium in game

Also Published As

Publication number Publication date
CN112206541A (en) 2021-01-12

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant