CN108629180B - Abnormal operation determination method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN108629180B
CN108629180B CN201810272744.7A
Authority
CN
China
Prior art keywords
target
image
abnormal operation
network
participating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810272744.7A
Other languages
Chinese (zh)
Other versions
CN108629180A (en)
Inventor
晁阳
陆遥
李东
孙广元
卫然
郑滔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810272744.7A priority Critical patent/CN108629180B/en
Publication of CN108629180A publication Critical patent/CN108629180A/en
Application granted granted Critical
Publication of CN108629180B publication Critical patent/CN108629180B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/554Detecting local intrusion or implementing counter-measures involving event detection and direct action
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/70Game security or game management aspects
    • A63F13/75Enforcing rules, e.g. detecting foul play or generating lists of cheating players
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/04Processing captured monitoring data, e.g. for logfile generation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/04Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/10Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • H04L63/105Multiple levels of security
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/131Protocols for games, networked simulations or virtual reality

Abstract

The invention discloses a method and a device for determining abnormal operation, a storage medium and an electronic device. Wherein, the method comprises the following steps: acquiring a target image, wherein the target image is an image of a virtual scene displayed on a target client, and the target client is used for controlling a first object participating in a target task in the virtual scene; identifying a second object and a third object in the target image, wherein the second object is an object which participates in the target task in the virtual scene and is allowed to be attacked by the first object, and the third object is used for preventing the second object from appearing in the visual range of the first object in the virtual scene; and determining whether the first object has abnormal operation in the process of participating in the target task based on the position relation between the second object and the third object. The invention solves the technical problem that abnormal behaviors in the virtual scene cannot be detected in the related technology.

Description

Abnormal operation determination method and device, storage medium and electronic device
Technical Field
The invention relates to the field of Internet, in particular to a method and a device for determining abnormal operation, a storage medium and an electronic device.
Background
The network game is an online game that uses the Internet as a transmission medium to realize entertainment, leisure, communication and virtual achievement. A player logs in to the game by running a client program, the game provider provides a virtual game scene, and players can perform game operations in the virtual scene relatively freely and openly, where "relatively freely and openly" means that the operations performed are allowed by the game rules. Plug-ins refer to cheating programs, created with computer technology by altering part of the programs of one or more online games, that let players perform operations not allowed by the game rules (i.e. abnormal operations) in the virtual scene; such plug-ins seriously damage the authenticity, accuracy and fairness of game data.
Faced with such cheating, the traditional anti-cheating method cannot directly detect the abnormal operation; instead, network transmission data encryption is adopted, which encrypts game data during transmission. However, to keep the game running smoothly, a complex encryption algorithm with high security is not suitable for this data encryption, so the encrypted data is easy to analyze and forge; moreover, the client of a network game may be decompiled. The network-transmission-encryption approach is therefore easy to crack.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a method and a device for determining abnormal operation, a storage medium and an electronic device, which are used for at least solving the technical problem that abnormal behaviors in a virtual scene cannot be detected in the related technology.
According to an aspect of an embodiment of the present invention, there is provided a method of determining an abnormal operation, including: acquiring a target image, wherein the target image is an image of a virtual scene displayed on a target client, and the target client is used for controlling a first object participating in a target task in the virtual scene; identifying a second object and a third object in the target image, wherein the second object is an object which participates in the target task in the virtual scene and is allowed to be attacked by the first object, and the third object is used for preventing the second object from appearing in the visual range of the first object in the virtual scene; and determining whether the first object has abnormal operation in the process of participating in the target task based on the position relation between the second object and the third object.
According to another aspect of the embodiments of the present invention, there is also provided an abnormal operation determination apparatus, including: the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a target image, the target image is an image of a virtual scene displayed on a target client, and the target client is used for controlling a first object participating in a target task in the virtual scene; the identification unit is used for identifying a second object and a third object in the target image, wherein the second object is an object which participates in the target task in the virtual scene and is allowed to be attacked by the first object, and the third object is used for preventing the second object from appearing in the visible range of the first object in the virtual scene; and the determining unit is used for determining whether the first object has abnormal operation in the process of participating in the target task or not based on the position relation between the second object and the third object.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program which, when executed, performs the above-described method.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the above method through the computer program.
In the embodiment of the invention, when the first object is required to be judged whether abnormal operation exists in the process of participating in the target task, a target image is obtained, the target image is an image of a virtual scene displayed on a target client, and the target client is used for controlling the first object participating in the target task in the virtual scene; identifying a second object and a third object in the target image, wherein the second object is an object which participates in the target task in the virtual scene and is allowed to be attacked by the first object, and the third object is used for preventing the second object from appearing in the visual range of the first object in the virtual scene; whether abnormal operation exists in the first object in the process of participating in the target task is determined based on the position relation between the second object and the third object, the technical problem that abnormal behaviors in the virtual scene cannot be detected in the related technology can be solved, and the technical effect of detecting the abnormal behaviors in the virtual scene is achieved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of a hardware environment for a method of determining abnormal operation according to an embodiment of the present invention;
FIG. 2 is a flow chart of an alternative method of determining abnormal operation in accordance with embodiments of the present invention;
FIG. 3 is a schematic diagram of an alternative virtual scene according to an embodiment of the invention;
FIG. 4 is a schematic illustration of an alternative positive sample image according to an embodiment of the invention;
FIG. 5 is a schematic illustration of an alternative negative example image according to an embodiment of the invention;
FIG. 6 is a schematic diagram of an alternative neural network model in accordance with embodiments of the present invention;
FIG. 7 is a schematic illustration of an alternative game image according to an embodiment of the present invention;
FIG. 8 is a flow chart of an alternative method of determining abnormal operation in accordance with embodiments of the present invention;
FIG. 9 is a schematic diagram of an alternative abnormal operation determination apparatus according to an embodiment of the present invention;
and FIG. 10 is a block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, partial terms or terms appearing in the description of the embodiments of the present invention are applied to the following explanations:
3D game: the three-dimensional electronic game based on three-dimensional computer graphics, including but not limited to a multi-player online network 3D game and a single 3D game for playing games, can be realized based on a virtual reality game system established by a 3D game system, has universal applicable attributes to platforms, and for example, 3D games in a game host platform, a mobile phone game platform and a PC (personal computer) end game platform all belong to 3D games.
First-person shooter game (FPS): a branch of the action game (ACT). As the name suggests, it is a shooting game played from the player's subjective, first-person perspective.
According to an aspect of embodiments of the present invention, a method embodiment of a method of determining abnormal operation is provided.
Alternatively, in the present embodiment, the above method of determining abnormal operation may be applied to a hardware environment constituted by the server 101 and the terminal 103 as shown in fig. 1. As shown in fig. 1, the server 101 is connected to the terminal 103 through a network and may provide services (such as game services and application services) for the terminal or a client installed on it; a database 105 may be provided on the server, or separately from it, to provide data storage services for the server 101. The network includes but is not limited to a wide area network, a metropolitan area network, or a local area network, and the terminal 103 is not limited to a PC, a mobile phone, a tablet computer, etc. Fig. 2 is a flowchart of an alternative method for determining abnormal operation according to an embodiment of the present invention, and as shown in fig. 2, the method may include the following steps:
step S202, when it is necessary to determine whether the first object has abnormal operation in the process of participating in the target task, the server may obtain a target image, where the target image is an image of a virtual scene displayed on the target client, and the target client is used to control the first object participating in the target task in the virtual scene.
The target client is a client of a target application, and the virtual scene is a virtual scene provided by the target application. The first, second and third objects are objects in the virtual scene: for example, objects in a medical virtual scene (such as human body structures and disease features); game characters, pets and scene objects (such as animals, plants, obstacles and buildings) in the virtual scene of a game application; military personnel and military equipment in the virtual scene of a military simulation; objects in the virtual scene of an industrial simulation; and the like.
The abnormal operation is defined relative to normal operation, where normal operation is an operation allowed by the target application in the virtual scene. Taking a game application as an example, under normal operation a game character can only see other characters within a certain visual range (e.g. 200 meters); when an obstacle is present, the character's line of sight to the far side of the obstacle is blocked; the ammunition of a shooting weapon is limited; and the character cannot pass through obstacles such as buildings and trees. Under abnormal operation, by contrast, the game character can see other characters beyond the visual range, can still see content on the far side of an obstacle, has unlimited ammunition for shooting weapons, or can pass through obstacles such as buildings and trees.
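As a minimal illustration of the kind of rule that separates normal from abnormal visibility, the following sketch checks the 200-meter visual range used as an example above; the coordinate representation and function name are illustrative assumptions, not part of the patent.

```python
import math

def within_visual_range(first_obj_pos, other_pos, max_range=200.0):
    """True if `other_pos` lies inside the first object's visual range.

    Positions are hypothetical (x, y) world coordinates; 200.0 mirrors
    the example visual range given in the description.
    """
    return math.dist(first_obj_pos, other_pos) <= max_range
```

A character reported as visible well beyond this range would, under the rules above, indicate an abnormal operation.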
In step S204, the server identifies a second object and a third object in the target image, where the second object is an object participating in the target task in the virtual scene and allowed to be attacked by the first object, and the third object is used to prevent the second object from appearing in the visual range of the first object in the virtual scene.
The target task is a task executed in the virtual scene, such as a game task in a game scene, an operation task in a medical virtual scene, a military task in a military virtual scene, or an industrial simulation task in an industrial simulation virtual scene.
In step S206, the server determines whether the first object has abnormal operation in the process of participating in the target task based on the position relationship between the second object and the third object.
As shown in fig. 3, the second object 302 may be a non-player character (NPC) or a player character in the virtual scene. The third object 303 serves to prevent the second object 302 from appearing within the visual range of the first object 301 in the virtual scene; that is, under the normal operation specified by the target application, the third object 303 blocks the first object's line of sight. In fig. 3, the third object 303 divides the space into two parts: one part is the space 304 on its far side relative to the first object (to the right of the third object in fig. 3), whose right boundary surface may extend without limit to the right; this space is invisible to the first object because it is blocked by the third object. The other part is the space outside it.
When the second object is occluded by the third object, it should not appear within the visible range of the first object, i.e. in the target image. Therefore, for an occluded second object, whether the first object performed an abnormal operation while participating in the target task can be determined from the positional relationship between the second object and the third object; in other words, if a second object occluded by the third object nevertheless appears within the first object's visible range, an abnormal operation is considered to exist.
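A hedged sketch of this positional-relationship test, assuming the objects have already been detected as axis-aligned bounding boxes in image coordinates (the box format and function names are illustrative, not from the patent):

```python
def box_inside(inner, outer):
    """True if box `inner` lies entirely within box `outer`; boxes are
    (x1, y1, x2, y2) in image coordinates."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def suggests_abnormal(second_obj_box, third_obj_box):
    """If the enemy (second object) is detected entirely inside the image
    region of the occluding obstacle (third object), it should have been
    hidden -- its visibility in the captured frame suggests a cheat."""
    return box_inside(second_obj_box, third_obj_box)
```

In practice this rule alone over-flags legitimate cases (windows, roofs), which is exactly why the description later introduces semantic segmentation and rule filters.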
In the above embodiments, the control method of the object of the embodiments of the present invention is described as an example executed by the server 101, but the method of the present invention may be executed by the terminal 103, or may be executed by both the server 101 and the terminal 103. The terminal 103 may execute the control method of the object according to the embodiment of the present invention by a client installed thereon.
Through the above steps S202 to S206, when it is necessary to determine whether the first object has an abnormal operation in the process of participating in the target task, the target image is obtained, the target image is an image of a virtual scene displayed on the target client, the second object and the third object are identified in the target image, and it is determined whether the first object has an abnormal operation in the process of participating in the target task based on the position relationship between the second object and the third object.
In the technical solution provided in step S202, when it is necessary to determine whether there is an abnormal operation in the process of participating in the target task, the target image may be acquired.
In an alternative embodiment, screen-capture logic may be embedded in the client in advance; during operation, the client captures the image of the virtual scene displayed on the target client (such as a game image) and sends it to the server.
In the technical solution provided in step S204, the scheme of identifying the second object and the third object in the target image may be implemented as follows: and identifying a second object and a third object in the target image through the target model, wherein the target model is a convolutional neural network model used for identifying the second object and the third object in the target image through feature matching, the second object is an object which participates in a target task in the virtual scene and is allowed to be attacked by the first object, and the third object is used for preventing the second object from appearing in the visual range of the first object in the virtual scene.
(1) With respect to training of target models
Before the second object and the third object in the target image are identified through the target model, the target model may be trained in advance:
step 1, data marking is performed to obtain a positive sample set and a negative sample set, the training images included in the positive sample set are images with abnormal operation, for example, the images with the second object 302 on the wall surface shown in fig. 4, the adopted marks are "abnormal operation", the training images included in the negative sample set are images with normal operation, for example, the images without the second object 302 on the wall surface shown in fig. 5, and the adopted marks are "normal operation".
For example, 30,000 (3W) first-person images of an FPS game are subjected to a series of pre-processing steps, including but not limited to one or more of angle correction, lighting normalization, scale normalization, etc., to produce standardized first-person-view images; an image size of 256 x 256 is selected (including edge margins), and a 224 x 224 region (the actual input size) is then cropped from it.
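The resize-then-crop arithmetic implied by the 256/224 sizes can be sketched as follows; the function and its return convention are illustrative assumptions, not the patent's implementation:

```python
def preprocess_dims(width, height, short_side=256, crop=224):
    """Scale so the shortest edge becomes `short_side`, then return the
    scaled size and the centre-crop box (left, top, right, bottom)."""
    scale = short_side / min(width, height)
    w, h = round(width * scale), round(height * scale)
    left, top = (w - crop) // 2, (h - crop) // 2
    return (w, h), (left, top, left + crop, top + crop)
```

For a 1920 x 1080 screenshot this yields a 455 x 256 scaled image and a centred 224 x 224 crop box.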
Step 2: train an original model (an initial convolutional neural network model) with the positive sample set and the negative sample set so as to initialize the weight parameters of each node in the original model, obtaining an intermediate model. For example, the input to the network is a 224 x 224 image: the scaling described above is maintained, the smallest edge is scaled to match the input size of the network, and a portion of the image is then cropped from the middle.
Step 3: test the intermediate model with verification data (unlabeled images). If the test accuracy reaches a certain threshold (e.g. 90% or 99%), the model is considered trained and the intermediate model is taken as the final target model; otherwise the intermediate model is considered under-fitted and training continues, while guarding against over-fitting.
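The accuracy-threshold stopping rule of Step 3 can be sketched as a simple loop; `eval_fn` is a hypothetical callback that trains one round and returns the resulting validation accuracy:

```python
def train_until_accurate(eval_fn, max_rounds, threshold=0.99):
    """Run training rounds until validation accuracy reaches `threshold`.

    Returns (round, accuracy) once the model counts as trained, or
    (None, last_accuracy) if it is still under-fitted after `max_rounds`.
    """
    acc = 0.0
    for r in range(1, max_rounds + 1):
        acc = eval_fn(r)            # accuracy after training round r
        if acc >= threshold:
            return r, acc           # model considered trained
    return None, acc                # under-fitted: continue training
```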
(2) Use of object model
The above method of the present application can be divided into three software modules; their functions are described one by one below.
Software module I, image preprocessing module
When the second object and the third object in the target image are identified through the target model, preprocessing of the target image can be realized by the image preprocessing module, so that the preprocessed target image is used as the input of the target model; the preprocessing includes scaling the length of the target image to a first threshold and its width to a second threshold.
Software module II, object recognition and semantic segmentation module
Alternatively, the target recognition and semantic segmentation module may be implemented with the Faster-RCNN and FCN algorithms.
When the second object and the third object in the target image are identified through the target model, the target recognition and semantic segmentation module may be configured to match image features of image regions in the target image against the target features, where the target features include the features of a first class of objects and of a second class of objects learned by the target model; the first class of objects includes the first object and the second object, and the second class of objects includes the third object.
Optionally, the target model includes a first network (e.g. a Faster-RCNN network) and a second network (e.g. an FCN network). When the image features of image regions in the target image are matched against the target features through the target model, a first region can be identified in the target image through the first network, where the first region is the image region in which a second object in the target image is located; and a target area can be identified in the target image through the second network, where the target area is a first region in which the second network also identifies a third object, or an image region identified from the target image in which both the second object and the third object appear.
After the first network identifies the first region in the target image, a matching result can be determined through the target model; the matching result indicates the target region in the target image, namely the image region in which the target model, by matching the image features against the target features, has determined that the second object and the third object are located.
In an alternative embodiment, the software module may include three sub-modules: an RPN sub-module, a Fast-RCNN sub-module, and a semantic segmentation sub-module FCN. Fast-RCNN is an improved target detection method based on RCNN, where RCNN stands for Regions with CNN features; CNN stands for Convolutional Neural Network, a deep learning method used for target detection; FCN stands for Fully Convolutional Network, a convolutional network used for semantic segmentation; and RPN stands for Region Proposal Network, a fully convolutional network for regional target detection. These sub-modules are described one by one below.
RPN submodule:
This module mainly detects the region position of a person (i.e. the second object) in an image (e.g. a scene image of a building in a game). Taking the first-person-shooter view as an example, when a player (the first object) uses a see-through cheat (a plug-in common in FPS games that can see through buildings to shoot), the player can directly see the position of an enemy (the second object) through the building (the third object). The RPN module, whose structure is shown in fig. 6 (comprising a feature map, an intermediate layer, a classification layer, a bounding-box regression layer, etc.), determines which objects exist in the see-through picture and then locates their positions. The module's input is one frame of the first-view target image, on which it performs object detection (OD); the output of the classification layer is a set of rectangular object proposals (also called candidate boxes, i.e. the first regions), each with a confidence score. The RPN is a fully convolutional network (FCN) that slides a miniature-network scanning window over the feature map output by the last shared convolutional layer (each convolution kernel extracts one feature map, so there are as many distinct feature maps as there are distinct kernels); using a convolutional network several (e.g. 9) layers deep, the RPN obtains the feature map and the first regions associated with it (e.g. candidate box 1 to candidate box K on the right).
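At each sliding-window position the RPN scores a small set of anchor boxes; the following sketch generates the conventional 3 scales x 3 aspect ratios = 9 anchors per position (the scale and ratio values are typical Faster-RCNN defaults, assumed here rather than taken from the patent):

```python
import math

def make_anchors(cx, cy, scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Return the 9 anchor boxes (x1, y1, x2, y2) centred at (cx, cy).

    Each anchor keeps the area of its scale (w * h == s * s) while the
    width/height proportion follows `ratios`.
    """
    anchors = []
    for s in scales:
        for r in ratios:
            w = s * math.sqrt(r)    # width grows with the ratio
            h = s / math.sqrt(r)    # height shrinks so area stays s * s
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors
```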
Fast-RCNN submodule:
This module further optimizes the RPN result. Although the RPN obtains the position of the second object (e.g. an opposing player), the application scenario requires high-precision analysis. With an attention mechanism applied to the first regions proposed by the RPN, the Fast-RCNN module focuses on those important regions. Since the aim is to detect people inside buildings, when the neural network is trained a binary label is set for each candidate box (anchor) to indicate whether it contains an object; RPN and Fast-RCNN are then fused into one network for training: in the forward pass the RPN produces a number of fixed first regions on which the Fast-RCNN detector is trained, and in the backward pass the RPN loss and the Fast R-CNN loss are combined. Because the RPN detects small objects poorly, and in real scenes many people (second objects) crouch in corners rather than standing fully visible, and because a person appearing in a picture of a building does not necessarily indicate a see-through cheat (the person may simply be at a window, behind glass, or on a roof), a semantic segmentation module can be introduced to determine the environment around the detected players.
Semantic partitioning submodule FCN:
this module identifies and separates crouching players from the surrounding buildings. It uses FCN, a semantic segmentation network, combined with the actual game scene in a novel way. FCN produces pixel-level recognition: every pixel of the input image receives a corresponding judgment label at the output, and the final image is obtained by applying deconvolution to a convolutional layer. Because the precision of FCN alone is insufficient, and different filters yield different output data, a number of (e.g., 100) common character rule filters can be designed; these filters can distinguish whether a second object overlapping a third object corresponds to normal or abnormal operation. Applying the designed rule templates after the FCN output layer resolves the cases that FCN alone cannot decide and achieves a good recognition effect. Once that problem is solved, the objects around a person can be detected: semantic segmentation is used to define many classes (e.g., 1000 classes) for the convolutional neural network CNN, sampling is then performed, and finally an image set of two major classes and several minor classes (e.g., 5 classes) is generated. The two major classes are images in which the person overlaps an obstacle such as a building and images in which the person does not; the former are further divided into minor classes, for example the person in a window, on a roof, beside a chimney, or in an open doorway. Images in which the person is not inside a building can be considered free of the perspective cheat, while images in which the person is inside a building are considered to show the cheat, except for the legitimate cases of the person being in a window, on a roof, beside a chimney, or in an open
doorway, as shown in FIG. 7 (the white area in FIG. 7 is the person).
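The rule filtering applied after the segmentation output can be sketched as follows. This is a minimal illustration, not the application's actual filters: the label values, grids, and function names are hypothetical, and the "rule" is reduced to a single bounding-box overlap test.

```python
# Hypothetical sketch of one rule "filter" over an FCN-style pixel label map.
# Illustrative label values: 0 = background, 1 = player, 2 = building.
# A perspective cheat renders the player *through* a building, so player
# pixels that fall inside the building's region are flagged as suspicious.

def building_bbox(label_map):
    """Bounding box (min_r, min_c, max_r, max_c) of building pixels, or None."""
    cells = [(r, c) for r, row in enumerate(label_map)
             for c, v in enumerate(row) if v == 2]
    if not cells:
        return None
    rs = [r for r, _ in cells]
    cs = [c for _, c in cells]
    return min(rs), min(cs), max(rs), max(cs)

def overlap_rule_filter(label_map):
    """True if any player pixel falls inside the building's bounding box."""
    box = building_bbox(label_map)
    if box is None:
        return False
    r0, c0, r1, c1 = box
    return any(v == 1 and r0 <= r <= r1 and c0 <= c <= c1
               for r, row in enumerate(label_map)
               for c, v in enumerate(row))

# Player pixel rendered inside the building region -> suspicious overlap.
suspicious = [[2, 2, 2],
              [2, 1, 2],
              [2, 2, 2]]
# Player standing beside the building -> normal.
normal = [[2, 2, 0],
          [2, 2, 0],
          [0, 0, 1]]
```

A real filter bank would contain many such templates, including ones that whitelist the legitimate window/roof/chimney/doorway overlaps described above.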
In this embodiment, the present application innovatively combines Faster-RCNN (a highly efficient target detection method) with the FCN: an RPN network in Faster-RCNN is trained to obtain the target region (region proposal) of an image, a Fast-RCNN module in Faster-RCNN then performs classification detection, and FCN semantic segmentation yields multiple classes (e.g., 5 classes) of images; it is then determined whether a second object (e.g., a player character) appears in the image and what the positional relationship is between the second object and a third object (e.g., a building) in the image.
In the technical solution provided in step S206, it is determined whether the first object has an abnormal operation in the process of participating in the target task based on the positional relationship between the second object and the third object.
The technical solution of step S206 may be implemented by software module three (i.e., the multi-classification module, also referred to below as the third network and the fourth network). An optional multi-classification module may be implemented with the Long Short-Term Memory (LSTM) algorithm, a recurrent neural network suited to processing and predicting events with relatively long intervals and delays in a time sequence, which is described in detail below.
The determination of whether the first object has abnormal operation in the process of participating in the target task based on the position relationship between the second object and the third object mainly comprises the following two aspects:
first, when the positional relationship between the second object and the third object belongs to the target-type positional relationship, it is determined that the first object has an abnormal operation while participating in the target task, where the target-type positional relationship is the positional relationship, extracted from the first image, between an object with abnormal operation and the third object;
and second, when the positional relationship between the second object and the third object does not belong to the target-type positional relationship, it is determined that the first object has no abnormal operation while participating in the target task.
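The two branches above amount to a set-membership test on the observed positional relationship. A minimal sketch (the relation labels are illustrative, not taken from the application; in practice they would come from the semantic analysis described below):

```python
# Relations learned from samples of cheating behavior (illustrative names).
TARGET_TYPE_RELATIONS = {"overlaid_on_obstacle"}

def has_abnormal_operation(relation):
    """Abnormal iff the observed relation belongs to the target type."""
    return relation in TARGET_TYPE_RELATIONS
```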
The above target image may be a single image, in which case the identification of the abnormal behavior may be implemented through a third network (i.e., the multi-classification module) of the target model. Optionally, determining whether the first object has an abnormal operation while participating in the target task based on the positional relationship between the second object and the third object may include steps 1 to 3 as follows:
step 1, identifying, through the third network, a first positional relationship between the second object and the third object in the target area of the target image, and searching a relationship set for a second positional relationship, where the relationship set stores positional relationships belonging to the target type, the second positional relationship is a relationship in the set identical to the first positional relationship, and the third network identifies the first positional relationship between the second object and the third object through semantic analysis.
It should be noted that an image is composed of many pixels, and semantic analysis, as the name implies, groups or divides those pixels according to the semantic meaning they express; the machine automatically segments and identifies the content in the image. For example, given a photo of a person riding a motorcycle, the machine should be able to produce an annotated image after judging it, marking the person in red and the vehicle in green. In the present application, this amounts to analyzing which part of the image is the second object and which part is the third object.
As shown in fig. 3, in the game logic, the third object 303, acting as an obstacle, is located between the first object 301 (e.g., the current player) and the second object 302 (a competing player of the current player), which is inside the space 304, so the current player should not see the competing player. When the competing player nonetheless appears in the current player's game screen (i.e., the target screen), as if seen through the obstacle, abnormal behavior is indicated, and the positional relationship between the second object and the third object (i.e., the second positional relationship, which indicates that the second object is seen through and superimposed on the third object) belongs to the positional relationship caused by abnormal operation.
If the second object 302 is not inside the space 304 and is within the first object's visible distance in the game logic, the second object is legitimately visible to the first object; the second object 302 is then not projected onto the third object 303, i.e., does not overlap it, the positional relationship between the second object and the third object is normal, and no corresponding relationship is found in the relationship set.
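The game-logic visibility test described here — the second object is legitimately visible only when it is within viewing distance and the obstacle does not block the line of sight — can be sketched as follows. This assumes a simplified 2D scene with an axis-aligned box as the obstacle; all geometry, sampling, and names are hypothetical simplifications, not the engine's actual visibility computation.

```python
def segment_blocked_by_box(p, q, box, samples=100):
    """Crude line-of-sight test: sample points on segment p->q and check
    whether any falls inside the axis-aligned box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    for i in range(samples + 1):
        t = i / samples
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if x0 <= x <= x1 and y0 <= y <= y1:
            return True
    return False

def visible(first, second, box, view_distance):
    """Second object is visible iff in range and not occluded by the box."""
    dist = ((first[0] - second[0]) ** 2 + (first[1] - second[1]) ** 2) ** 0.5
    return dist <= view_distance and not segment_blocked_by_box(first, second, box)
```

When `visible` is False but the second object still appears in the first object's screen, the rendered overlap corresponds to the target-type positional relationship.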
Step 2, when the second positional relationship is found in the relationship set, determining that the first object has an abnormal operation while participating in the target task.
Step 3, when the second positional relationship is not found in the relationship set, determining that the first object has no abnormal operation while participating in the target task.
When the target image includes M second images, in the first scheme above it may be determined that the first object has an abnormal operation while participating in the target task when the positional relationship between the second object and the third object belongs to the target-type positional relationship in N of the second images, where the integer N is not greater than the integer M; conversely, when no second image has a positional relationship between the second object and the third object belonging to the target type, or when the number of such second images does not reach N, it is determined that the first object has no abnormal operation while participating in the target task.
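The N-of-M rule above can be sketched in a few lines. This is a minimal illustration: the real scheme operates on per-frame model outputs rather than ready-made booleans.

```python
def abnormal_over_frames(frame_flags, n):
    """frame_flags: one bool per second image, True if the positional relation
    in that frame belongs to the target type. Abnormal iff at least n frames hit."""
    return sum(frame_flags) >= n
```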
Optionally, the target image may also be multiple images. In that case, whether the first object has an abnormal operation while participating in the target task may be determined through a fourth network in the target model: when the second positional relationship is found in the relationship set, the confidence that the positional relationship between the second object and the third object of each second image, as identified by the third network, is the first positional relationship is used as the input of the fourth network, and the fourth network determines that the first object has an abnormal operation while participating in the target task when the positional relationship between the second object and the third object belongs to the target type in N of the second images.
Alternatively, when the target image is multiple images, in addition to counting the second images with abnormal operation, a corresponding weight (indicating the probability of abnormal operation) may be configured for each second image. A multi-classification image label set is obtained from the weight value of the target area and the semantic segmentation points in each second image and input into the LSTM network; exploiting the recurrent network's ability to model consecutive frames, a first-person perspective sequence is obtained, and the softmax algorithm performs multi-classification over the weighted multi-classification labels to produce the final classification result, determining whether the player has an abnormal operation (for example, uses a perspective cheat).
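The weighted multi-classification step can be illustrated without the LSTM itself. Assuming per-frame class score vectors and per-frame weights (both hypothetical inputs), a weighted aggregation followed by softmax yields the final class distribution; note this is a deliberate simplification in which the double-layer LSTM is replaced by a plain weighted sum.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify_sequence(per_frame_scores, weights):
    """per_frame_scores: one per-class score vector per frame;
    weights: per-frame weight (probability the frame shows abnormal operation).
    Aggregates weighted scores across frames, then softmax -> class distribution."""
    n_classes = len(per_frame_scores[0])
    agg = [0.0] * n_classes
    for scores, w in zip(per_frame_scores, weights):
        for k in range(n_classes):
            agg[k] += w * scores[k]
    return softmax(agg)
```

In the application, the aggregation across consecutive frames is what the LSTM learns; only the final softmax stage is shown faithfully here.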
As an alternative embodiment, the following details the application of the technical solution of the present application to detecting a cheating plug-in in a game.
In the related art, perspective cheats (wallhacks) are detected through player reports; the efficiency of this detection is essentially zero, it consumes a great deal of manpower and material resources, and its response is very slow. Alternatively, an Unreal Engine shader mechanism is used: preset object materials are placed in a large scene, and if a player uses a perspective cheat, the materials of objects in the scene undergo a color change, from which it is judged that the player used a cheat.
Among these schemes, the reporting mechanism must first distinguish valid reports from invalid ones and consumes a great deal of manpower and material resources; the shader mechanism requires deploying very many special object materials across massive scenes, and modifying the client and obtaining a server response against the cheat are difficult. These schemes therefore have little effect on perspective-cheat detection.
In the technical solution of the present application, a Faster-RCNN + FCN + LSTM approach is adopted, innovatively attempting to detect perspective cheats that traditional methods cannot, and extremely high accuracy is obtained on a simulation set, so the method can be applied in formal products.
For service deployment: the Faster-RCNN + FCN + LSTM deep learning model (i.e., the target model) can be deployed on the server, and the deep learning module can invoke TensorFlow through a Python script to detect perspective cheats. The overall implementation flow is shown in fig. 8:
step S802, a data pulling module pulls the first-person-view data of a player (i.e., the target image) in real time or at regular intervals, and a MapReduce distributed framework (a programming model for parallel operations on large-scale data sets) performs preliminary definition and screening;
step S804, the first-person-view images obtained are preprocessed using the packaged image preprocessing SDK to obtain the corresponding data set and training set (e.g., a positive sample set and a negative sample set);
step S806, the data, which may be input at 224 × 224, is fed into the target model (i.e., the Faster-RCNN + FCN + LSTM deep learning model); a sequence of high-dimensional candidate-box region proposals is obtained through multilayer convolution, 5 classes of images are finally obtained from the FCN semantic segmentation results and input into a double-layer LSTM network, and the corresponding classification result is obtained through the softmax function;
step S808, whether a cheat plug-in is used is determined from the final probability computed from the classification result and the weight of each candidate box;
step S810, the list of players suspected of using the plug-in is sent to the business party.
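Steps S802-S810 can be sketched as an orchestration skeleton with every model stage stubbed out. All function names and the stub probabilities are hypothetical; in the real pipeline, `run_target_model` would invoke the Faster-RCNN + FCN + LSTM model rather than return a canned score.

```python
# Hypothetical end-to-end sketch of the S802-S810 flow (stubbed stages).

def pull_first_person_frames():
    return [f"frame_{i}" for i in range(3)]          # S802: pull player frames

def preprocess(frame):
    return {"image": frame, "size": (224, 224)}      # S804: resize to model input

def run_target_model(sample):
    # S806: Faster-RCNN + FCN + LSTM would run here; this stub returns a
    # fixed per-frame probability that the frame shows a perspective cheat.
    return 0.9 if sample["image"].endswith("1") else 0.1

def detect(threshold=0.5):
    suspicious = []
    for frame in pull_first_person_frames():
        p = run_target_model(preprocess(frame))      # S808: final probability
        if p >= threshold:
            suspicious.append(frame)
    return suspicious                                # S810: suspicious list
```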
The above technical solution can provide plug-in detection as a service through a packaged background interface, for example through the Application Programming Interface (API) of a high-level neural network library such as Keras, written in Python and running on a TensorFlow, Theano, or CNTK back end. The computation may be real-time or offline; because the computation involved is large and plug-in detection is an application scenario that tolerates some response time, an offline computation strategy is preferably adopted.
The above technical solution fills the gap in using AI to combat perspective cheats; it can detect the appearance of perspective cheats to a large extent, completing a breakthrough from nothing; it lays a solid foundation for first-person shooter games, can largely solve the perspective-cheat problem, effectively helps the business party identify perspective cheats where the related art could not detect them at all, and builds lasting value and effect for the project team's anti-cheat strategy.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
According to another aspect of the embodiments of the present invention, there is also provided an abnormal operation determination apparatus for implementing the above abnormal operation determination method. Fig. 9 is a schematic diagram of an alternative abnormal operation determination apparatus according to an embodiment of the present invention, as shown in fig. 9, the apparatus may include: an acquisition unit 901, a recognition unit 903, and a determination unit 905.
An obtaining unit 901, configured to obtain a target image, where the target image is an image of a virtual scene displayed on a target client, and the target client is configured to control a first object participating in a target task in the virtual scene;
an identifying unit 903, configured to identify a second object and a third object in the target image, where the second object is an object participating in the target task in the virtual scene and allowed to be attacked by the first object, and the third object is used to prevent the second object from appearing in a visible range of the first object in the virtual scene;
a determining unit 905 configured to determine whether the first object has an abnormal operation in the process of participating in the target task based on the position relationship between the second object and the third object.
It should be noted that the obtaining unit 901 in this embodiment may be configured to execute step S202 in this embodiment, the identifying unit 903 in this embodiment may be configured to execute step S204 in this embodiment, and the determining unit 905 in this embodiment may be configured to execute step S206 in this embodiment.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.
Through the module, when it is required to judge whether abnormal operation exists in the process of the first object participating in the target task, the target image is obtained, the target image is an image of a virtual scene displayed on the target client, the second object and the third object are identified in the target image, and whether abnormal operation exists in the process of the first object participating in the target task is determined based on the position relation between the second object and the third object, so that the technical problem that abnormal behavior in the virtual scene cannot be detected in the related art can be solved, and the technical effect of detecting the abnormal behavior in the virtual scene is achieved.
The above-mentioned determination unit may include: the first determination module is used for determining that the first object has abnormal operation in the process of participating in the target task under the condition that the position relationship between the second object and the third object belongs to the position relationship of the target type, wherein the position relationship of the target type is the position relationship between the object which is extracted from the first image and has abnormal operation and the third object; and the second determining module is used for determining that the first object does not have abnormal operation in the process of participating in the target task under the condition that the position relation between the second object and the third object does not belong to the position relation of the target type.
Optionally, the target image includes M second images, wherein the first determining module is further configured to: and in the case that the position relation between the second object and the third object in the N second images belongs to the position relation of the target type, determining that the first object has abnormal operation in the process of participating in the target task, wherein the integer N is not more than the integer M.
Optionally, the object model may include a third network, and the determining unit may further include: the searching module is used for identifying a first position relation between a second object and a third object in a target area of a target image through a third network, and searching whether the second position relation exists in a relation set, wherein the relation set is used for storing the position relation belonging to a target type, the second position relation is the same as the first position relation in the relation set, and the third network is used for identifying the first position relation between the second object and the third object through semantic analysis; the third determining module is used for determining that the first object has abnormal operation in the process of participating in the target task under the condition that the second position relation is found from the relation set; and the fourth determining module is used for determining that the first object does not have abnormal operation in the process of participating in the target task under the condition that the second position relation is not found from the relation set.
The identification unit may be further configured to identify a second object and a third object in the target image through a target model, where the target model is a convolutional neural network model for identifying the second object and the third object in the target image through feature matching.
The above-mentioned identification unit may include: an input module, configured to take the preprocessed target image as the input of the target model, where the preprocessing includes processing the length of the target image into a first threshold and the width of the target image into a second threshold; a matching module, configured to match image features of image areas in the target image with target features through the target model, where the target features include features of a first class of objects and features of a second class of objects learned by the target model, the first class of objects including the first object and the second object, and the second class of objects including the third object; and a fifth determining module, configured to determine a matching result through the target model, where the matching result indicates a target area in the target image, the target area being the image area in which the second object and the third object are located, as determined by the target model through matching the image features with the target features.
The object model may include a first network and a second network, where the matching module is further operable to: identify a first area in the target image through the first network, the first area being the image area in which the second object in the target image is located; and identify a target area in the target image through the second network, the target area being either a first area, identified by the second network, in which a third object also appears, or an image area identified from the target image in which both the second object and the third object appear.
Optionally, the object model may include a fourth network, and the third determining module is further configured to: and under the condition that the second position relationship is found from the relationship set, taking the confidence coefficient that the position relationship between the second object and the third object of each second image in the target images identified by the third network is the first position relationship as the input of the fourth network, so that the first object is determined to have abnormal operation in the process of participating in the target task under the condition that the position relationship between the second object and the third object in the N second images of the fourth network belongs to the position relationship of the target type.
The above technical solution fills the gap in using AI to combat external cheats such as the perspective cheat; it can detect the appearance of perspective cheats to a large extent, completing a breakthrough from nothing; it lays a solid foundation for first-person shooter games, can largely solve the perspective-cheat problem, effectively helps the business party identify perspective cheats where the related art could not detect them at all, and builds lasting value and effect for the project team's anti-cheat strategy.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the above embodiments. It should be noted that the modules described above as a part of the apparatus may be operated in a hardware environment as shown in fig. 1, and may be implemented by software, or may be implemented by hardware, where the hardware environment includes a network environment.
According to another aspect of the embodiment of the present invention, there is also provided a server or a terminal for implementing the above-described determination method of abnormal operation.
Fig. 10 is a block diagram of a terminal according to an embodiment of the present invention, and as shown in fig. 10, the terminal may include: one or more (only one shown in fig. 10) processors 1001, memory 1003, and transmission apparatus 1005 (such as the transmission apparatus in the above embodiments), as shown in fig. 10, the terminal may further include an input-output device 1007.
The memory 1003 may be used to store software programs and modules, such as program instructions/modules corresponding to the method and apparatus for determining abnormal operation in the embodiment of the present invention, and the processor 1001 executes various functional applications and data processing by running the software programs and modules stored in the memory 1003, that is, implements the method for determining abnormal operation. The memory 1003 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1003 may further include memory located remotely from the processor 1001, which may be connected to a terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmitting device 1005 is used for receiving or transmitting data via a network, and can also be used for data transmission between a processor and a memory. Examples of the network may include a wired network and a wireless network. In one example, the transmitting device 1005 includes a Network adapter (NIC) that can be connected to a router via a Network cable and other Network devices to communicate with the internet or a local area Network. In one example, the transmitting device 1005 is a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
Among them, the memory 1003 is used to store an application program, in particular.
The processor 1001 may call an application stored in the memory 1003 via the transmitting device 1005 to perform the following steps:
acquiring a target image, wherein the target image is an image of a virtual scene displayed on a target client, and the target client is used for controlling a first object participating in a target task in the virtual scene;
identifying a second object and a third object in the target image, wherein the second object is an object which participates in the target task in the virtual scene and is allowed to be attacked by the first object, and the third object is used for preventing the second object from appearing in the visual range of the first object in the virtual scene;
and determining whether the first object has abnormal operation in the process of participating in the target task based on the position relation between the second object and the third object.
The processor 1001 is further configured to perform the following steps:
taking a preprocessed target image as an input of a target model, wherein the preprocessing comprises processing the length of the target image into a first threshold value and processing the width of the target image into a second threshold value;
matching image features of an image area in a target image with target features through a target model, wherein the target features comprise features of a first class of objects and features of a second class of objects learned by the target model, the first class of objects comprise a first object and a second object, and the second class of objects comprise a third object;
and determining a matching result through the target model, wherein the matching result is used for indicating a target area in the target image, and the target area is an image area where a second object and a third object are located, which are determined by the target model through matching the image characteristics with the target characteristics.
By adopting the embodiment of the invention, when the first object is required to be judged whether abnormal operation exists in the process of participating in the target task, the target image is obtained, the target image is an image of a virtual scene displayed on a target client, and the target client is used for controlling the first object participating in the target task in the virtual scene; identifying a second object and a third object in the target image, wherein the second object is an object which participates in the target task in the virtual scene and is allowed to be attacked by the first object, and the third object is used for preventing the second object from appearing in the visual range of the first object in the virtual scene; whether abnormal operation exists in the first object in the process of participating in the target task is determined based on the position relation between the second object and the third object, the technical problem that abnormal behaviors in the virtual scene cannot be detected in the related technology can be solved, and the technical effect of detecting the abnormal behaviors in the virtual scene is achieved.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
It can be understood by those skilled in the art that the structure shown in fig. 10 is only an illustration, and the terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), or a PAD. Fig. 10 does not limit the structure of the above electronic device. For example, the terminal may also include more or fewer components (e.g., a network interface, a display device) than shown in fig. 10, or have a different configuration from that shown in fig. 10.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The embodiment of the invention also provides a storage medium. Optionally, in this embodiment, the storage medium may be used to store program code for executing the abnormal operation determination method.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
s12, acquiring a target image, wherein the target image is an image of a virtual scene displayed on a target client, and the target client is used for controlling a first object participating in a target task in the virtual scene;
s14, identifying a second object and a third object in the target image, wherein the second object is an object which participates in the target task in the virtual scene and is allowed to be attacked by the first object, and the third object is used for preventing the second object from appearing in the visual range of the first object in the virtual scene;
and S16, determining whether the first object has abnormal operation in the process of participating in the target task based on the position relation between the second object and the third object.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s22, taking the preprocessed target image as the input of the target model, wherein the preprocessing comprises processing the length of the target image into a first threshold value and processing the width of the target image into a second threshold value;
s24, matching image features of image areas in the target image with target features through the target model, wherein the target features comprise features of first class objects and features of second class objects learned by the target model, the first class objects comprise the first objects and the second objects, and the second class objects comprise the third objects;
and S26, determining a matching result through the target model, wherein the matching result is used for indicating a target area in the target image, and the target area is an image area where a second object and a third object are located, which are determined by the target model through matching the image characteristics with the target characteristics.
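Step S22's preprocessing can be sketched as a fixed-size resize so that every screenshot fed to the convolutional model has the same shape. The 416×416 thresholds below are an assumption typical of convolutional detectors, not values stated in this passage:

```python
# Sketch of step S22: force the screenshot to a fixed length (height) and
# width before it is fed to the convolutional model. The threshold values
# are illustrative; the patent only requires two fixed thresholds.

from typing import List

FIRST_THRESHOLD = 416   # target height (assumed)
SECOND_THRESHOLD = 416  # target width (assumed)

def preprocess(image: List[List[int]],
               height: int = FIRST_THRESHOLD,
               width: int = SECOND_THRESHOLD) -> List[List[int]]:
    """Nearest-neighbour resize of a grayscale image (rows of pixel values)
    so every model input has the same height x width shape."""
    src_h, src_w = len(image), len(image[0])
    return [
        [image[(y * src_h) // height][(x * src_w) // width]
         for x in range(width)]
        for y in range(height)
    ]
```

In practice a library routine (e.g. an image library's resize with a chosen resampling filter) would replace this hand-rolled loop; the point is only that both dimensions are forced to the two thresholds before matching.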
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The serial numbers of the above embodiments of the present invention are merely for description and do not imply any ranking of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and such improvements and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (14)

1. A method for determining abnormal operation, comprising:
acquiring a target image, wherein the target image is an image of a virtual scene displayed on a target client, and the target client is used for controlling a first object participating in a target task in the virtual scene;
identifying a second object and a third object in the target image, wherein the second object is an object which participates in the target task in the virtual scene and is allowed to be attacked by the first object, and the third object is used for preventing the second object from appearing in the visual range of the first object in the virtual scene;
determining whether the first object has abnormal operation in the process of participating in the target task based on the position relation between the second object and the third object;
wherein identifying the second object and the third object in the target image comprises:
identifying the second object and the third object in the target image through a target model, wherein the target model is a convolutional neural network model used for identifying the second object and the third object in the target image through feature matching.
2. The method of claim 1, wherein determining whether the first object has abnormal operation in participating in the target task based on the positional relationship between the second object and the third object comprises:
determining that the first object has abnormal operation in the process of participating in the target task when the positional relationship between the second object and the third object belongs to a positional relationship of a target type, wherein the positional relationship of the target type is a positional relationship, extracted from a first image, between an object having abnormal operation and the third object;
determining that there is no abnormal operation in the process of participating in the target task for the first object in the case that the positional relationship between the second object and the third object does not belong to the positional relationship of the target type.
3. The method according to claim 2, wherein the target image comprises M second images, and wherein, in the case that the position relationship between the second object and the third object belongs to a position relationship of a target type, the determining that the first object has an abnormal operation in the process of participating in the target task comprises:
and determining that the first object has abnormal operation in the process of participating in the target task under the condition that the positional relationship between the second object and the third object in N of the second images belongs to the positional relationship of the target type, wherein N and M are integers and N is not greater than M.
4. The method of claim 3, wherein identifying the second object and the third object in the target image by a target model comprises:
taking the preprocessed target image as an input of the target model, wherein the preprocessing comprises scaling the length of the target image to a first threshold and scaling the width of the target image to a second threshold;
matching image features of an image area in the target image with target features through the target model, wherein the target features comprise features of a first class of objects and features of a second class of objects learned by the target model, the first class of objects comprise the first object and the second object, and the second class of objects comprise a third object;
determining a matching result through the target model, wherein the matching result is used for indicating a target area in the target image, and the target area is an image area where the second object and the third object are located, which is determined by the target model through matching image features with the target features.
5. The method of claim 4, wherein the target model comprises a first network and a second network, and wherein matching image features of image regions in the target image with target features by the target model comprises:
identifying a first area in the target image through the first network, wherein the first area is an image area where the second object is located in the target image;
identifying the target area in the target image through the second network, wherein the target area is the first area in which the second network identifies the third object, or an image area identified from the target image in which both the second object and the third object appear.
6. The method of any one of claims 1 to 5, wherein the target model comprises a third network, and wherein determining whether the first object has abnormal operation in the process of participating in the target task based on the position relationship between the second object and the third object comprises:
identifying a first position relationship between the second object and the third object in a target area of the target image through the third network, and searching whether a second position relationship exists in a relationship set, wherein the relationship set is used for storing a position relationship belonging to a target type, the second position relationship is the same position relationship as the first position relationship in the relationship set, and the third network is used for identifying the first position relationship between the second object and the third object through semantic analysis;
determining that abnormal operation exists in the process of the first object participating in the target task under the condition that the second position relation is found from the relation set;
and under the condition that the second position relation is not found from the relation set, determining that abnormal operation does not exist in the process that the first object participates in the target task.
7. The method of claim 6, wherein the target model comprises a fourth network, and wherein determining that the first object has abnormal operation in participating in the target task if the second positional relationship is found from the set of relationships comprises:
and when the second positional relationship is found from the relationship set, taking, as the input of the fourth network, the confidence, identified by the third network for each second image in the target image, that the positional relationship between the second object and the third object is the first positional relationship, so that the fourth network determines that the first object has abnormal operation in the process of participating in the target task when the positional relationship between the second object and the third object in N of the second images belongs to the positional relationship of the target type.
8. An apparatus for determining an abnormal operation, comprising:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a target image, the target image is an image of a virtual scene displayed on a target client, and the target client is used for controlling a first object participating in a target task in the virtual scene;
an identifying unit, configured to identify a second object and a third object in the target image, where the second object is an object participating in the target task in the virtual scene and allowed to be attacked by the first object, and the third object is configured to prevent the second object from appearing in a visible range of the first object in the virtual scene, and the identifying the second object and the third object in the target image includes: identifying the second object and the third object in the target image through a target model, wherein the target model is a convolutional neural network model used for identifying the second object and the third object in the target image through feature matching;
a determining unit, configured to determine whether there is an abnormal operation in the process of participating in the target task for the first object based on a position relationship between the second object and the third object.
9. The apparatus of claim 8, wherein the determining unit comprises:
a first determining module, configured to determine that the first object has abnormal operation in the process of participating in the target task when the positional relationship between the second object and the third object belongs to a positional relationship of a target type, wherein the positional relationship of the target type is a positional relationship, extracted from a first image, between an object having abnormal operation and the third object;
and a second determining module, configured to determine that no abnormal operation exists in the process of participating in the target task for the first object when the positional relationship between the second object and the third object does not belong to the positional relationship of the target type.
10. The apparatus of claim 9, wherein the target image comprises M second images, and wherein the first determining module is further configured to:
and determine that the first object has abnormal operation in the process of participating in the target task under the condition that the positional relationship between the second object and the third object in N of the second images belongs to the positional relationship of the target type, wherein N and M are integers and N is not greater than M.
11. The apparatus according to any one of claims 8 to 10, wherein the target model comprises a third network, and wherein the determining unit comprises:
a searching module, configured to identify, through the third network, a first positional relationship between the second object and the third object in a target region of the target image, and search whether a second positional relationship exists in a relationship set, where the relationship set is used to store a positional relationship belonging to a target type, and the second positional relationship is a positional relationship in the relationship set that is the same as the first positional relationship, and the third network is used to identify the first positional relationship between the second object and the third object through semantic analysis;
a third determining module, configured to determine that an abnormal operation exists in the process of participating in the target task for the first object when the second location relationship is found from the relationship set;
a fourth determining module, configured to determine that no abnormal operation exists in the process of participating in the target task for the first object when the second location relationship is not found from the relationship set.
12. The apparatus of claim 11, wherein the target model comprises a fourth network, wherein the third determining module is further configured to:
and when the second positional relationship is found from the relationship set, take, as the input of the fourth network, the confidence, identified by the third network for each second image in the target image, that the positional relationship between the second object and the third object is the first positional relationship, so that the fourth network determines that the first object has abnormal operation in the process of participating in the target task when the positional relationship between the second object and the third object in N of the second images belongs to the positional relationship of the target type.
13. A storage medium, characterized in that the storage medium comprises a stored program, wherein the program when executed performs the method of any of the preceding claims 1 to 7.
14. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the method of any of the preceding claims 1 to 7 by means of the computer program.
CN201810272744.7A 2018-03-29 2018-03-29 Abnormal operation determination method and device, storage medium and electronic device Active CN108629180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810272744.7A CN108629180B (en) 2018-03-29 2018-03-29 Abnormal operation determination method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810272744.7A CN108629180B (en) 2018-03-29 2018-03-29 Abnormal operation determination method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN108629180A CN108629180A (en) 2018-10-09
CN108629180B true CN108629180B (en) 2020-12-11

Family

ID=63696498

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810272744.7A Active CN108629180B (en) 2018-03-29 2018-03-29 Abnormal operation determination method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN108629180B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109939442B (en) * 2019-03-15 2022-09-09 深圳市腾讯信息技术有限公司 Application role position abnormity identification method and device, electronic equipment and storage medium
CN110102051B (en) * 2019-05-06 2022-12-06 网易(杭州)网络有限公司 Method and device for detecting game plug-in
CN110378247B (en) * 2019-06-26 2023-09-26 腾讯科技(深圳)有限公司 Virtual object recognition method and device, storage medium and electronic device
CN110339576B (en) * 2019-07-23 2020-08-04 网易(杭州)网络有限公司 Information processing method, device and storage medium
CN110496390B (en) * 2019-07-23 2021-01-12 网易(杭州)网络有限公司 Information processing method, device and storage medium
CN110665233B (en) * 2019-08-29 2021-07-16 腾讯科技(深圳)有限公司 Game behavior identification method, device, equipment and medium
CN110765975B (en) * 2019-10-31 2020-11-03 腾讯科技(深圳)有限公司 Method and device for judging cheating behaviors, storage medium and computer equipment
CN110812845B (en) * 2019-10-31 2022-01-07 腾讯科技(深圳)有限公司 Plug-in detection method, plug-in recognition model training method and related device
CN110909630B (en) * 2019-11-06 2023-04-18 腾讯科技(深圳)有限公司 Abnormal game video detection method and device
CN110930417B (en) * 2019-11-26 2023-08-08 腾讯科技(深圳)有限公司 Training method and device for image segmentation model, and image segmentation method and device
CN111035933B (en) * 2019-12-05 2022-04-12 腾讯科技(深圳)有限公司 Abnormal game detection method and device, electronic equipment and readable storage medium
CN111054080B (en) * 2019-12-06 2022-01-11 腾讯科技(深圳)有限公司 Method, device and equipment for intelligently detecting perspective plug-in and storage medium thereof
CN111068333B (en) * 2019-12-20 2021-12-21 腾讯科技(深圳)有限公司 Video-based carrier abnormal state detection method, device, equipment and medium
CN111163294A (en) * 2020-01-03 2020-05-15 重庆特斯联智慧科技股份有限公司 Building safety channel monitoring system and method for artificial intelligence target recognition
CN111228821B (en) * 2020-01-15 2022-02-01 腾讯科技(深圳)有限公司 Method, device and equipment for intelligently detecting wall-penetrating plug-in and storage medium thereof
WO2021159357A1 (en) * 2020-02-12 2021-08-19 深圳元戎启行科技有限公司 Traveling scenario information processing method and apparatus, electronic device, and readable storage medium
CN111428132B (en) * 2020-03-18 2023-09-19 腾讯科技(深圳)有限公司 Data verification method and device, computer storage medium and electronic equipment
CN111921204B (en) * 2020-08-21 2023-09-26 腾讯科技(深圳)有限公司 Control method, device, equipment and storage medium of cloud application program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680557A (en) * 2015-03-10 2015-06-03 重庆邮电大学 Intelligent detection method for abnormal behavior in video sequence image
CN106898051A (en) * 2017-04-14 2017-06-27 腾讯科技(深圳)有限公司 The visual field elimination method and server of a kind of virtual role
CN107308645A (en) * 2017-06-07 2017-11-03 浙江无端科技股份有限公司 A kind of method and game client of the plug-in detection of perspective of playing
CN206905620U (en) * 2017-04-12 2018-01-19 刘阳 Augmented reality has an X-rayed sighting system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101561928B (en) * 2009-05-27 2011-09-14 湖南大学 Multi-human body tracking method based on attribute relational graph appearance model
SG11201402176TA (en) * 2011-11-10 2014-06-27 Gamblit Gaming Llc Anti-cheating hybrid game
CN106056091A (en) * 2016-06-08 2016-10-26 惠州学院 Multi-shooting-angle video object identification system and method
CN107705334B (en) * 2017-08-25 2020-08-25 北京图森智途科技有限公司 Camera abnormity detection method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680557A (en) * 2015-03-10 2015-06-03 重庆邮电大学 Intelligent detection method for abnormal behavior in video sequence image
CN206905620U (en) * 2017-04-12 2018-01-19 刘阳 Augmented reality has an X-rayed sighting system
CN106898051A (en) * 2017-04-14 2017-06-27 腾讯科技(深圳)有限公司 The visual field elimination method and server of a kind of virtual role
CN107308645A (en) * 2017-06-07 2017-11-03 浙江无端科技股份有限公司 A kind of method and game client of the plug-in detection of perspective of playing

Also Published As

Publication number Publication date
CN108629180A (en) 2018-10-09

Similar Documents

Publication Publication Date Title
CN108629180B (en) Abnormal operation determination method and device, storage medium and electronic device
CN110166827B (en) Video clip determination method and device, storage medium and electronic device
US11794110B2 (en) System and method for toy recognition
CN109999496B (en) Control method and device of virtual object and electronic device
KR102106135B1 (en) Apparatus and method for providing application service by using action recognition
CN108090561B (en) Storage medium, electronic device, and method and device for executing game operation
Synnaeve et al. A dataset for StarCraft AI and an example of armies clustering
CN111738735B (en) Image data processing method and device and related equipment
CN114331829A (en) Countermeasure sample generation method, device, equipment and readable storage medium
CN110287848A (en) The generation method and device of video
CN110302536A (en) A kind of method for checking object and relevant apparatus based on interactive application
CN111035933B (en) Abnormal game detection method and device, electronic equipment and readable storage medium
CN114241012B (en) High-altitude parabolic determination method and device
CN111821693A (en) Perspective plug-in detection method, device, equipment and storage medium for game
KR101270718B1 (en) Video processing apparatus and method for detecting fire from video
CN115294162B (en) Target identification method, device, equipment and storage medium
CN111298446A (en) Game plug-in detection method and device, computer and readable storage medium
CN107133561A (en) Event-handling method and device
Meng et al. De-anonymization Attacks on Metaverse
CN115082992A (en) Face living body detection method and device, electronic equipment and readable storage medium
CN114743262A (en) Behavior detection method and device, electronic equipment and storage medium
CN114283349A (en) Data processing method and device, computer equipment and storage medium
US9959632B2 (en) Object extraction from video images system and method
CN111860431B (en) Method and device for identifying object in image, storage medium and electronic device
CN117018629A (en) Data processing method, apparatus, device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant