CN109939442B - Application role position abnormality identification method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN109939442B
CN109939442B (application number CN201910199228.0A)
Authority
CN
China
Prior art keywords
scene
target application
target
position information
application role
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910199228.0A
Other languages
Chinese (zh)
Other versions
CN109939442A (en)
Inventor
吴凯
殷赵辉
彭青白
何小龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tencent Information Technology Co Ltd
Original Assignee
Shenzhen Tencent Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tencent Information Technology Co Ltd filed Critical Shenzhen Tencent Information Technology Co Ltd
Priority to CN201910199228.0A priority Critical patent/CN109939442B/en
Publication of CN109939442A publication Critical patent/CN109939442A/en
Application granted granted Critical
Publication of CN109939442B publication Critical patent/CN109939442B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an application role position abnormality identification method and device, electronic equipment and a storage medium, belonging to the field of internet technology. The method simulates, through a first virtual scene, a second virtual scene displayed by the target application on a first device, and determines a position identification result for at least one target application role according to first position information of the target application role in the second virtual scene and at least one scene object in the first virtual scene. Because the target application role is identified in the first virtual scene on the basis of the scene objects, whether the position of the target application role is abnormal can be accurately identified, greatly improving the accuracy of application role position abnormality identification.

Description

Application role position abnormality identification method and device, electronic equipment and storage medium
Technical Field
The invention relates to the field of internet technology, and in particular to an application role position abnormality identification method and device, electronic equipment and a storage medium.
Background
In some game applications, an application character typically performs activities in the game's virtual scene on behalf of a user. For example, the user may control the application character to run, jump, and so on in the virtual scene, where the application character must move within the range permitted by the game rules. However, some malicious users adopt cheating measures so that the application character's range of activity exceeds those limits, for example by burrowing underground, passing through walls, or hovering in mid-air. This can seriously damage the reputation of the game application, so there is a general need in the art to identify application role positions in order to prevent cheating.
In the related art, the application role position identification process may be as follows: the server acquires a coordinate threshold for the application role, which indicates the target range the application role can reach at its position. For example, in a battle scenario between two application characters, the coordinate threshold for a given character may include its maximum and minimum height coordinates. The server judges whether the application role lies within the target range according to its position and the coordinate threshold. If it does, the position of the application role is correct: the character conforms to the game rules and the user is a non-malicious user who has not cheated. Otherwise, the position of the application role is abnormal and the user is treated as a malicious user.
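The related-art threshold check described above can be sketched as follows. This is a minimal illustration only; the function and parameter names are hypothetical and do not come from the patent.

```python
def position_within_threshold(height, min_height, max_height):
    """Return True if the character's height coordinate obeys the
    coordinate threshold (the target range the role may reach)."""
    return min_height <= height <= max_height

# A character standing on the ground passes the check; a character
# hovering far above the allowed maximum is flagged as abnormal.
assert position_within_threshold(1.8, 0.0, 50.0)
assert not position_within_threshold(120.0, 0.0, 50.0)
```

As the next paragraph notes, this range test alone cannot detect a character embedded inside a wall whose coordinates still fall within the threshold.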
The above process realizes position identification purely through a coordinate threshold. However, a virtual scene often includes buildings, trees, hills, and the like, and may contain areas with large terrain variation. If a house stands in front of the application role, this identification method cannot tell whether the role is embedded in the house's wall, so the accuracy of application role position abnormality identification is low.
Disclosure of Invention
The embodiment of the invention provides a method and a device for identifying application role position abnormality, electronic equipment and a storage medium, which can solve the problem of low accuracy in identifying application role position abnormality in the related art.
The technical scheme is as follows:
in one aspect, a method for identifying an application role position anomaly is provided, and the method includes:
acquiring a first virtual scene of a target application, wherein the first virtual scene is used for simulating a second virtual scene displayed on first equipment by the target application;
acquiring first position information of at least one target application role in the second virtual scene;
and determining a position identification result of the at least one target application role according to the first position information of the at least one target application role and at least one scene object in the first virtual scene, wherein the position identification result is used for indicating whether the position of the at least one target application role is abnormal or not.
In a possible implementation manner, the adding the at least one scene object to the target virtual space according to the second position information of the at least one scene object, and obtaining the first virtual scene of the target application includes:
in the target virtual space, according to a plurality of vertexes corresponding to the at least one scene object, connecting the plurality of vertexes according to a connecting line sequence indicated by a sequence number corresponding to each vertex to obtain the at least one scene object, wherein the plurality of vertexes are used for indicating the position, the shape and the direction of the scene object in the first virtual scene;
setting material information of the at least one scene object in the physics engine component.
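The vertex-connection step above can be pictured with a small sketch. The representation is an assumption for illustration: each vertex of a scene object carries a sequence number, sorting by that number yields the connection order, and consecutive triples of ordered vertices form the object's faces.

```python
def build_faces(indexed_vertices):
    """indexed_vertices: list of (sequence_number, (x, y, z)) pairs.
    Returns the faces of the scene object as tuples of three vertices,
    connected in the order indicated by the sequence numbers."""
    ordered = [v for _, v in sorted(indexed_vertices)]
    # connect every three consecutive vertices into a triangular face
    return [tuple(ordered[i:i + 3]) for i in range(0, len(ordered) - 2, 3)]

# Three vertices given out of order are reassembled into one face.
faces = build_faces([(2, (1, 0, 0)), (1, (0, 0, 0)), (3, (0, 1, 0))])
```

The resulting vertex positions jointly determine the position, shape, and orientation of the scene object in the first virtual scene, as the step describes.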
In a possible implementation manner, the adding the at least one scene object to the target virtual space according to the second position information of the at least one scene object, and obtaining the first virtual scene of the target application includes:
when the first virtual scene is different from the second virtual scene in size, scaling second position information of the at least one scene object according to a scaling coefficient of the first virtual scene relative to the second virtual scene;
and adding the at least one scene object to the target virtual space based on the scaled second position information to obtain the first virtual scene.
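The scaling step reduces to multiplying each coordinate of the second position information by the scaling coefficient of the first virtual scene relative to the second. A one-line sketch (hypothetical names):

```python
def scale_position(position, coefficient):
    """Scale a coordinate triple by the ratio of the first virtual
    scene's size to the second virtual scene's size."""
    return tuple(c * coefficient for c in position)

# A scene half the size of the original halves every coordinate.
scaled = scale_position((10.0, 4.0, -2.0), 0.5)
```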
In a possible implementation manner, the performing, according to the first location information of the at least one target application role and the third location information of the at least one scene object, location anomaly identification on the at least one target application role based on a target identification policy includes:
determining starting point coordinates and ray vectors of the at least one target application role based on a plurality of first position information of a plurality of continuous acquisition times of the target application role;
identifying whether the at least one target application role collides with at least one surrounding scene object according to the starting point coordinates, the ray vector and third position information of the at least one surrounding scene object of the target application role;
when the at least one target application role collides with the at least one scene object around, determining that the position of the at least one target application role overlaps with the position of any one scene object.
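One common way to implement the steps above is to take the start point and ray vector from two consecutive position samples and test the resulting segment against an axis-aligned bounding box of each surrounding scene object using the slab method. The patent does not prescribe a particular intersection algorithm or box representation, so both are assumptions in this sketch.

```python
def ray_vector(positions):
    """Start point and movement vector from the last two consecutive
    first-position samples of the target application role."""
    (x0, y0, z0), (x1, y1, z1) = positions[-2], positions[-1]
    return (x0, y0, z0), (x1 - x0, y1 - y0, z1 - z0)

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab-method segment/box test over the parameter t in [0, 1]:
    returns True when the movement segment passes through the box."""
    t_near, t_far = 0.0, 1.0
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            # segment parallel to this axis: must already lie inside the slab
            if o < lo or o > hi:
                return False
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1
            t_near, t_far = max(t_near, t1), min(t_far, t2)
            if t_near > t_far:
                return False
    return True

# The role moved from (0,0,0) to (10,0,0); a box spanning x in [4,6]
# lies on that path, so a collision (possible wall-crossing) is reported.
origin, direction = ray_vector([(0, 0, 0), (10, 0, 0)])
hit = ray_hits_aabb(origin, direction, (4, -1, -1), (6, 1, 1))
```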
In a possible implementation manner, the performing, according to the first location information of the at least one target application role and the third location information of the at least one scene object, location anomaly identification on the at least one target application role based on a target identification policy includes:
generating a three-dimensional object of the at least one target application role corresponding to the first virtual scene according to the first position information of the at least one target application role;
identifying whether the at least one scene object overlaps with the three-dimensional stereoscopic object according to the third position information of the three-dimensional stereoscopic object and the at least one scene object;
when the at least one scene object overlaps the three-dimensional stereo object, determining that the position of the at least one target application character overlaps the position of any one scene object.
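If the generated three-dimensional object and each scene object are approximated by axis-aligned bounding boxes, the overlap test in these steps becomes a per-axis interval comparison. The box approximation is an assumption here; the patent leaves the object representation open.

```python
def boxes_overlap(min_a, max_a, min_b, max_b):
    """True when two axis-aligned boxes intersect on every axis,
    i.e. the role's 3D object overlaps a scene object's volume."""
    return all(lo_a <= hi_b and lo_b <= hi_a
               for lo_a, hi_a, lo_b, hi_b in zip(min_a, max_a, min_b, max_b))

# The role's box pokes into a house's box: position overlap, flag abnormal.
overlapping = boxes_overlap((0, 0, 0), (2, 2, 2), (1, 1, 1), (3, 3, 3))
```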
In a possible implementation manner, the determining, according to the first location information of the at least one target application role and the at least one scene object in the first virtual scene, a location identification result of the at least one target application role includes:
determining third position information of the at least one scene object in the first virtual scene;
extracting position information of an associated object of the target application role from the first position information of the at least one target application role;
and performing position anomaly identification on the associated object according to the position information of the associated object and the third position information of the at least one scene object, and determining the position anomaly of the at least one target application role when the position of the associated object is overlapped with the position of any scene object.
In another aspect, a method for identifying an application role position anomaly is provided, where the method includes:
acquiring second position information of at least one scene object of a target application based on a physical engine component of the target application, wherein the second position information is used for indicating the position of the scene object in a second virtual scene displayed on a first device by the target application, and the physical engine component is used for indicating the storage address of the second position information;
storing the second position information of the at least one scene object into a target resource file according to a target format model;
and sending the target resource file to a second device, wherein the target resource file is used for indicating that a first virtual scene is established on the second device, and identifying the position of a target application role based on the first virtual scene.
In another aspect, an apparatus for recognizing application role position abnormality is provided, the apparatus comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a first virtual scene of a target application, and the first virtual scene is used for simulating a second virtual scene displayed on first equipment by the target application;
the obtaining module is further configured to obtain first location information of at least one target application role in the second virtual scene;
a determining module, configured to determine a position identification result of the at least one target application role according to the first position information of the at least one target application role and the at least one scene object in the first virtual scene, where the position identification result is used to indicate whether the position of the at least one target application role is abnormal.
In another aspect, an apparatus for recognizing application role position abnormality is provided, the apparatus comprising:
the acquisition module is used for acquiring second position information of at least one scene object of a target application based on a physical engine component of the target application, wherein the second position information is used for indicating the position of the scene object in a second virtual scene displayed on first equipment by the target application, and the physical engine component is used for indicating the storage address of the second position information;
the storage module is used for storing the second position information of the at least one scene object into a target resource file according to the target format model;
and the sending module is used for sending the target resource file to the second equipment, wherein the target resource file is used for indicating that a first virtual scene is established on the second equipment, and the position of the target application role is identified based on the first virtual scene.
In another aspect, an electronic device is provided and includes one or more processors and one or more memories, where at least one instruction is stored in the one or more memories and loaded and executed by the one or more processors to implement the operations performed by the application role position anomaly identification method as described above.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the application role position abnormality identification method as described above.
The technical scheme provided by the embodiment of the invention has the beneficial effects that at least:
the first virtual scene simulates the second virtual scene displayed by the target application on the first device, and the position identification result of at least one target application role is determined according to the first position information of the target application role in the second virtual scene and at least one scene object in the first virtual scene. Because the target application role is identified in the first virtual scene based on the scene objects, whether its position is abnormal can be accurately identified, greatly improving the accuracy of application role position abnormality identification.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of an application role position anomaly identification method according to an embodiment of the present invention;
fig. 2 is a flowchart of an application role position anomaly identification method according to an embodiment of the present invention;
fig. 3 is a schematic view of a virtual scene according to an embodiment of the present invention;
fig. 4 is a schematic view of a virtual scene according to an embodiment of the present invention;
fig. 5 is a schematic view of a virtual scene according to an embodiment of the present invention;
fig. 6 is a schematic view of a virtual scene according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a notification message display interface according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a notification message display interface according to an embodiment of the present invention;
FIG. 9 is an architecture diagram of application role position anomaly identification according to an embodiment of the present invention;
FIG. 10 is a flowchart of application role position anomaly identification according to an embodiment of the present invention;
fig. 11 is a flowchart of an application role position anomaly identification method according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a data selection interface provided by an embodiment of the invention;
FIG. 13 is a schematic diagram of a scene object provided in an embodiment of the present invention;
FIG. 14 is a schematic diagram of a scene object provided in an embodiment of the present invention;
fig. 15 is a schematic structural diagram of an apparatus for recognizing an abnormality of an application role position according to an embodiment of the present invention;
fig. 16 is a schematic structural diagram of an apparatus for recognizing an abnormality of an application role position according to an embodiment of the present invention;
fig. 17 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
fig. 18 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art without creative effort based on the embodiments of the present invention fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an implementation environment of an application role position anomaly identification method according to an embodiment of the present invention, and referring to fig. 1, the implementation environment includes: a server 101 and a terminal 102. The server 101 is installed with an identification application, the terminal 102 is installed with a target application, and the server 101 can perform data interaction with the terminal 102 based on the identification application.
The target application comprises a virtual scene, and the virtual scene comprises a target application role and at least one scene object. The target application role represents the avatar of a user in the virtual scene, or the avatar of a virtual object related to the user in the virtual scene, for example, a prop, a virtual pet, or a vehicle carried by the user. The user can control the target application role to perform a series of behaviors in the virtual scene, such as running and jumping. The identification application is used for identifying the position of the target application role so as to determine whether that position is abnormal. The scene object represents an environmental object in the virtual environment simulated by the virtual scene; for example, the scene object may be a tree, a house, a hill, or the like. The target application may be a game application, and in the virtual scene, the position of the target application role is abnormal when it violates the game rules, for example, when the target application role is located below the ground.
In this embodiment of the present invention, the server 101 may identify the target application role based on the location information of the target application role and the at least one scene object, so as to determine whether the location of the target application role is abnormal. The server 101 may acquire the location information of the at least one scene object from the terminal 102. The terminal 102 may have a data obtaining application installed thereon, and the terminal 102 may obtain the location information of the at least one scene object based on the physical engine component of the target application through the data obtaining application, and send the location information of the at least one scene object to the server 101. The server 101 may create a virtual scene of the target application in the server 101 based on the location information of the at least one scene object, and perform location anomaly identification on the target application role in the virtual scene.
The implementation environment may further include a target device, where the target device may be a server or a terminal, and the target device is configured to provide location information of the target application role. The position information is used for indicating the position of the target application role in the virtual scene, and the position information can be the position coordinates of the target application role in the virtual scene. In a possible implementation scenario, the target device may be a terminal, and during the process of running the target application by the terminal, a user may control the target application character to play a game in the virtual scene, and the server 101 obtains the position information of the target application character from the terminal. In another possible implementation scenario, the target device may be a background server of a target application, and the server 101 obtains the location information of the target application role from the background server, where the server 101 may obtain a historical behavior record of the target application role from the background server, and extract the location information of the target application role from the historical behavior record, where the historical behavior record is used to indicate historical behavior of the target application role in the virtual scene. Or, the background server obtains the location information of the target application role from the terminal in real time, and forwards the location information of the target application role to the server 101.
It should be noted that the virtual scene may be used to simulate a virtual space, the virtual space may be an open space, and the virtual scene may be used to simulate a real environment in reality, for example, the virtual scene may include sky, land, sea, and the like, and the land may include environmental elements such as desert, hills, forests, houses, stones, and the like. The specific form of the target application character may be any form, for example, human, animal, etc., and the present invention is not limited thereto. The user may control the target application character to move in the virtual scene, for example, in a shooting game, the user may control the target application character to freely fall, glide or open a parachute to fall in the sky of the virtual scene, to run, jump, crawl, bend over to move on land, or to control the character object to swim, float, or dive in the sea, or of course, the user may also control the character object to move in the virtual scene by riding a vehicle, which is only exemplified in the above-mentioned scene, but the embodiment of the present invention is not limited thereto.
It should be noted that the identification application may be a stand-alone application, a plug-in installed in a stand-alone application, or the like. The server 101 may be a server cluster or a single device. The terminal 102 may be any device on which the game application is installed, such as a mobile phone, a PAD (Portable Android Device), or a computer. The embodiment of the present invention is not particularly limited in this respect.
Fig. 2 is a flowchart of an application role position abnormality identification method according to an embodiment of the present invention. The method may be applied to a second device, which may be a server, and referring to fig. 2, the method includes:
201. the server acquires second position information of at least one scene object in the second virtual scene based on the physical engine component of the target application.
And the second virtual scene is a virtual scene displayed on the first device by the target application. The physical engine component refers to a component of a physical engine layer of the target application, and the physical engine component is used for indicating a storage address of the second location information of the at least one scene object. The second position information refers to a position of the at least one scene object in the second virtual scene. In this step, the server may obtain, according to the storage address indicated by the physical engine component of the target application, second location information of the at least one scene object from a storage space corresponding to the storage address.
The server may obtain a target resource file in a target format model, where the target format model is a format recognizable by the server and the target resource file stores the second location information of the at least one scene object. The server may obtain the target resource file from the first device on which the target application is installed. This step may then be: the server acquires the target resource file from the first device according to the storage address indicated by the physics engine component of the target application, and parses the second position information of the at least one scene object from the target resource file according to the target format model. The storage address is the address at which the second position information of the at least one scene object is stored on the first device.
It should be noted that the server may send an obtaining instruction to the first device, where the first device obtains the target resource file based on the obtaining instruction, and sends the target resource file to the server, where the obtaining instruction is used to instruct to obtain second location information of at least one scene object of the target application. The process of acquiring the target resource file by the first device is mainly described in steps 1101-1103 included in the file acquisition method of the next method embodiment, and the embodiment of the present invention mainly uses the server side to describe the steps executed by the server.
The second position information refers to position information of an environmental object in the second virtual scene except for the application role, for example, position information of trees, houses and the like in the second virtual scene. The second position information of the scene object may be a position at which a center of gravity of the scene object is located. In one possible implementation, the second position information may be represented by position coordinates, which may be coordinates of the scene object in a three-dimensional coordinate system of the second virtual scene. The position coordinate of the scene object may be a coordinate point of the center of gravity of the scene object in the three-dimensional coordinate system.
In a possible embodiment, the target resource file further comprises shape, orientation or material information of the at least one scene object. Wherein, the shape refers to the outline of the scene object in the second virtual scene, and the direction refers to the orientation of the scene object in the second virtual scene. The material information refers to physical properties such as material and texture of a virtual object represented by the scene object. For example, the material information may include a friction coefficient, an elastic coefficient, and the like of the surface of the scene object. The server may further obtain shape, direction and material information of the at least one scene object from the target resource file according to the target format model. In the target resource file, the shape and the direction of the scene object are stored based on the elements such as the point, the line and the surface which form the scene object, and the server can subsequently reproduce the scene object based on the elements such as the point, the line and the surface. For each scene object, in the target resource file, a vertex and a sequence number can be used to represent a point, a line and a surface element constituting the scene object. The scene object correspondingly comprises a plurality of vertexes, each vertex corresponds to a sequence number, and the sequence number is used for indicating the connection sequence of the vertex corresponding to the sequence number in the plurality of vertexes of the scene object. In this step, for each scene object, the server may analyze second location information, shape, direction, and/or material information of the scene object in the target resource file based on the target format model of the target resource file, and load the second location information, shape, direction, and/or material information into a memory of the server. 
Wherein, the data in the target resource file may include a vertex and a sequence number for representing the position, shape, and orientation of the scene object. The server loads the vertices and sequence numbers representing the position, shape, orientation of the scene object into memory.
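The vertex-and-sequence-number layout described above matches the Wavefront OBJ convention referenced later in this document: `v` records give vertex positions, and `f` records list 1-based vertex sequence numbers that define each face. A minimal reader for just those two record types might look like this (a sketch of the loading logic, not the server's actual loader):

```python
def parse_obj(text):
    """Parse the `v` (vertex) and `f` (face) records of an OBJ-style
    resource file into vertex positions and vertex-index tuples."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(map(float, parts[1:4])))
        elif parts[0] == "f":
            # keep only the vertex index before any '/': "3/1/2" -> 3
            faces.append(tuple(int(p.split("/")[0]) for p in parts[1:]))
    return vertices, faces

# A one-triangle scene object: three vertices and one face record.
obj_text = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
vertices, faces = parse_obj(obj_text)
```

Loading the parsed vertices and index tuples into memory corresponds to the final step of this paragraph.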
In one possible embodiment, the server has installed on it an identification application that can identify the positions of target application roles across a plurality of applications. In one possible scenario, within the identification application, a user may trigger a position identification instruction for a target application among the plurality of applications, and the server performs position identification on the target application role of that target application based on the instruction. This step may then be: when a position identification instruction is received, the server acquires second position information of at least one scene object in the second virtual scene based on the physics engine component of the target application, where the position identification instruction indicates that position identification should be performed on a target application role of the target application. The server can display the application identifiers of the plurality of applications in the identification application's interface, and receives the position identification instruction when the application identifier of the target application among the plurality of applications is triggered. Of course, the position identification instruction may also be triggered by a target voice command, which is not specifically limited in the embodiment of the present invention.
The target format model can be an obj format model recognizable by the server; the obj format model can also be recognized by the data reading logic of various 3D software, so it has high universality. By configuring data reading logic corresponding to the obj format model on the server, and packaging and transmitting files in the obj format, the server can rapidly read the second position information of the at least one scene object from the target resource file.
In the prior art, the resource files of various game applications have different formats, and the server needs to configure a separate data reading logic for each format, so the efficiency of identifying object position anomalies is low and the universality is poor. In the embodiment of the invention, when the second position information is acquired based on the physics engine component, it is packaged into the target resource file according to the obj-format model, so the server only needs one data reading logic based on the obj-format model. In addition, position anomaly identification can be performed on the target application roles of multiple game applications at the same time without setting a data reading logic for each game application, so the efficiency of object position anomaly identification is greatly improved.
202. The server creates a first virtual scene of the target application according to the second position information of the at least one scene object.
The first virtual scene is used for simulating the second virtual scene displayed by the target application on the first device. The server assembles the at least one scene object into the first virtual scene of the target application according to the second position information of the at least one scene object.
In this step, an identification application is installed on the server, and the server can perform position anomaly identification on a target application role of the target application based on the identification application. The server may create a target virtual space in a physics engine component of the server, and add the at least one scene object to the target virtual space according to the second position information of the at least one scene object to obtain the first virtual scene of the target application. The server may establish a three-dimensional coordinate system in the target virtual space, and for each scene object, the process of adding the scene object to the target virtual space may be: the server adds the scene object at the position in the target virtual space corresponding to the position coordinate of the scene object. The server adds the scene objects to the target virtual space one by one according to this adding process, so that the second virtual scene is reproduced as the first virtual scene on the server.
When the first virtual scene is different from the second virtual scene in size, the server scales the position coordinates of the scene object in the second virtual scene according to a scaling coefficient of the first virtual scene relative to the second virtual scene, and then adds the scene object to the target virtual space based on the scaled position coordinates to obtain the scaled first virtual scene. Of course, the server may also create the first virtual scene with the same size as the second virtual scene according to the original size of the second virtual scene. The embodiment of the present invention is not particularly limited to this.
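The scaling by a coefficient of the first virtual scene relative to the second can be sketched in one line; the helper below is an illustrative assumption (uniform scaling about the origin of the scene's coordinate system), not a detail stated in the embodiment.

```python
def scale_position(coord, factor):
    """Scale a position coordinate of the second virtual scene by the scaling
    coefficient of the first virtual scene relative to the second."""
    return tuple(c * factor for c in coord)

# A scene object at (10, 20, 30) in the second virtual scene, placed into a
# hypothetical half-size first virtual scene
scaled = scale_position((10.0, 20.0, 30.0), 0.5)
```

With a coefficient of 1.0 this reduces to the other case mentioned above, where the first virtual scene keeps the original size of the second.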
When the target resource file further includes the shape, direction, and material information of the scene object, the server further determines the outline, direction, and material of the scene object in the target virtual space according to this information. The process of adding the scene object to the first virtual scene may include: in the target virtual space, the server connects the plurality of vertexes corresponding to the at least one scene object in the connection order indicated by the sequence number of each vertex to obtain the at least one scene object, where the plurality of vertexes are used to indicate the position, shape, and direction of the scene object in the first virtual scene; the server then sets the material information of the at least one scene object in the physics engine component. The connected vertexes form point, line, and surface elements, and the scene object is constructed from these elements. A vertex may be a three-dimensional coordinate point in the target virtual space. The server determines the specific form of each scene object in the first virtual scene according to this adding process, and uniquely determines each scene object in the first virtual scene from multiple aspects such as position, shape, direction, and material.
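Connecting vertexes in the order of their sequence numbers to form surface elements can be sketched as below. The sketch assumes, as in obj faces, that the sequence numbers are 1-based and that every three of them describe one triangular face; both assumptions are illustrative, not stated in the embodiment.

```python
def build_scene_object(vertices, indices):
    """Connect vertexes in the order given by their sequence numbers: every
    three 1-based indices describe one triangular face of the scene object."""
    triangles = []
    for i in range(0, len(indices) - 2, 3):
        a, b, c = indices[i:i + 3]
        triangles.append((vertices[a - 1], vertices[b - 1], vertices[c - 1]))
    return triangles

# Four vertexes and six sequence numbers describing the two triangles of a quad
quad_verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
quad = build_scene_object(quad_verts, [1, 2, 3, 1, 3, 4])
```

The triangles produced this way are the "triangle mesh" units that the later steps load into the physics engine.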
For the creation process of the target virtual space, the server may create a three-dimensional virtual space in a physics engine component of the server and set physical parameters of the three-dimensional virtual space, that is, perform initialization processing on the three-dimensional virtual space so that it can be used to simulate a real physical space, thereby obtaining the target virtual space. Further, the server may also establish a three-dimensional coordinate system in the target virtual space. The physical parameters may be set based on need, which is not specifically limited in the embodiment of the present invention. For example, the physical parameters may include a gravitational acceleration, and the server may set the gravitational acceleration of the target virtual space to 9.8 m/s². Of course, the physical parameters may also include air resistance, friction coefficient, and the like.
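The initialization of the target virtual space amounts to fixing a few physical parameters before any scene object is loaded. The sketch below is a hedged illustration only; the field names and default values (other than the 9.8 m/s² gravity named above) are assumptions, and a real implementation would configure these through the physics engine's own scene-creation API.

```python
from dataclasses import dataclass, field

@dataclass
class TargetVirtualSpace:
    """A three-dimensional virtual space initialized with physical parameters
    so that it can simulate a real physical space."""
    gravity: tuple = (0.0, -9.8, 0.0)   # gravitational acceleration, m/s^2
    air_resistance: float = 0.0          # illustrative extra parameter
    friction_coefficient: float = 0.5    # illustrative extra parameter
    scene_objects: list = field(default_factory=list)

space = TargetVirtualSpace()
```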
The physics engine component may be a PhysX physics engine. In the PhysX physics engine, the server may create each scene object based on the vertexes and sequence numbers in units of triangle meshes, and load each scene object into the target virtual space established in the PhysX physics engine to obtain the first virtual scene.
As shown in fig. 3, the server establishes the first virtual scene in the physics engine component. The first virtual scene may be a virtual grassland scene, and fig. 3 shows the scene objects of part of this virtual grassland scene, which may include trees, houses, and other scene objects. Fig. 4 is a schematic diagram of the actual display effect of this part of the virtual grassland scene, and the actual scene picture of the first virtual scene can be seen more clearly in fig. 4.
It should be noted that steps 201-202 are actually one implementation of the step in which the server obtains the first virtual scene of the target application: the second position information of the scene objects is first obtained through the storage space indicated by the physics engine component of the target application, and the first virtual scene is then created based on that second position information. In another possible implementation, the server may obtain and store the second position information of the at least one scene object in advance, directly read it from the local storage space, and create the first virtual scene according to it. Of course, the step of acquiring the first virtual scene of the target application may also be implemented in other ways, which is not specifically limited in the embodiment of the present invention.
203. The server acquires first position information of at least one target application role in the second virtual scene.
A user of the target application may control the target application role to perform various behaviors in the second virtual scene displayed on the first device of the target application. The server may obtain the first position information of the at least one target application role in the second virtual scene from a target device that stores this first position information.
The server may obtain the first position information in real time, or may obtain it from a historical behavior record of the target application role. Accordingly, this step can be implemented in the following two ways.
In the first mode, the server receives first location information of the at least one target application role sent by the third device.
The third device is the terminal of the user corresponding to the at least one target application role, or a background server of the target application. The server can thus identify position anomalies of the target application role in real time during a live game session of the target application's user.
The first position information may include the position of the target application role in the second virtual scene. This position may be acquired as follows: the third device acquires the position coordinate of the center of gravity of the target application role in the second virtual scene in real time and sends it to the server, and the server receives the position coordinate sent by the third device. The position coordinate may be the barycentric coordinate of the target application character.
In a possible implementation, the first location information may further include a location of an associated object of the target application role in the second virtual scene. Then this step may be: the server receives first location information of at least one target application role sent by a third device. The server obtains the position of the target application role in the second virtual scene according to the first position information, or the server obtains the positions of the target application role and the associated object of the target application role in the second virtual scene according to the first position information.
The associated object refers to an object having an association relationship with the target application role in terms of position, behavior logic, and the like. In the embodiment of the present invention, the associated object may include, but is not limited to: a body part of the target application character, a position-associated object of the target application role, or a behavior display object of the target application role. A body part of the target application character may include a limb of the character or a virtual decorative article on the limb. The position-associated object of the target application role may include, but is not limited to, a virtual weapon, virtual vehicle, virtual backpack, and the like carried by the target application role; for example, the virtual weapon may be a virtual cutter or the like, and the virtual vehicle may be a virtual car, a virtual parachute, a virtual skateboard, etc. The behavior display object of the target application role may include a virtual object whose display is triggered by a target behavior of the target application role, for example, a virtual bullet fired by the target application character, a target object hit by the virtual bullet, or an explosive object appearing in the area where the target application character throws a virtual grenade; as another example, it may be the observation object that the target application character can see when using a virtual telescope. The position of the associated object may also be represented by a position coordinate, and the process of the server acquiring the position of the associated object is the same as the process of acquiring the position of the target application role, which is not described herein again.
In a second manner, the server receives the historical behavior record of the at least one target application role sent by the fourth device, and acquires the first location information of the at least one target application role from the historical behavior record.
The historical behavior record is used for indicating the historical behavior of the at least one target application role in the second virtual scene, and the fourth device is a background server of the target application.
In this step, the server may send an obtaining request to the fourth device, where the obtaining request is used to request to obtain the historical behavior record of at least one target application role of the target application. The acquisition request may carry an application identifier of the target application. The fourth device receives the acquisition request, and sends the historical behavior record of the at least one target application role to the server based on the acquisition request, and the server receives the historical behavior record of the at least one target application role.
The historical behavior record may be playback data of the target application, the playback data includes first location information in the second virtual scene during the historical behavior of the target application role, and the server may extract the first location information of the target application role from the historical behavior record.
In a possible implementation, the acquisition request may further be used to request the historical behavior record of at least one target application role that meets a target condition, and may carry the target condition. The target condition may include, but is not limited to: historical behavior records of the at least one target application role in a target period, or historical behavior records of target application roles belonging to a target object type, and the like. The target period and the target object type may be set based on need, which is not specifically limited in the embodiment of the present invention. For example, the target period may be 12:00 to 24:00 or 20:00 to 22:00 each day. The target object type may be a premium player type whose game level exceeds a target level, or a hardcore player type whose game frequency exceeds a target frequency, or the like.
Of course, the server may further obtain historical behavior records of a plurality of target application roles, extract, based on the target condition, a historical behavior record of at least one target application role that meets the target condition from the historical behavior records of the plurality of target application roles, and obtain the first location information of the at least one target application role from the historical behavior record of the at least one target application role that meets the target condition.
The first location information at least includes a location of the target application role in the second virtual scene, and in addition, the first location information may also include a location of an associated object of the target application role in the second virtual scene. The process of the server obtaining the first location information of the at least one target application role is the same as the process of obtaining the first location information in the first mode, and is not described herein again.
In the first mode, the server may likewise obtain, from the third device, the first position information of at least one target application role that satisfies the target condition, based on the target condition. The process of the server obtaining this first position information from the third device is the same as in the second mode, and is not repeated here.
In a possible implementation manner, the server may also obtain the shape, the direction, and the like of the at least one target application role from the third device or the fourth device, and taking the third device as an example, the third device obtains the shape and the direction of the at least one target application role and sends the shape and the direction of the at least one target application role to the server. Wherein, the shape of the target application role is used for representing the outline of the target application role, and the direction is used for representing the orientation of the target application role. Of course, the server may also obtain information related to the vitality of the at least one target application character from the third device, for example, information such as the vitality index, the blood volume, the fighting level, and the like of the target application character. The embodiment of the present invention is not particularly limited to this.
In a possible implementation manner, the third device or the fourth device may further send an application identifier of the target application to the server, and the server receives the application identifier sent by the third device or the fourth device, determines, based on the application identifier, a target application corresponding to a target application role, and then performs position anomaly identification on the target application role based on the first virtual scene of the target application.
204. The server determines the position identification result of the at least one target application role according to the first position information of the at least one target application role and the at least one scene object in the first virtual scene.
The position identification result is used to indicate whether the position of the at least one target application role is abnormal. Whether a position is abnormal may be determined based on a target rule of the target application, where the target rule is a rule of the target application that restricts the behavior of target application roles. For example, the target rules may include: the target application character must detour when encountering an obstacle in the second virtual scene, the target application character can only move on the ground of the second virtual scene, and the like. The server can perform position anomaly identification on the target application role based on the positions of the scene objects in the first virtual scene. In this step, the server may determine third position information of the at least one scene object in the first virtual scene; identify position anomalies of the at least one target application role according to the first position information of the at least one target application role and the third position information of the at least one scene object; and, when the position of the at least one target application role overlaps with the position of any scene object, determine that the position of the at least one target application role is abnormal.
In a possible embodiment, the server may create a simulated character in the first virtual scene based on the first location information of the at least one target application character, the simulated character being used to simulate the location of the target application character in the second virtual scene. The server can also reconstruct a simulated role with characteristics matched with the target application role in the first virtual scene based on the shape, direction and vitality related information of the at least one target application role, and the server carries out position anomaly identification on the target application role according to the position of the simulated role in the first virtual scene and the position of at least one scene object in the first virtual scene.
In another possible implementation, the server may perform anomaly identification of the target application role directly based on the first position information of the target application role, using the positions of all scene objects in the first virtual scene; alternatively, the server may select only the scene objects around the target application role. In the latter case, the server acquires third position information of at least one scene object within a target scene range of the target application role according to the first position information of the target application role, and performs position anomaly identification according to the third position information of the at least one scene object within the target scene range and the first position information of the target application role. The target scene range may be the range less than a threshold distance from the target application role; of course, it may be set based on need, for example, a circular scene area less than 30 meters from the target application character.
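Restricting the check to the target scene range can be sketched as a simple distance filter. The dictionary shape of a scene object and the default 30-meter radius are illustrative assumptions taken from the example above, not a prescribed data layout.

```python
import math

def objects_in_target_range(role_pos, scene_objects, radius=30.0):
    """Keep only the scene objects whose position lies within `radius` of the
    target application role, so the anomaly check considers nearby geometry."""
    return [o for o in scene_objects
            if math.dist(role_pos, o["position"]) < radius]

# Two hypothetical scene objects; only the nearer one falls in the range
scene = [{"name": "tree", "position": (10.0, 0.0, 0.0)},
         {"name": "house", "position": (50.0, 0.0, 0.0)}]
nearby = objects_in_target_range((0.0, 0.0, 0.0), scene)
```

Filtering first keeps the per-role identification cost proportional to the local geometry rather than to the whole first virtual scene.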
In a possible implementation manner, when the server further determines the shape, the direction, and the material information of the scene object in the first virtual scene, the server may further perform position anomaly identification on the target application role according to the third position information, the shape, the direction, and the material information of the scene object and the first position information of the target application role, and when the position of the target application role overlaps with the space occupied by the scene object, the server determines that the position of the target application role is anomalous. And when the position of the target application role is not overlapped with the space occupied by the scene object, determining that the position of the target application role is normal.
In the embodiment of the invention, the physics engine component of the server has an identification function: a target identification strategy may be configured in the physics engine component in advance, and the server can identify position anomalies of the target application role based on the target identification strategy to realize the identification function. The process may be: the server performs position anomaly identification on the at least one target application role, based on the target identification strategy of the physics engine component of the server, according to the first position information of the at least one target application role and the third position information of the at least one scene object in the first virtual scene. The target identification strategy may include, but is not limited to, a target identification algorithm, an identification cycle, and the like. The server performs position anomaly identification on the target application role through the target identification algorithm, according to the first position information of the target application role and the third position information of the scene objects, at the identification cycle.
The target identification algorithm may include a collision query identification algorithm, such as a ray collision query algorithm or an overlap query algorithm. The ray collision query algorithm identifies the target application role by defining a starting point coordinate and a ray vector. The overlap query algorithm identifies the target application role by checking, based on the shape of the scene object, whether the target application role and a scene object overlap each other. Taking the ray collision query algorithm as an example, this step may be: the server determines the starting point coordinate and ray vector of the target application role based on a plurality of pieces of first position information of the at least one target application role at a plurality of consecutive acquisition times; the server identifies whether the at least one target application role collides with at least one scene object around it according to the starting point coordinate, the ray vector, and the third position information of the at least one surrounding scene object; when the at least one target application character collides with at least one surrounding scene object, the server determines that the position of the at least one target application character overlaps with the position of that scene object. The server may rely on a plurality of pieces of first position information of the target application role over consecutive time periods, where each piece of first position information corresponds to an acquisition time, which may be the timestamp of the moment when the target application role was at the corresponding position.
If the target application role collides with a surrounding scene object in the first virtual scene, the server determines that the positions of the target application role and the scene object overlap and that the position of the target application role is abnormal. The server may determine the starting point coordinate from the piece of first position information with the earliest acquisition time, and determine the ray vector from the remaining pieces of first position information in the order of their acquisition times. The server may connect the position coordinates corresponding to the plurality of pieces of first position information to establish the ray vector of the target application role.
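Deriving the starting point and ray vector from timestamped position samples can be sketched as follows. The reduction of the "connected" positions to a single direction from the earliest to the latest sample is a simplifying assumption for illustration; the embodiment's actual connection of intermediate positions may be richer.

```python
def ray_from_samples(samples):
    """samples: (timestamp, (x, y, z)) records of the target application role
    at consecutive acquisition times. The earliest record gives the starting
    point coordinate; the ray vector points toward the latest position."""
    ordered = sorted(samples, key=lambda s: s[0])
    start = ordered[0][1]
    end = ordered[-1][1]
    ray = tuple(e - s for s, e in zip(start, end))
    return start, ray

# Three hypothetical samples arriving out of order
records = [(2.0, (1.0, 0.0, 0.0)), (1.0, (0.0, 0.0, 0.0)), (3.0, (2.0, 0.0, 0.0))]
start, ray = ray_from_samples(records)
```

The resulting (start, ray) pair is exactly what a physics engine's raycast query consumes when checking for a collision along the role's path.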
Of course, the server may also define a callback function, call the target identification algorithm, and identify the target application role. In one possible scenario, the server may identify, based on the ray collision query algorithm and the starting point coordinate and ray vector of the target application character, whether the character collides with the ground, the walls of a house, or other obstacles in the virtual scene. Alternatively, the server may perform enclosure identification on the target application role based on the overlap query algorithm and the ray collision query, that is, identify whether the target application role is enclosed by surrounding obstacles or overlaps with them. The process may be: the server generates a three-dimensional stereoscopic object corresponding to the at least one target application role in the first virtual scene according to the first position information of the at least one target application role; the server identifies whether the at least one scene object overlaps with the three-dimensional stereoscopic object according to the three-dimensional stereoscopic object and the third position information of the at least one scene object; when the at least one scene object overlaps with the three-dimensional stereoscopic object, the server determines that the position of the at least one target application character overlaps with the position of that scene object. The three-dimensional stereoscopic object is used to represent the form of the target application role in the first virtual scene; it may be a capsule body, and the server may generate the capsule body according to a capsule-generation algorithm and the first position information.
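The overlap query can be sketched by approximating the role's capsule body with a bounding sphere and testing it against a scene object's axis-aligned bounding box. Both approximations are assumptions for illustration only; PhysX performs overlap queries against the actual capsule and triangle-mesh geometry.

```python
def sphere_aabb_overlap(center, radius, box_min, box_max):
    """Overlap query sketch: clamp the sphere center into the box, then
    compare the squared distance to that clamped point with radius^2."""
    d2 = 0.0
    for c, lo, hi in zip(center, box_min, box_max):
        nearest = max(lo, min(c, hi))
        d2 += (c - nearest) ** 2
    return d2 <= radius * radius

# A role's bounding sphere brushing a house-shaped box, and one far away from it
touching = sphere_aabb_overlap((0.0, 0.0, 0.0), 1.0, (0.5, 0.5, 0.5), (2.0, 2.0, 2.0))
apart = sphere_aabb_overlap((5.0, 5.0, 5.0), 1.0, (0.5, 0.5, 0.5), (2.0, 2.0, 2.0))
```

An overlap in this test is what the step above treats as the role's position coinciding with a scene object, i.e. a position anomaly.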
Certainly, the server may also segment the target application role into a plurality of segments and determine, by the ray collision query algorithm and based on the starting point coordinate and ray vector of the target application role, whether each segment collides with a scene object; if so, the scene object blocks the target application role in space at that segment. That is, if the target application character is on one side of the scene object, it cannot see objects on the other side of the scene object through it, and the visibility of the target application character is thereby identified.
In a possible implementation, if the server also obtains the position of the associated object of the target application role, that is, the first position information further includes the position of the associated object, the server may identify the associated object according to the first position information of the target application role and the third position information of the scene object; when the position of the associated object overlaps with the position of a scene object, the server determines that the target application role is a malicious object. In a possible scenario, taking the associated object as a virtual vehicle carried by the target application role as an example, when the virtual vehicle overlaps with a scene object, the server determines that the target application role is a malicious object.
As shown in fig. 5, because the virtual scene of the game is reconstructed, the embodiment of the present invention has high accuracy when facing a complex model. For example, in the visibility determination, there is a complex model between the two points A and B in fig. 5; the complex model is a complex sphere object, and the connecting line between A and B passes just outside the bottom arc of the model without passing through the sphere object, so the line AB is not blocked by any object. That is, the line of sight of the target application character at point A can normally pass through and reach point B. In the related art, the complex object is blurred into a box by a simple threshold, and the AB line would be wrongly judged to pass through the inside of the complex object; the embodiment of the invention can more accurately determine that the AB line does not pass through the sphere object, so the determination result better matches the actual scene. Therefore, especially when the virtual scene includes scene objects with complex structures, where the prior art, judging only by coordinate thresholds, has low accuracy or cannot judge at all, the embodiment of the present invention can judge accurately, so the accuracy of role position anomaly identification can be greatly improved. Fig. 6 is a schematic view of the actual interface of the virtual scene in fig. 5, and the actual scene picture can be seen more clearly in fig. 6.
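The visibility determination against an exact sphere, as opposed to a threshold-blurred box, can be sketched as a closest-point test; the geometry below is an illustrative stand-in for the scene of fig. 5, with coordinates chosen as assumptions.

```python
def segment_blocked_by_sphere(a, b, center, radius):
    """Project the sphere center onto segment AB and compare the closest
    distance with the radius: the sight line is blocked only if the segment
    actually passes within the sphere, not merely near it."""
    ab = tuple(q - p for p, q in zip(a, b))
    ac = tuple(q - p for p, q in zip(a, center))
    t = sum(x * y for x, y in zip(ab, ac)) / sum(x * x for x in ab)
    t = max(0.0, min(1.0, t))                      # clamp to the segment
    closest = tuple(p + t * d for p, d in zip(a, ab))
    d2 = sum((c - p) ** 2 for c, p in zip(center, closest))
    return d2 <= radius * radius

# AB passes 2 units from a unit sphere's center: outside the sphere, visible
blocked = segment_blocked_by_sphere((0.0, 0.0, 0.0), (10.0, 0.0, 0.0),
                                    (5.0, 2.0, 0.0), 1.0)
```

A box-threshold test around the same sphere would report the line as blocked; the exact test above reproduces the fig. 5 outcome that the AB sight line is clear.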
Further, when the position of the target application role is abnormal, the server may send a notification message to the fourth device so that the fourth device takes corresponding penalty measures against the target application role, or the server may directly send the notification message to the terminal where the target application role is located and take corresponding penalty measures, for example, banning the game account of the target application role. As shown in fig. 7, the server may send a notification message to the terminal where the target application role is located, the terminal may display the notification message, and the server may directly force the target application role offline. Fig. 8 is a schematic diagram of the actual display of the notification message in fig. 7, and the actual display interface of the notification message can be seen more clearly in fig. 8.
As shown in fig. 9, fig. 9 is an overall architecture diagram of the embodiment of the present invention. The left side represents the first device on which the target application is installed, and the right side represents the server. The target application on the first device includes a game presentation layer, a game engine layer, and a physics engine layer, and the physics engine layer includes the storage address of the object data of each scene object. The first device may obtain the position information, shape, direction, and/or material information of a scene object based on the storage address, encapsulate this information into a target resource file according to a target format model, and send the target resource file to the server. The server creates a first virtual scene in its physics engine component based on the target resource file; the first virtual scene is used to simulate the second virtual scene of the target application displayed by the first device when the target application runs on the first device. After creating the first virtual scene, the server may perform position abnormality identification on each target application role of the target application in the first virtual scene, for example, identifying whether the target application role is hidden in or surrounded by obstacles, identifying the visibility of the target application role, and the like.
Fig. 10 is a flowchart of the server identifying a position anomaly. Based on the technical process described in steps 201 to 204 above, the embodiment of the present invention takes the flowchart shown in fig. 10 as an example to introduce the position anomaly identification process. As shown in fig. 10, the server initializes a physical scene based on a physics engine component and parses the position information, shape, direction, material information, and the like of each scene object from the target resource file. Taking each scene object as a unit, the server reads the vertices and index numbers representing the position information, shape, direction, and the like of the scene object into memory, creates each scene object triangle mesh by triangle mesh according to the vertices and index numbers in memory, and loads each scene object into the PhysX-initialized physical scene to obtain the first virtual scene. Then, based on the position coordinates of the target application role, the server performs position abnormality identification on the target application role in the first virtual scene, for example, ray collision query identification or overlap query identification.
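The mesh-loading step above, reading vertices and index numbers and grouping them into triangles, can be sketched as follows (an illustrative Python sketch; in the embodiment the resulting meshes are loaded into the physics-engine scene):

```python
def build_triangles(vertices, indices):
    """Group a scene object's index buffer into triangles, three indices per face."""
    if len(indices) % 3 != 0:
        raise ValueError("index buffer length must be a multiple of 3")
    return [tuple(vertices[i] for i in indices[k:k + 3])
            for k in range(0, len(indices), 3)]

# A unit quad stored as 4 vertices and 6 index numbers yields 2 triangles.
quad_vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
triangles = build_triangles(quad_vertices, [0, 1, 2, 0, 2, 3])
print(len(triangles))  # 2
```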
In the embodiment of the invention, the server can simulate, through the first virtual scene, the second virtual scene displayed by the target application on the first device, and determine the position identification result of at least one target application role according to the first position information of the target application role in the second virtual scene and at least one scene object in the first virtual scene. Since the target application role is identified in the first virtual scene based on the scene objects, whether the position of the target application role is abnormal can be accurately identified, which greatly improves the accuracy of identifying the position abnormality of the application role.
Fig. 11 is a flowchart of an application role position abnormality identification method according to an embodiment of the present invention, where the method may be applied to a first device, and the first device may be a terminal, as shown in fig. 11, where the method includes:
1101. the terminal acquires second position information of at least one scene object of the target application based on a physical engine component of the target application.
Wherein the second location information is used to indicate a location of a scene object in a second virtual scene displayed on the first device by the target application, and the physics engine component is used to indicate a storage address of the second location information. And the terminal acquires the second position information of the at least one scene object from the storage space corresponding to the storage address according to the storage address indicated by the physical engine component of the target application.
The terminal may obtain the information based on an acquisition request of the server, and the process may be: when the terminal receives an acquisition request of the second device, the terminal acquires scene data of the second virtual scene from the storage address indicated by the physics engine component of the target application, where the scene data is used to indicate at least one scene object included in the second virtual scene; the terminal then extracts the second position information of the at least one scene object from the scene data. The acquisition request is used to request the second position information of the at least one scene object.
In a possible embodiment, the scene data further comprises shape, orientation and/or material information of the scene object, etc. The terminal may further extract second position information, shape, orientation, and/or material information of the at least one scene object from the scene data.
The terminal may be installed with a data acquisition application, and may selectively acquire the scene data of scene objects of a target type based on the object type. When an acquisition request of the server is received, the terminal displays a data selection interface, and acquires target scene data in the second virtual scene from the storage address according to the target type selected among the plurality of object type options in the data selection interface, where the target scene data is used to indicate the scene objects of the target type in the second virtual scene. The plurality of object type options respectively indicate a plurality of object types. The object types may include static objects and dynamic objects: a static object is an object that is stationary in the virtual scene, such as a house, a tree, or a stone, while a dynamic object is an object that can move within the virtual scene, such as a car or a person. The scene data may include attribute information of a scene object, where the attribute information includes type information indicating whether the object type is a static object or a dynamic object. The terminal can determine the scene objects of the target type from the plurality of scene objects according to the type information in the attribute information and the selected target type. The scene data may be, for example, shape data.
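The type filtering described above can be illustrated with a minimal Python sketch (the attribute layout is an assumption for illustration, not the embodiment's actual data format):

```python
def select_objects(scene_objects, target_type):
    """Keep only the scene objects whose type attribute matches the selected option."""
    return [obj for obj in scene_objects if obj["type"] == target_type]

# Hypothetical scene data: static objects (house, tree) and dynamic objects (car, person).
scene_objects = [
    {"name": "house", "type": "static"},
    {"name": "tree", "type": "static"},
    {"name": "car", "type": "dynamic"},
    {"name": "person", "type": "dynamic"},
]
print([obj["name"] for obj in select_objects(scene_objects, "static")])  # ['house', 'tree']
```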
In a possible implementation manner, the terminal may further determine the acquisition manner for the scene objects, which may include full acquisition and segmented acquisition. Full acquisition refers to acquiring the scene data of all scene objects included in the second virtual scene at one time. Segmented acquisition refers to first segmenting the second virtual scene and then acquiring the scene data of the scene objects in the second virtual scene over multiple passes. For example, the terminal may divide the second virtual scene into a plurality of sub virtual scenes and, taking each sub virtual scene as a unit, obtain the position information, direction, shape, and/or material information of the scene objects included in one sub virtual scene at a time.
In a possible scenario, for a virtual scene with a small data volume, such as the virtual scene corresponding to a small map, the terminal may select the full export manner to obtain the scene data; for a virtual scene with a large data volume, such as the virtual scene corresponding to a large map, the terminal may select the segmented export manner.
It should be noted that, because a small map occupies little memory, the terminal can read all the scene data of the small map into memory at one time, whereas a large map is only partially loaded. Therefore, when full export is selected, the terminal can directly export all the scene data in memory, that is, directly acquire the scene data of the scene objects in the entire range of the virtual scene. For a large map, the whole map needs to be divided into a plurality of regions, and the object data of the scene objects is then exported region by region. The number of divisions is specified by the user, but the user needs to move to the specified map region to ensure that the map of that region is fully loaded into memory. Fig. 12 shows the data selection interface: the user may choose to export in obj format or in Pxbin format, another format recognizable by the second device; select whether to acquire static objects or dynamic objects; and select the export manner, where direct full export means acquiring all the position information, direction, shape, and/or material information, etc., of the scene objects in the entire range of the second virtual scene. The data selection interface also includes the division number and a specified export area: if the user selects export after scene segmentation, the user can further specify the number of divisions, or select the target scene area to be acquired after segmentation. The division number may be the number of divisions of the second virtual scene in the horizontal and vertical directions; for example, with a division number of 3 × 4, the second virtual scene is divided into 12 sub virtual scenes.
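The division of the second virtual scene by a division number such as 3 × 4 can be sketched as follows (illustrative Python; the coordinate layout of the map extent is assumed):

```python
def split_scene(xmin, zmin, xmax, zmax, rows, cols):
    """Divide the horizontal extent of the scene into rows x cols sub virtual scenes,
    returning each sub-scene as (xmin, zmin, xmax, zmax)."""
    dx = (xmax - xmin) / cols
    dz = (zmax - zmin) / rows
    return [(xmin + c * dx, zmin + r * dz, xmin + (c + 1) * dx, zmin + (r + 1) * dz)
            for r in range(rows) for c in range(cols)]

# A division number of 3 x 4 yields 12 sub virtual scenes.
tiles = split_scene(0, 0, 400, 300, rows=3, cols=4)
print(len(tiles))  # 12
```

Scene data of the objects in each sub-scene would then be exported one sub-scene at a time.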
In another possible implementation, the terminal may acquire the terrain in the virtual scene separately from the scene objects other than the terrain. When a first acquisition instruction is received, the terminal acquires the scene data corresponding to the terrain in the second virtual scene; when a second acquisition instruction is received, the terminal acquires the scene data corresponding to the scene objects other than the terrain in the second virtual scene. It should be noted that the user may select whether to export the terrain, and the terminal may determine whether an object is terrain according to the data type of the physical object and filter accordingly. The terminal can first acquire the scene data corresponding to the terrain in the virtual scene from memory and then acquire the scene data of the other scene objects; when subsequently acquiring data of other scene objects, the terminal no longer needs to acquire the terrain data again, which can greatly reduce the data acquisition time.
In a possible embodiment, the terminal may also acquire only one or more scene objects in the virtual scene. When the terminal receives a third acquisition instruction, which instructs the terminal to acquire the scene data of a target number of scene objects in the second virtual scene, the terminal determines the division number according to the number of scene objects indicated by the instruction, divides the second virtual scene into a plurality of sub virtual scenes according to the division number, and acquires the scene data corresponding to the target number of scene objects in the segmented export manner. The terminal may select segmented export and choose not to acquire the terrain, and the division number may be set based on the target number of scene objects to be acquired. If the terminal needs to acquire a single scene object, the division number may be made larger to ensure that one virtual scene area includes only one scene object; the specific value is set according to the size of the map, which is not specifically limited in the embodiment of the present invention.
As shown in fig. 13, fig. 13 shows the acquired virtual aircraft on the initial island, where the terminal may set the division number large enough to ensure that only the virtual aircraft is acquired this time. As shown in fig. 14, fig. 14 is a schematic diagram of the actual display effect of the virtual aircraft in fig. 13, and the actual display effect of the virtual aircraft can be seen more clearly in fig. 14.
1102. And the terminal stores the second position information of the at least one scene object into a target resource file according to the target format model.
The terminal may encapsulate the second location information of the at least one scene object into the target resource file according to the target format model. The terminal may further obtain the shape, direction and/or material information of the scene object. In the physics engine component of the target application, the shape of the scene object may specifically include one or more basic shapes that make up the scene object, for example, a scene object may consist of a cuboid, a sphere, or a cube. The shape may specifically be represented by a shape vector. The direction may specifically be represented by a direction matrix.
In one possible embodiment, information such as barycentric coordinates, shape vectors, direction matrices, and material of the scene object may be stored in the memory address indicated by the physics engine of the target application. In a possible embodiment, the target format model may be an obj format model, the target format model is a model that stores scene objects based on elements such as points, lines, and planes that constitute the scene objects, and the server may subsequently reproduce the scene objects based on the elements such as points, lines, and planes. In this step, the terminal may convert the barycentric coordinate, the shape vector, and the direction matrix into coordinates of a point, a line, and a surface element according to a target format model based on the barycentric coordinate, the shape vector, and the direction matrix, and encapsulate the coordinates of the point, the line, and the surface element in the target resource file. The terminal can determine the outline of the scene object based on the shape vector, determine the orientation of the scene object based on the direction matrix, determine the position of the scene object based on the barycentric coordinate, determine the corresponding point, line and surface element of the scene object in the target resource file according to the outline and the orientation of the scene object, determine the corresponding coordinate of the point, line and surface element based on the position, and accurately determine the concrete representation form of the scene object in the second virtual scene from the three angles of the position, the shape and the direction.
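The conversion of barycentric coordinate, shape, and direction into point and surface elements of an obj-style target resource file can be sketched for a single box shape (illustrative Python; the vertex ordering and face layout here are one possible choice, not the embodiment's exact output):

```python
def box_to_obj(center, half_extents, rotation):
    """Emit obj-style vertex ('v') and face ('f') lines for one oriented box shape.

    rotation is a 3x3 row-major direction matrix; a scene object composed of
    several basic shapes would repeat this conversion per shape.
    """
    corners = []
    for sx in (-1, 1):
        for sy in (-1, 1):
            for sz in (-1, 1):
                local = (sx * half_extents[0], sy * half_extents[1], sz * half_extents[2])
                # Rotate the local corner, then translate by the barycentric coordinate.
                corners.append(tuple(
                    sum(rotation[i][j] * local[j] for j in range(3)) + center[i]
                    for i in range(3)))
    lines = ["v %g %g %g" % corner for corner in corners]
    # Six quad faces over the eight corners (obj indices are 1-based).
    for face in [(1, 2, 4, 3), (5, 7, 8, 6), (1, 3, 7, 5),
                 (2, 6, 8, 4), (1, 5, 6, 2), (3, 4, 8, 7)]:
        lines.append("f %d %d %d %d" % face)
    return "\n".join(lines)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(box_to_obj((0, 0, 0), (1, 2, 3), identity).splitlines()[0])  # v -1 -2 -3
```

The server can later rebuild the scene object from exactly these point and surface elements.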
1103. And the terminal sends the target resource file to the second equipment.
The terminal sends the target resource file to the server, where the target resource file is used to instruct the establishment of a first virtual scene on the second device and the identification of the position of the target application role based on the first virtual scene. The server receives the target resource file and performs position abnormality identification on the target application role through steps 201 to 204 of the above method embodiment. The terminal may also send the application identifier of the target application to the server.
It should be noted that the terminal may obtain the scene data of the virtual scene of the target application based on the physics engine component of the target application and send the target resource file to the server, so that the server can reproduce the virtual scene of the target application in its own physics engine component based on the target resource file and perform role position abnormality identification in that scene, ensuring the accuracy of the identification. Moreover, the terminal can acquire the scene data of the scene objects through full acquisition or segmented acquisition, so that the terminal can acquire the scene data flexibly, meet any requirement for the scene data of one or more scene objects, and be applicable to various scene data acquisition scenarios, greatly improving the flexibility and practicability of the file acquisition process.
In the embodiment of the present invention, the terminal may obtain, based on the physical engine component of the target application, second location information of at least one scene object of the target application, and send the target resource file to the server based on the target format model. The original scene data of the scene object can be directly acquired based on the physical engine component, so that the accuracy and the efficiency of acquiring the file are improved. In addition, the terminal can also acquire the target resource file based on the target format model, so that the server can acquire the file capable of being identified, the data reading efficiency is improved, and the application role position abnormity identification efficiency is further improved.
Fig. 15 is a block diagram of an application role position abnormality identification apparatus according to an embodiment of the present invention. The apparatus may be applied to a second device, which may be a server. Referring to fig. 15, the apparatus includes: an acquisition module 1501 and a determination module 1502.
An obtaining module 1501, configured to obtain a first virtual scene of a target application, where the first virtual scene is used to simulate a second virtual scene displayed by the target application on a first device;
the obtaining module 1501 is further configured to obtain first position information of at least one target application role in the second virtual scene;
a determining module 1502, configured to determine, according to the first location information of the at least one target application role and the at least one scene object in the first virtual scene, a location identification result of the at least one target application role, where the location identification result is used to indicate whether the location of the at least one target application role is abnormal.
Optionally, the obtaining module 1501 is further configured to obtain second location information of at least one scene object in the second virtual scene based on a physical engine component of the target application, where the physical engine component is configured to indicate a storage address of the second location information of the at least one scene object; and creating a first virtual scene of the target application according to the second position information of the at least one scene object.
Optionally, the obtaining module 1501 is further configured to obtain a target resource file from a first device in which the target application is installed according to the storage address indicated by the physical engine component of the target application; and according to the target format model, analyzing second position information of the at least one scene object from the target resource file.
Optionally, the target resource file further includes at least one of shape, direction, or material information of the at least one scene object.
Optionally, the obtaining module 1501 is further configured to create a target virtual space in a physical engine component of the second device; and adding the at least one scene object into the target virtual space according to the second position information of the at least one scene object to obtain a first virtual scene of the target application.
Optionally, the obtaining module 1501 is further configured to connect, in the target virtual space, multiple vertices corresponding to the at least one scene object according to a connection line sequence indicated by a sequence number corresponding to each vertex, to obtain the at least one scene object, where the multiple vertices are used to indicate a position, a shape, and a direction of the scene object in the first virtual scene; setting material information of the at least one scene object in the physics engine component.
Optionally, the obtaining module 1501 is further configured to scale second position information of the at least one scene object according to a scaling coefficient of the first virtual scene relative to the second virtual scene when the size of the first virtual scene is different from that of the second virtual scene; and adding the at least one scene object to the target virtual space based on the scaled second position information to obtain the first virtual scene.
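The scaling described above can be sketched as follows (illustrative Python; the first position information of the target application role would be scaled with the same coefficient so that subsequent queries remain consistent):

```python
def scale_positions(positions, coefficient):
    """Scale position information by the scaling coefficient of the first virtual
    scene relative to the second virtual scene. The role's position information
    must be scaled with the same coefficient before any position query."""
    return [tuple(c * coefficient for c in p) for p in positions]

# A scene object at (10, 0, 20), with the first scene half the size of the second.
print(scale_positions([(10, 0, 20)], 0.5))  # [(5.0, 0.0, 10.0)]
```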
Optionally, the determining module 1502 is further configured to determine third position information of the at least one scene object in the first virtual scene; performing position anomaly identification on the at least one target application role based on a target identification strategy according to the first position information of the at least one target application role and the third position information of the at least one scene object; and when the position of the at least one target application role is overlapped with the position of any scene object, determining that the at least one target application role is a malicious object.
Optionally, the determining module 1502 is further configured to determine, based on a plurality of first position information of a plurality of continuous acquisition times of the at least one target application role, a start coordinate and a ray vector of the target application role; identifying whether the at least one target application role collides with at least one scene object around the target application role according to the starting point coordinates, the ray vector and third position information of the at least one scene object around the target application role; when the at least one target application role collides with the surrounding at least one scene object, the position of the at least one target application role is determined to overlap with the position of any one scene object.
Optionally, the determining module 1502 is further configured to generate a three-dimensional stereoscopic object of the at least one target application role corresponding to the first virtual scene according to the first position information of the at least one target application role; identifying whether the at least one scene object overlaps with the three-dimensional stereoscopic object according to the third position information of the three-dimensional stereoscopic object and the at least one scene object; when the at least one scene object overlaps the three-dimensional stereo object, determining that the position of the at least one target application character overlaps the position of any one scene object.
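The overlap identification between the role's three-dimensional stereoscopic object and a scene object can be illustrated, for the simplified case of axis-aligned bounding boxes, as follows (a Python sketch; a physics-engine overlap query would also handle rotated and non-box shapes):

```python
def make_aabb(center, half_extents):
    """Axis-aligned bounding box as a (min corner, max corner) pair."""
    return (tuple(center[i] - half_extents[i] for i in range(3)),
            tuple(center[i] + half_extents[i] for i in range(3)))

def boxes_overlap(box_a, box_b):
    """True if the two axis-aligned boxes intersect on every axis."""
    (min_a, max_a), (min_b, max_b) = box_a, box_b
    return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i] for i in range(3))

role_box = make_aabb((1, 1, 1), (0.5, 1, 0.5))  # stereoscopic object around the role
wall_box = make_aabb((1, 1, 1), (0.2, 2, 5))    # a wall-like scene object
print(boxes_overlap(role_box, wall_box))  # True -> the role's position overlaps the wall
```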
Optionally, the determining module 1502 is further configured to determine third position information of the at least one scene object in the first virtual scene; extracting position information of an associated object of the target application role from the first position information of the at least one target application role; and performing position anomaly identification on the associated object according to the position information of the associated object and the third position information of the at least one scene object, and determining the position anomaly of the at least one target application role when the position of the associated object is overlapped with the position of any scene object.
Optionally, the determining module 1502 is further configured to perform location anomaly identification on the at least one target application role based on an identification function of a physical engine component of the first device according to the first location information of the at least one target application role and the first location information of the at least one scene object in the first virtual scene, where a target identification policy is configured in the physical engine component of the first device.
Optionally, the obtaining module 1501 is further configured to receive first location information of the at least one target application role sent by a third device, where the third device is a terminal where a user corresponding to the at least one target application role is located, or a background server of the target application; and receiving a historical behavior record of the at least one target application role sent by a fourth device, and acquiring first position information of the at least one target application role from the historical behavior record, wherein the historical behavior record is used for indicating the historical behavior of the at least one target application role in the second virtual scene, and the fourth device is a background server of the target application.
In the embodiment of the invention, the server can simulate, through the first virtual scene, the second virtual scene displayed by the target application on the first device, and determine the position identification result of at least one target application role according to the first position information of the target application role in the second virtual scene and at least one scene object in the first virtual scene. Since the target application role is identified in the first virtual scene based on the scene objects, whether the position of the target application role is abnormal can be accurately identified, which greatly improves the accuracy of identifying the position abnormality of the application role.
Fig. 16 is a block diagram of an application role position abnormality recognition apparatus according to an embodiment of the present invention. The apparatus may be applied to a first device, which may be a terminal, and referring to fig. 16, the apparatus includes: an acquiring module 1601, a storing module 1602 and a sending module 1603.
An obtaining module 1601, configured to obtain second location information of at least one scene object of a target application based on a physical engine component of the target application, where the second location information is used to indicate a location of the scene object in a second virtual scene displayed on a first device by the target application, and the physical engine component is used to indicate a storage address of the second location information;
a storage module 1602, configured to store the second location information of the at least one scene object into a target resource file according to the target format model;
a sending module 1603, configured to send the target resource file to the second device, where the target resource file is used to instruct to establish a first virtual scene on the second device, and identify a position of a target application role based on the first virtual scene.
Optionally, the obtaining module 1601 is further configured to, when receiving an obtaining instruction of the second device, obtain, according to a storage address indicated by a physical engine component of the target application, scene data of the second virtual scene from the storage address, where the scene data is used to indicate at least one scene object included in the second virtual scene; second position information of the at least one scene object is extracted from the scene data.
Optionally, the obtaining module 1601 is further configured to extract second position information, shape, direction, and material information of the at least one scene object from the scene data if the scene data further includes shape, direction, and material information of the at least one scene object.
Optionally, the obtaining module 1601 is further configured to display a data selection interface when receiving a obtaining instruction; and acquiring target scene data in the second virtual scene from the storage address according to the selected target type in the plurality of object type options in the data selection interface, wherein the target scene data is used for indicating the scene object of the target type in the second virtual scene.
In the embodiment of the present invention, the terminal may obtain, based on the physical engine component of the target application, second location information of at least one scene object of the target application, and send the target resource file to the server based on the target format model. The original scene data of the scene object can be directly acquired based on the physical engine component, so that the accuracy and the efficiency of acquiring the file are improved. In addition, the terminal can also acquire the target resource file based on the target format model, so that the server can acquire the file capable of being identified, the data reading efficiency is improved, and the application role position abnormity identification efficiency is further improved.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that: when the application role position abnormality identification apparatus provided in the above embodiment identifies a role position abnormality, the division of the above functional modules is only used as an example; in practical applications, the above functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the application role position abnormality identification apparatus provided in the above embodiment and the application role position abnormality identification method embodiment belong to the same concept; the specific implementation process is detailed in the method embodiment and is not repeated here.
Fig. 17 is a schematic structural diagram of a terminal according to an embodiment of the present invention. The terminal 1700 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
In general, terminal 1700 includes: a processor 1701, and a memory 1702.
The processor 1701 may include one or more processing cores, for example, a 4-core processor or an 8-core processor. The processor 1701 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1701 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the wake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering the content that the display screen needs to display. In some embodiments, the processor 1701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1702 may include one or more computer-readable storage media, which may be non-transitory. The memory 1702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1702 is used to store at least one instruction for execution by the processor 1701 to implement the application role position anomaly identification method provided by the method embodiments of the present application.
In some embodiments, terminal 1700 may also optionally include: a peripheral interface 1703 and at least one peripheral. The processor 1701, the memory 1702 and the peripheral interface 1703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1703 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1704, touch display screen 1705, camera 1706, audio circuitry 1707, positioning components 1708, and power source 1709.
The peripheral interface 1703 may be used to connect at least one I/O (Input/Output) related peripheral to the processor 1701 and the memory 1702. In some embodiments, the processor 1701, the memory 1702, and the peripheral interface 1703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1701, the memory 1702, and the peripheral interface 1703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1704 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1704 communicates with a communication network and other communication devices via electromagnetic signals: it converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1704 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1705 is a touch display screen, the display screen 1705 also has the ability to capture touch signals on or above its surface. A touch signal may be input to the processor 1701 as a control signal for processing. At this point, the display screen 1705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1705, disposed on the front panel of terminal 1700; in other embodiments, there may be at least two display screens 1705, each disposed on a different surface of terminal 1700 or in a folded design; in still other embodiments, the display screen 1705 may be a flexible display screen disposed on a curved or folded surface of terminal 1700. The display screen 1705 may even be arranged in a non-rectangular irregular shape, that is, an irregularly shaped screen. The display screen 1705 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1706 is used to capture images or video. Optionally, camera assembly 1706 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, a VR (Virtual Reality) shooting function, or other fusion shooting functions. In some embodiments, camera assembly 1706 may also include a flash. The flash may be a monochrome-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 1707 may include a microphone and a speaker. The microphone is used to collect sound waves of the user and the environment, convert the sound waves into electrical signals, and input the electrical signals into the processor 1701 for processing, or into the radio frequency circuit 1704 for voice communication. For stereo sound collection or noise reduction, multiple microphones may be provided at different portions of the terminal 1700. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1701 or the radio frequency circuit 1704 into sound waves. The speaker may be a conventional diaphragm speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can not only convert an electrical signal into sound waves audible to humans, but also convert an electrical signal into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1707 may also include a headphone jack.
The positioning component 1708 is used to locate the current geographic location of the terminal 1700 to implement navigation or LBS (Location Based Service). The positioning component 1708 may be a positioning component based on the GPS (Global Positioning System) of the United States or the BeiDou system of China.
Power supply 1709 is used to supply power to the various components in terminal 1700. The power supply 1709 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When power supply 1709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, terminal 1700 also includes one or more sensors 1710. The one or more sensors 1710 include, but are not limited to: acceleration sensor 1711, gyro sensor 1712, pressure sensor 1713, fingerprint sensor 1714, optical sensor 1715, and proximity sensor 1716.
The acceleration sensor 1711 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1700. For example, the acceleration sensor 1711 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1701 may control the touch display screen 1705 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1711. The acceleration sensor 1711 may also be used for acquisition of motion data of a game or a user.
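The landscape/portrait decision described above can be sketched as follows: the axis that carries most of the gravitational acceleration determines the view mode. This is a minimal illustrative sketch, not code from the patent; the function name and the two-axis comparison rule are assumptions.

```python
# Hypothetical sketch of gravity-based orientation selection.
# gx, gy, gz are the gravitational-acceleration components (m/s^2) on the
# three coordinate axes of the coordinate system established with the terminal.

def choose_view(gx: float, gy: float, gz: float) -> str:
    """Pick 'portrait' or 'landscape' from gravity components."""
    if abs(gy) >= abs(gx):
        return "portrait"   # gravity mostly along the device's long axis
    return "landscape"      # gravity mostly along the short axis

print(choose_view(0.3, 9.7, 0.5))   # device held upright -> portrait
print(choose_view(9.6, 0.4, 0.8))   # device rotated on its side -> landscape
```

A real implementation would additionally low-pass filter the sensor samples and apply hysteresis so the UI does not flip at the 45-degree boundary.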
The gyro sensor 1712 may detect a body direction and a rotation angle of the terminal 1700, and the gyro sensor 1712 may cooperate with the acceleration sensor 1711 to acquire a 3D motion of the user on the terminal 1700. The processor 1701 may perform the following functions based on the data collected by the gyro sensor 1712: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1713 may be disposed on the side frame of terminal 1700 and/or at the lower layer of touch display 1705. When the pressure sensor 1713 is disposed on the side frame of the terminal 1700, a grip signal of the user on the terminal 1700 can be detected, and the processor 1701 performs left/right hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1713. When the pressure sensor 1713 is disposed at the lower layer of the touch display screen 1705, the processor 1701 controls an operable control on the UI according to the pressure operation of the user on the touch display screen 1705. The operable control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1714 is configured to collect a fingerprint of the user, and the processor 1701 identifies the identity of the user based on the fingerprint collected by the fingerprint sensor 1714. Upon identifying that the user's identity is a trusted identity, the processor 1701 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1714 may be disposed on the front, back, or side of terminal 1700. When a physical button or vendor logo is provided on terminal 1700, the fingerprint sensor 1714 may be integrated with the physical button or vendor logo.
The optical sensor 1715 is used to collect the ambient light intensity. In one embodiment, the processor 1701 may control the display brightness of the touch display screen 1705 based on the ambient light intensity collected by the optical sensor 1715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1705 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1705 is turned down. In another embodiment, the processor 1701 may also dynamically adjust the shooting parameters of the camera assembly 1706 according to the ambient light intensity collected by the optical sensor 1715.
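The brightness adjustment described above can be sketched as a simple mapping from ambient light intensity to display brightness. This is an illustrative sketch only; the function name, the lux thresholds, and the linear interpolation are assumptions, not values from the patent.

```python
# Hypothetical mapping from ambient light intensity (lux) to a normalized
# display brightness in [0.0, 1.0]: brighter surroundings -> brighter screen.

def adjust_brightness(ambient_lux: float, lo: float = 10.0, hi: float = 1000.0) -> float:
    """Map ambient light intensity to a display brightness level."""
    if ambient_lux <= lo:
        return 0.2          # dim floor in dark environments
    if ambient_lux >= hi:
        return 1.0          # full brightness in bright light
    # linear interpolation between the floor and full brightness
    return 0.2 + 0.8 * (ambient_lux - lo) / (hi - lo)
```

In practice the same reading can also drive the camera's exposure parameters, as the embodiment above notes.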
Proximity sensor 1716, also known as a distance sensor, is typically disposed on the front panel of terminal 1700. Proximity sensor 1716 is used to collect the distance between the user and the front face of terminal 1700. In one embodiment, when proximity sensor 1716 detects that the distance between the user and the front face of terminal 1700 gradually decreases, processor 1701 controls touch display 1705 to switch from the screen-on state to the screen-off state; when proximity sensor 1716 detects that the distance between the user and the front face of terminal 1700 gradually increases, processor 1701 controls touch display 1705 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the architecture shown in fig. 17 is not intended to be limiting with respect to terminal 1700, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
Fig. 18 is a schematic structural diagram of a server according to an embodiment of the present invention. The server 1800 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 1801 and one or more memories 1802, where the memory 1802 stores at least one instruction, and the at least one instruction is loaded and executed by the processor 1801 to implement the application role position abnormity identification method provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and may further include other components for implementing the functions of the device, which are not described herein again.
In an exemplary embodiment, a computer-readable storage medium, such as a memory including instructions, is also provided, where the instructions are executable by a processor in a terminal or a server to perform the application role position abnormity identification method in the above embodiments. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is merely a preferred embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.
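The core check the embodiments describe — rebuilding the displayed scene from stored scene-object positions and flagging a role whose sampled position overlaps the space occupied by a scene object — can be sketched as follows. This is a minimal illustrative sketch under simplifying assumptions: scene objects are modeled as axis-aligned boxes, and all names are hypothetical; the patent's embodiments instead rely on a physics engine component's own overlap/recognition functions and also consider shape, direction, and material information.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]

@dataclass
class SceneObject:
    # Second position information plus shape, reduced here to an
    # axis-aligned bounding box in the simulated first virtual scene.
    min_corner: Point
    max_corner: Point

def overlaps(point: Point, obj: SceneObject) -> bool:
    """True when a sampled role position lies inside the object's volume."""
    return all(obj.min_corner[i] <= point[i] <= obj.max_corner[i] for i in range(3))

def identify_position_anomaly(track: List[Point], scene: List[SceneObject]) -> bool:
    """Flag the role as abnormal if any position sampled over a continuous
    time period overlaps the space occupied by any scene object."""
    return any(overlaps(p, obj) for p in track for obj in scene)

wall = SceneObject((0, 0, 0), (1, 3, 3))
# Second sample is inside the wall volume -> position abnormal (True).
print(identify_position_anomaly([(5, 1, 1), (0.5, 1, 1)], [wall]))
```

In a production system the scene would be restored into a physics engine's virtual space and the engine's collision/overlap queries would replace the box test, which is exactly why the accuracy of the identification depends on the fidelity of the reconstructed scene objects.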

Claims (20)

1. An application role position abnormity identification method is characterized by comprising the following steps:
acquiring a target resource file from first equipment for installing a target application according to a storage address of second position information of at least one scene object indicated by a physical engine component of the target application, wherein the second position information of the at least one scene object is stored in the target resource file and is stored according to a target format model;
according to a target format model, second position information of the at least one scene object is analyzed from the target resource file, the object type of the at least one scene object conforms to a target type, the target type is selected from a plurality of object type options in a data selection interface displayed by a terminal by a user, the plurality of object type options are used for indicating a plurality of object types, and the plurality of object types comprise static objects and dynamic objects;
according to the second position information of the at least one scene object, creating a first virtual scene of the target application, wherein the first virtual scene is used for simulating a second virtual scene displayed on first equipment by the target application;
acquiring first position information of at least one target application role in the second virtual scene, wherein the first position information is obtained by acquiring the position information of the target application role in a continuous time period;
acquiring third position information of at least one scene object in the target scene range of the target application role in the first virtual scene according to the first position information of the target application role;
and performing position abnormity identification on the target application role according to the first position information of the at least one target application role and the third position information, shape, direction, and material information of the at least one scene object in the first virtual scene, and determining that the position of the target application role is abnormal when the position of the target application role overlaps with the space occupied by the scene object.
2. The method of claim 1, wherein the target resource file further comprises at least one of shape, orientation, or material information of the at least one scene object.
3. The method of claim 1, wherein the creating the first virtual scene of the target application according to the second position information of the at least one scene object in the second virtual scene comprises:
creating a target virtual space in a physics engine component of a second device;
and adding the at least one scene object into the target virtual space according to the second position information of the at least one scene object to obtain a first virtual scene of the target application.
4. The method of claim 1, further comprising:
performing position anomaly identification on the at least one target application role based on a target identification strategy according to the first position information of the at least one target application role and the third position information of the at least one scene object;
when the position of the at least one target application role is overlapped with the position of any scene object, determining that the position of the at least one target application role is abnormal.
5. The method according to claim 4, wherein the performing location anomaly identification on the at least one target application role based on a target identification policy according to the first location information of the at least one target application role and the location information of the at least one scene object in the first virtual scene comprises:
according to the first position information of the at least one target application role and the first position information of the at least one scene object in the first virtual scene, performing position anomaly identification on the at least one target application role based on an identification function of a physical engine component of the first device, wherein a target identification strategy is configured in the physical engine component of the first device.
6. The method according to claim 1, wherein the obtaining of the first position information of the at least one target application role in the second virtual scene comprises at least one of:
receiving first position information of the at least one target application role sent by third equipment, wherein the third equipment is a terminal where a user corresponding to the at least one target application role is located or a background server of the target application;
and receiving a historical behavior record of the at least one target application role sent by a fourth device, and acquiring first position information of the at least one target application role from the historical behavior record, wherein the historical behavior record is used for indicating the historical behavior of the at least one target application role in the second virtual scene, and the fourth device is a background server of the target application.
7. An application role position abnormity identification method is characterized by comprising the following steps:
acquiring second position information, shape, direction and material information of at least one scene object of a target application based on a physical engine component of the target application, wherein the second position information is used for indicating the position of the scene object in a second virtual scene displayed by a first device of the target application, and the physical engine component is used for indicating the storage address of the second position information;
storing second position information of the at least one scene object into a target resource file according to a target format model, wherein the object type of the at least one scene object conforms to a target type, the target type is selected from a plurality of object type options in a data selection interface displayed by a terminal by a user, the plurality of object type options are used for indicating a plurality of object types, and the plurality of object types comprise static objects and dynamic objects;
sending the target resource file to a second device, wherein the target resource file is used for indicating that a first virtual scene is established on the second device, and identifying the position of a target application role based on the first virtual scene, the first virtual scene is created according to second position information of the at least one scene object, and the first virtual scene is used for simulating a second virtual scene displayed on the first device by the target application; the identifying the position of the target application role comprises: acquiring first position information of at least one target application role in the second virtual scene, wherein the first position information is obtained by acquiring the position information of the target application role in a continuous time period; and performing position abnormity identification on the target application role according to the first position information of the at least one target application role and the third position information, shape, direction, and material information of at least one scene object in the first virtual scene, and determining that the position of the target application role is abnormal when the position of the target application role overlaps with the space occupied by the scene object.
8. The method of claim 7, wherein the obtaining second position information, shape, orientation and material information of at least one scene object of the target application based on the physics engine component of the target application comprises:
when an acquisition instruction of a second device is received, acquiring scene data of the second virtual scene from a storage address indicated by a physical engine component of the target application, wherein the scene data is used for indicating at least one scene object included in the second virtual scene;
second position information, shape, direction, and material information of the at least one scene object are extracted from the scene data.
9. The method of claim 8, wherein the retrieving, when the retrieving instruction is received, scene data of the second virtual scene from a storage address indicated by a physical engine component of the target application comprises:
when an acquisition instruction is received, displaying a data selection interface;
and acquiring target scene data in the second virtual scene from the storage address according to the selected target type in the plurality of object type options in the data selection interface, wherein the target scene data is used for indicating the scene object of the target type in the second virtual scene.
10. An apparatus for recognizing abnormality of application role position, the apparatus comprising:
the acquisition module is used for acquiring a target resource file from first equipment for installing the target application according to a storage address of second position information of at least one scene object indicated by a physical engine component of the target application, wherein the second position information of the at least one scene object is stored in the target resource file, and the second position information is stored according to a target format model;
according to a target format model, second position information of the at least one scene object is analyzed from the target resource file, the object type of the at least one scene object conforms to a target type, the target type is selected from a plurality of object type options in a data selection interface displayed by a terminal by a user, the plurality of object type options are used for indicating a plurality of object types, and the plurality of object types comprise static objects and dynamic objects;
creating a first virtual scene of the target application according to the second position information of the at least one scene object, wherein the first virtual scene is used for simulating a second virtual scene displayed on the first device by the target application;
the obtaining module is further configured to obtain first location information of at least one target application role in the second virtual scene, where the first location information is obtained by collecting location information of the target application role in a continuous time period;
acquiring third position information of at least one scene object in the target scene range of the target application role in the first virtual scene according to the first position information of the target application role;
and the determining module is used for performing position abnormity identification on the target application role according to the first position information of the at least one target application role and the third position information, shape, direction, and material information of the at least one scene object in the first virtual scene, and determining that the position of the target application role is abnormal when the position of the target application role overlaps with the space occupied by the scene object.
11. The apparatus of claim 10, wherein the target resource file further comprises at least one of shape, orientation, or material information of the at least one scene object.
12. The apparatus of claim 10, wherein the obtaining module is further configured to:
creating a target virtual space in a physics engine component of a second device;
and adding the at least one scene object into the target virtual space according to the second position information of the at least one scene object to obtain a first virtual scene of the target application.
13. The apparatus of claim 10, wherein the determining module is configured to:
performing position anomaly identification on the at least one target application role based on a target identification strategy according to the first position information of the at least one target application role and the third position information of the at least one scene object;
when the position of the at least one target application role overlaps with the position of any one scene object, determining that the position of the at least one target application role is abnormal.
14. The apparatus of claim 13, wherein the determining module is further configured to:
performing position anomaly identification on the at least one target application role based on an identification function of a physical engine component of the first device according to the first position information of the at least one target application role and the first position information of the at least one scene object in the first virtual scene, wherein a target identification strategy is configured in the physical engine component of the first device.
15. The apparatus of claim 10, wherein the obtaining module is further configured to:
receiving first position information of the at least one target application role sent by third equipment, wherein the third equipment is a terminal where a user corresponding to the at least one target application role is located or a background server of the target application;
and receiving a historical behavior record of the at least one target application role sent by a fourth device, and acquiring first position information of the at least one target application role from the historical behavior record, wherein the historical behavior record is used for indicating the historical behavior of the at least one target application role in the second virtual scene, and the fourth device is a background server of the target application.
16. An apparatus for recognizing abnormality of application role position, the apparatus comprising:
the acquisition module is used for acquiring second position information, shape, direction and material information of at least one scene object of a target application based on a physical engine component of the target application, wherein the second position information is used for indicating the position of the scene object in a second virtual scene displayed by first equipment of the target application, and the physical engine component is used for indicating the storage address of the second position information;
a storage module, configured to store the second location information of the at least one scene object into a target resource file according to a target format model, where an object type of the at least one scene object conforms to a target type, the target type is selected by a user from multiple object type options in a data selection interface displayed by a terminal, the multiple object type options are used to indicate multiple object types, and the multiple object types include a static object and a dynamic object;
a sending module, configured to send the target resource file to a second device, where the target resource file is used to indicate that a first virtual scene is established on the second device, and identify a position of a target application role based on the first virtual scene, where the first virtual scene is created according to second position information of the at least one scene object, and the first virtual scene is used to simulate a second virtual scene displayed by the target application on the first device; the identifying the position of the target application role comprises: acquiring first position information of at least one target application role in the second virtual scene, wherein the first position information is obtained by acquiring the position information of the target application role in a continuous time period; and performing position abnormity identification on the target application role according to the first position information of the at least one target application role and the third position information, shape, direction, and material information of at least one scene object in the first virtual scene, and determining that the position of the target application role is abnormal when the position of the target application role overlaps with the space occupied by the scene object.
17. The apparatus of claim 16, wherein the obtaining module is further configured to:
when an acquisition instruction of a second device is received, acquiring scene data of the second virtual scene from a storage address indicated by a physical engine component of the target application, wherein the scene data is used for indicating at least one scene object included in the second virtual scene;
second position information, shape, orientation and material information of the at least one scene object are extracted from the scene data.
18. The apparatus of claim 17, wherein the obtaining module is further configured to:
when an acquisition instruction is received, displaying a data selection interface;
and acquiring target scene data in the second virtual scene from the storage address according to the selected target type in the plurality of object type options in the data selection interface, wherein the target scene data is used for indicating the scene object of the target type in the second virtual scene.
19. An electronic device, comprising one or more processors and one or more memories having stored therein at least one instruction that is loaded and executed by the one or more processors to perform operations performed by the application role position anomaly recognition method of any one of claims 1 to 9.
20. A computer-readable storage medium having stored therein at least one instruction which is loaded and executed by a processor to perform operations performed by the application role position anomaly identification method according to any one of claims 1 to 9.
CN201910199228.0A 2019-03-15 2019-03-15 Application role position abnormity identification method and device, electronic equipment and storage medium Active CN109939442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910199228.0A CN109939442B (en) 2019-03-15 2019-03-15 Application role position abnormity identification method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109939442A (en) 2019-06-28
CN109939442B (en) 2022-09-09

Family

ID=67010051

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910199228.0A Active CN109939442B (en) 2019-03-15 2019-03-15 Application role position abnormity identification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109939442B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111680646B (en) * 2020-06-11 2023-09-22 北京市商汤科技开发有限公司 Action detection method and device, electronic equipment and storage medium
CN112717404B (en) * 2021-01-25 2022-11-29 腾讯科技(深圳)有限公司 Virtual object movement processing method and device, electronic equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4385863B2 (en) * 2004-06-23 2009-12-16 株式会社セガ Online game fraud detection method
US9056248B2 (en) * 2008-12-02 2015-06-16 International Business Machines Corporation System and method for detecting inappropriate content in virtual worlds
US8948501B1 (en) * 2009-12-22 2015-02-03 Hrl Laboratories, Llc Three-dimensional (3D) object detection and multi-agent behavior recognition using 3D motion data
CN101788909B (en) * 2010-01-28 2012-12-05 北京天空堂科技有限公司 Solving method and device of network game server end walking system
CN104932872A (en) * 2014-03-18 2015-09-23 腾讯科技(深圳)有限公司 Message processing method and server
CN106955493A (en) * 2017-03-30 2017-07-18 北京乐动卓越科技有限公司 The method of calibration that role moves in a kind of 3D online games
CN108629180B (en) * 2018-03-29 2020-12-11 腾讯科技(深圳)有限公司 Abnormal operation determination method and device, storage medium and electronic device

Also Published As

Publication number Publication date
CN109939442A (en) 2019-06-28

Similar Documents

Publication Publication Date Title
CN110917616B (en) Orientation prompting method, device, equipment and storage medium in virtual scene
CN109634413B (en) Method, device and storage medium for observing virtual environment
CN110694273A (en) Method, device, terminal and storage medium for controlling virtual object to use prop
CN108694073B (en) Control method, device and equipment of virtual scene and storage medium
CN110102052B (en) Virtual resource delivery method and device, electronic device and storage medium
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN113041620B (en) Method, device, equipment and storage medium for displaying position mark
CN112156464A (en) Two-dimensional image display method, device and equipment of virtual object and storage medium
CN111273780B (en) Animation playing method, device and equipment based on virtual environment and storage medium
CN111026318A (en) Animation playing method, device and equipment based on virtual environment and storage medium
CN113577765A (en) User interface display method, device, equipment and storage medium
CN111325822B (en) Method, device and equipment for displaying hot spot diagram and readable storage medium
CN113398572A (en) Virtual item switching method, skill switching method and virtual object switching method
CN109939442B (en) Application role position abnormity identification method and device, electronic equipment and storage medium
CN111760281A (en) Method and device for playing cut-scene animation, computer equipment and storage medium
CN111589141A (en) Virtual environment picture display method, device, equipment and medium
CN112306332B (en) Method, device and equipment for determining selected target and storage medium
CN111035929B (en) Elimination information feedback method, device, equipment and medium based on virtual environment
CN113181647A (en) Information display method, device, terminal and storage medium
CN112950753A (en) Virtual plant display method, device, equipment and storage medium
CN113194329B (en) Live interaction method, device, terminal and storage medium
CN112717381B (en) Virtual scene display method and device, computer equipment and storage medium
CN113018865A (en) Climbing line generation method and device, computer equipment and storage medium
CN113559494A (en) Virtual item display method, device, terminal and storage medium
CN112717393A (en) Virtual object display method, device, equipment and storage medium in virtual scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant