CN109939442A - Application role position abnormality recognition method and apparatus, electronic device, and storage medium - Google Patents


Info

Publication number
CN109939442A
CN109939442A (application CN201910199228.0A; granted as CN109939442B)
Authority
CN
China
Prior art keywords
target application
virtual scene
scene objects
position information
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910199228.0A
Other languages
Chinese (zh)
Other versions
CN109939442B (en)
Inventor
吴凯
殷赵辉
彭青白
何小龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tencent Information Technology Co Ltd
Original Assignee
Shenzhen Tencent Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tencent Information Technology Co Ltd filed Critical Shenzhen Tencent Information Technology Co Ltd
Priority to CN201910199228.0A (granted as CN109939442B)
Publication of CN109939442A
Application granted
Publication of CN109939442B
Legal status: Active


Abstract

The invention discloses an application role position abnormality recognition method and apparatus, an electronic device, and a storage medium, belonging to the field of Internet technology. The invention simulates, by means of a first virtual scene, the second virtual scene that a target application displays on a first device, and determines a position recognition result for at least one target application role according to first position information of the target application role in the second virtual scene and at least one scene object in the first virtual scene. Because position recognition of the target application role is performed in the first virtual scene on the basis of each scene object, whether the position of the target application role is abnormal can be accurately identified with reference to the scene objects, greatly improving the accuracy of application role position abnormality recognition.

Description

Application role position abnormality recognition method and apparatus, electronic device, and storage medium
Technical field
The present invention relates to the field of Internet technology, and in particular to an application role position abnormality recognition method and apparatus, an electronic device, and a storage medium.
Background technique
In some game applications, an application role is generally used to represent a user's activity in the virtual scene of the game application. For example, a user may control the application role to run or jump in the virtual scene. The application role must act within the range permitted by the game rules of the virtual scene. However, some malicious users resort to cheating so that the range of movement of the application role exceeds the range permitted by the game rules, for example by escaping underground, passing through walls, or hovering in mid-air. This seriously damages the reputation of the game application. In this field, it is therefore usually necessary to perform recognition on application roles in order to prevent cheating.
In the related art, the application role position recognition process may be as follows: a server obtains a coordinate threshold for an application role, the coordinate threshold indicating the target range of positions the application role can reach. For example, in a two-player battle scene, a coordinate threshold for a given application role may include the maximum and minimum height coordinates of the application role. The server judges, according to the position of the application role and the coordinate threshold, whether the application role is located within the target range. If the application role is within the target range, the position of the application role is correct, that is, it complies with the game rules, the user has not cheated, and the user is a non-malicious user; otherwise, the position of the application role is abnormal and the user is a malicious user.
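The related-art threshold check described above can be sketched as follows. This is an illustrative sketch only; the helper name `within_threshold` and the choice of the z axis as height are assumptions, not anything specified in the patent:

```python
# Hypothetical sketch of the related-art check: the role's position is
# judged only against a coordinate threshold (here, a height range).
def within_threshold(position, min_height, max_height):
    # position is an (x, y, z) coordinate; z is taken as height.
    x, y, z = position
    return min_height <= z <= max_height

print(within_threshold((3.0, 7.0, 1.5), 0.0, 50.0))   # inside range: complies
print(within_threshold((3.0, 7.0, -2.0), 0.0, 50.0))  # below ground: abnormal
```

As the description goes on to note, such a check cannot tell whether the role is inside a wall, since it knows nothing about the scene's objects.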
The above process in fact implements position recognition by means of a coordinate threshold alone. However, a virtual scene often further includes buildings, trees, hills, and the like, and may also contain scene areas with large terrain variation. If there is a house in front of the application role, the above recognition method cannot identify whether the application role is located inside a wall of the house, so the accuracy of the above application role position abnormality recognition is low.
Summary of the invention
Embodiments of the present invention provide an application role position abnormality recognition method and apparatus, an electronic device, and a storage medium, which can solve the problem in the related art that the accuracy of application role position abnormality recognition is low. The technical solution is as follows:
In one aspect, an application role position abnormality recognition method is provided, the method comprising:
obtaining a first virtual scene of a target application, the first virtual scene being used to simulate a second virtual scene displayed by the target application on a first device;
obtaining first position information of at least one target application role in the second virtual scene;
determining a position recognition result of the at least one target application role according to the first position information of the at least one target application role and at least one scene object in the first virtual scene, the position recognition result being used to indicate whether the position of the at least one target application role is abnormal.
In one possible implementation, adding the at least one scene object to the target virtual space according to second position information of the at least one scene object, to obtain the first virtual scene of the target application, comprises:
connecting, in the target virtual space, the multiple vertices corresponding to the at least one scene object in the connection order indicated by the serial number corresponding to each vertex, to obtain the at least one scene object, the multiple vertices being used to indicate the position, shape, and orientation of the scene object in the first virtual scene;
setting material information of the at least one scene object in the physics engine component.
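The vertex-plus-serial-number representation in this implementation can be sketched as follows. This is a minimal illustration under stated assumptions: the names `SceneObject` and `build_scene_object` are hypothetical, and the assumption is that each serial number gives a vertex's rank in the connection order:

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    vertices: list   # (x, y, z) tuples giving position, shape, and orientation
    edges: list      # pairs of vertex indices, in connection order
    material: dict   # e.g. friction and elasticity coefficients

def build_scene_object(vertices, serial_numbers, material):
    # Connect consecutive vertices in the order indicated by each
    # vertex's serial number (assumption: lower serial = earlier).
    order = sorted(range(len(vertices)), key=lambda i: serial_numbers[i])
    edges = [(order[i], order[i + 1]) for i in range(len(order) - 1)]
    return SceneObject(vertices=vertices, edges=edges, material=material)

obj = build_scene_object(
    vertices=[(0, 0, 0), (1, 0, 0), (1, 1, 0)],
    serial_numbers=[1, 2, 3],
    material={"friction": 0.6, "elasticity": 0.1},
)
# obj.edges connects vertex 0 to 1 and vertex 1 to 2
```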
In one possible implementation, adding the at least one scene object to the target virtual space according to the second position information of the at least one scene object, to obtain the first virtual scene of the target application, comprises:
when the first virtual scene and the second virtual scene differ in size, scaling the second position information of the at least one scene object according to the zoom factor of the first virtual scene relative to the second virtual scene;
adding the at least one scene object to the target virtual space based on the scaled second position information, to obtain the first virtual scene.
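The scaling step above amounts to multiplying each coordinate by the zoom factor of the first scene relative to the second. A minimal sketch, assuming uniform scaling about the origin (the helper name `scale_position` is hypothetical):

```python
def scale_position(position, zoom_factor):
    # Scale second position information by the first virtual scene's
    # zoom factor relative to the second virtual scene.
    x, y, z = position
    return (x * zoom_factor, y * zoom_factor, z * zoom_factor)

# A scene object at (100, 20, -40) in the second scene lands at
# half-scale coordinates when the first scene is half the size.
scaled = scale_position((100.0, 20.0, -40.0), 0.5)
```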
In one possible implementation, performing position abnormality recognition on the at least one target application role according to the first position information of the at least one target application role and third position information of the at least one scene object, based on a target recognition strategy, comprises:
determining a starting point coordinate and a ray vector of the target application role from multiple pieces of first position information;
identifying, according to the starting point coordinate, the ray vector, and the third position information of at least one scene object around the target application role, whether the at least one target application role collides with the at least one surrounding scene object;
when the at least one target application role collides with the at least one surrounding scene object, determining that the position of the at least one target application role overlaps the position of one of the scene objects.
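The patent does not specify how the ray-versus-scene-object collision is computed; one standard way to implement it, shown here purely as an assumed sketch, is the slab method against an axis-aligned bounding box of each scene object:

```python
def ray_hits_box(start, direction, box_min, box_max, max_t=1.0):
    # Slab test: does the segment start + t*direction (0 <= t <= max_t)
    # enter the axis-aligned box [box_min, box_max] of a scene object?
    t_near, t_far = 0.0, max_t
    for axis in range(3):
        d = direction[axis]
        if abs(d) < 1e-9:
            # Ray parallel to this slab: must already lie inside it.
            if not (box_min[axis] <= start[axis] <= box_max[axis]):
                return False
            continue
        t1 = (box_min[axis] - start[axis]) / d
        t2 = (box_max[axis] - start[axis]) / d
        t1, t2 = min(t1, t2), max(t1, t2)
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:
            return False
    return True

# Role moved from the origin toward (4, 0, 0); a wall occupies x in [1, 2].
hit = ray_hits_box((0, 0, 0), (4, 0, 0), (1, -1, -1), (2, 1, 1))
```

A hit along the role's movement ray would indicate that the role's path passes through a scene object, i.e. a candidate position abnormality.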
In one possible implementation, performing position abnormality recognition on the at least one target application role according to the first position information of the at least one target application role and the third position information of the at least one scene object, based on a target recognition strategy, comprises:
generating, according to the first position information of the at least one target application role, a three-dimensional solid object corresponding to the at least one target application role in the first virtual scene;
identifying, according to the third position information of the three-dimensional solid object and the at least one scene object, whether the at least one scene object overlaps the three-dimensional solid object;
when the at least one scene object overlaps the three-dimensional solid object, determining that the position of the at least one target application role overlaps the position of one of the scene objects.
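The overlap test in this implementation can be illustrated by assuming, for simplicity, that both the role's three-dimensional solid and each scene object are reduced to axis-aligned bounding boxes (the patent does not prescribe a shape; this is a sketch only):

```python
def boxes_overlap(a_min, a_max, b_min, b_max):
    # Two axis-aligned boxes overlap iff their intervals overlap
    # on every one of the three axes.
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i]
               for i in range(3))

# Role's solid spans x in [1, 2]; a wall also occupies x in [1, 2]:
# the role's position overlaps the scene object, i.e. is abnormal.
role_min, role_max = (1.0, -0.5, -0.5), (2.0, 0.5, 0.5)
overlap = boxes_overlap(role_min, role_max, (1, -1, -1), (2, 1, 1))
```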
In one possible implementation, determining the position recognition result of the at least one target application role according to the first position information of the at least one target application role and the at least one scene object in the first virtual scene comprises:
determining third position information of the at least one scene object in the first virtual scene;
extracting, from the first position information of the at least one target application role, position information of an associated object of the target application role;
performing position abnormality recognition on the associated object according to the position information of the associated object and the third position information of the at least one scene object, and determining that the position of the at least one target application role is abnormal when the position of the associated object overlaps the position of any one of the scene objects.
In another aspect, an application role position abnormality recognition method is provided, the method comprising:
obtaining, based on a physics engine component of a target application, second position information of at least one scene object of the target application, the second position information being used to indicate the position of the scene object in a second virtual scene displayed by the target application on a first device, the physics engine component being used to indicate the storage address of the second position information;
storing the second position information of the at least one scene object into a target resource file according to a target format model;
sending the target resource file to a second device, the target resource file being used to instruct the second device to establish a first virtual scene and perform position recognition on a target application role based on the first virtual scene.
In another aspect, an application role position abnormality recognition apparatus is provided, the apparatus comprising:
an obtaining module, configured to obtain a first virtual scene of a target application, the first virtual scene being used to simulate a second virtual scene displayed by the target application on a first device;
the obtaining module being further configured to obtain first position information of at least one target application role in the second virtual scene;
a determining module, configured to determine a position recognition result of the at least one target application role according to the first position information of the at least one target application role and at least one scene object in the first virtual scene, the position recognition result being used to indicate whether the position of the at least one target application role is abnormal.
In another aspect, an application role position abnormality recognition apparatus is provided, the apparatus comprising:
an obtaining module, configured to obtain, based on a physics engine component of a target application, second position information of at least one scene object of the target application, the second position information being used to indicate the position of the scene object in a second virtual scene displayed by the target application on a first device, the physics engine component being used to indicate the storage address of the second position information;
a storage module, configured to store the second position information of the at least one scene object into a target resource file according to a target format model;
a sending module, configured to send the target resource file to a second device, the target resource file being used to instruct the second device to establish a first virtual scene and perform position recognition on a target application role based on the first virtual scene.
In another aspect, an electronic device is provided, the electronic device comprising one or more processors and one or more memories, the one or more memories storing at least one instruction, the at least one instruction being loaded and executed by the one or more processors to implement the operations performed by the above application role position abnormality recognition method.
In another aspect, a computer-readable storage medium is provided, the storage medium storing at least one instruction, the at least one instruction being loaded and executed by a processor to implement the operations performed by the above application role position abnormality recognition method.
The beneficial effects brought by the technical solutions provided in the embodiments of the present invention may at least include the following:
The second virtual scene displayed by the target application on the first device is simulated by means of the first virtual scene, and the position recognition result of the at least one target application role is determined according to the first position information of the target application role in the second virtual scene and the at least one scene object in the first virtual scene. Because the target application role is recognized in the first virtual scene on the basis of each scene object, whether the position of the target application role is abnormal can be accurately identified with reference to the scene objects, greatly improving the accuracy of application role position abnormality recognition.
Detailed description of the invention
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description are only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of an application role position abnormality recognition method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of an application role position abnormality recognition method provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of a virtual scene provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of a virtual scene provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of a virtual scene provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of a virtual scene provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of a notification message display interface provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of a notification message display interface provided by an embodiment of the present invention;
Fig. 9 is an architecture diagram of application role position abnormality recognition provided by an embodiment of the present invention;
Fig. 10 is a flowchart of application role position abnormality recognition provided by an embodiment of the present invention;
Fig. 11 is a flowchart of an application role position abnormality recognition method provided by an embodiment of the present invention;
Fig. 12 is a schematic diagram of a data selection interface provided by an embodiment of the present invention;
Fig. 13 is a schematic diagram of a scene object provided by an embodiment of the present invention;
Fig. 14 is a schematic diagram of a scene object provided by an embodiment of the present invention;
Fig. 15 is a schematic structural diagram of an application role position abnormality recognition apparatus provided by an embodiment of the present invention;
Fig. 16 is a schematic structural diagram of an application role position abnormality recognition apparatus provided by an embodiment of the present invention;
Fig. 17 is a schematic structural diagram of a terminal provided by an embodiment of the present invention;
Fig. 18 is a schematic structural diagram of a server provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an implementation environment of an application role position abnormality recognition method provided by an embodiment of the present invention. Referring to Fig. 1, the implementation environment includes: a server 101 and a terminal 102. A recognition application is installed on the server 101, and a target application is installed on the terminal 102; the server 101 can perform data interaction with the terminal 102 based on the recognition application.
The target application contains a virtual scene, and the virtual scene contains a target application role and at least one scene object. The target application role is used to represent the avatar of the user in the virtual scene, or to represent the image of a virtual object associated with the user in the virtual scene, for example, a prop or virtual pet owned by the user in the virtual scene, or a vehicle the user rides. The user can control the target application role to perform a series of behaviors in the virtual scene, such as running and jumping. The recognition application is used to perform recognition on the position of the target application role, so as to determine whether the position of the target application role is abnormal. The scene objects are used to indicate environmental objects in the virtual environment simulated by the virtual scene; for example, a scene object may be a tree, a house, a hill, and the like. The target application may be a game application; in the virtual scene, the position of the target application role must comply with the game rules. For example, when the position of the target application role is below ground level, the position is abnormal.
In the embodiment of the present invention, the server 101 can perform recognition on the target application role based on the position information of the target application role and the at least one scene object, so as to judge whether the position of the target application role is abnormal. The server 101 can obtain the position information of the at least one scene object from the terminal 102. A data collection application may be installed on the terminal 102; through the data collection application, the terminal 102 can obtain the position information of the at least one scene object based on the physics engine component of the target application, and send the position information of the at least one scene object to the server 101. The server 101 can create the virtual scene of the target application in the server 101 based on the position information of the at least one scene object, and perform position abnormality recognition on the target application role in that virtual scene.
The implementation environment may further include a target device, which may be a server or a terminal, and which is used to provide the position information of the target application role. The position information is used to indicate the position of the target application role in the virtual scene, and may be the position coordinates of the target application role in the virtual scene. In one possible implementation scenario, the target device may be a terminal: while the terminal runs the target application, the user controls the target application role to play in the virtual scene, and the server 101 obtains the position information of the target application role from the terminal. In another possible implementation scenario, the target device may be the background server of the target application, and the server 101 obtains the position information of the target application role from the background server. The server 101 may obtain the historical behavior records of the target application role from the background server and extract the position information of the target application role from the historical behavior records, the historical behavior records being used to indicate the historical behavior of the target application role in the virtual scene. Alternatively, the background server obtains the position information of the target application role from the terminal in real time and forwards the position information of the target application role to the server 101.
It should be noted that the virtual scene can be used to simulate a virtual space, which may be an open space. The virtual scene can be used to simulate a real environment; for example, the virtual scene may include sky, land, and ocean, and the land may include environmental elements such as desert, hills, forests, houses, and stones. The target application role may take any specific form, for example a person or an animal, and the present invention does not limit this. The user can control the target application role to move in the virtual scene. Taking a shooting game as an example, the user can control the target application role to free-fall, glide, or open a parachute and descend in the sky of the virtual scene; to run, jump, crawl, or bend forward on land; and can also control the character object to swim, float, or dive in the ocean. Of course, the user can also control the character object to ride a vehicle and move in the virtual scene. The above scenarios are used here only for illustration, and the embodiment of the present invention does not specifically limit this.
It should be noted that the recognition application may be an independent application program, or a plug-in installed in an independent application program. The server 101 may be a server cluster or a single device. The terminal 102 may be any device on which a game application is installed, such as a mobile phone terminal, a PAD (Portable Android Device, tablet computer) terminal, or a computer terminal. The embodiment of the present invention does not specifically limit this.
Fig. 2 is a flowchart of an application role position abnormality recognition method provided by an embodiment of the present invention. The method can be applied to a second device, which may be a server. Referring to Fig. 2, the method comprises:
201. The server obtains, based on the physics engine component of the target application, the second position information of at least one scene object in the second virtual scene.
The second virtual scene is the virtual scene displayed by the target application on the first device. The physics engine component refers to the component of the physics engine layer of the target application, and is used to indicate the storage address of the second position information of the at least one scene object. The second position information refers to the position of the at least one scene object in the second virtual scene. In this step, the server can obtain the second position information of the at least one scene object from the storage space corresponding to the storage address indicated by the physics engine component of the target application.
The server may obtain a target resource file in a target format model, the target format model being a format the server can recognize; the second position information of the at least one scene object is stored in the target resource file. Of course, the server may obtain the target resource file from the first device on which the target application is installed. This step may then be: the server obtains the target resource file from the first device on which the target application is installed, according to the storage address indicated by the physics engine component of the target application; the server parses the second position information of the at least one scene object from the target resource file according to the target format model. The storage address is the address at which the second position information of the at least one scene object is stored in the first device.
It should be noted that the server may send an obtaining instruction to the first device; the first device obtains the target resource file based on the obtaining instruction and sends the target resource file to the server, the obtaining instruction being used to instruct obtaining the second position information of the at least one scene object of the target application. The process by which the first device obtains the target resource file is mainly introduced in steps 1101-1103 of the file obtaining method in the next method embodiment; the embodiment of the present invention introduces the present step from the server side, as performed by the server.
The scene objects refer to the environmental objects in the second virtual scene other than application roles; the second position information is, for example, the position information of trees and houses in the second virtual scene. The second position information of a scene object may be the position of the center of gravity of the scene object. In one possible embodiment, the second position information may be represented by position coordinates, which may be the coordinates of the scene object in a three-dimensional coordinate system of the second virtual scene. The position coordinates of the scene object may be the coordinate point of the center of gravity of the scene object in the three-dimensional coordinate system.
In one possible embodiment, the target resource file further includes the shape, orientation, or material information of the at least one scene object. The shape refers to the outline that the scene object presents in the second virtual scene, and the orientation refers to the direction the scene object faces in the second virtual scene. The material information refers to physical attributes, such as material and mass, of the virtual object represented by the scene object; for example, the material information may include the friction coefficient and elasticity coefficient of the surface of the scene object. The server can then also obtain, from the target resource file according to the target format model, the shape, orientation, and material information of the at least one scene object. The target resource file stores the shape and orientation of the scene objects based on elements such as the points, lines, and faces constituting the scene objects, and the server can subsequently reproduce the scene objects based on these point, line, and face elements. For each scene object, the point, line, and face elements constituting the scene object can be represented in the target resource file using vertices and serial numbers. A scene object corresponds to multiple vertices, each vertex corresponds to one serial number, and the serial number is used to indicate the connection order of the corresponding vertex among the multiple vertices the scene object includes. In this step, for each scene object, the server can parse the second position information, shape, orientation, and/or material information of the scene object from the target resource file based on the target format model of the target resource file, and load the second position information, shape, orientation, and/or material information into the memory of the server. The data in the target resource file may include the vertices indicating the position, shape, and orientation of the scene object, together with the serial numbers; the server then loads these vertices and serial numbers into memory.
In one possible embodiment, a recognition application is installed on the server, and the recognition application can be used to perform position recognition on the target application roles of multiple applications. In one possible scenario, in the recognition application, the user can trigger a position recognition instruction based on the target application among the multiple applications. The server performs position recognition on the target application role of the target application based on the position recognition instruction triggered by the user. This step may then be: when the position recognition instruction is received, the server obtains, based on the physics engine component of the target application, the second position information of the at least one scene object in the second virtual scene, the position recognition instruction being used to instruct position recognition of the target application role of the target application. The application interface of the recognition application may display the application identifiers of the multiple applications; when the server detects that the application identifier of the target application among the multiple applications has been triggered, it receives the position recognition instruction. Of course, the position recognition instruction may also be triggered by a target voice command; the embodiment of the present invention does not specifically limit this.
The target format model may be the obj format model, which can be recognized by the server and can also be recognized by the data reading logic of a variety of 3D software, giving it very high versatility. On the server, data reading logic corresponding to the obj format model is configured; encapsulating the file with the obj format model and transmitting it allows the server to rapidly read the second position information of the at least one scene object from the target resource file.
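In the Wavefront obj format referenced here, vertex positions are stored on `v` records and faces on `f` records whose indices are 1-based. A minimal sketch of the single data reading logic the server would need (the function name `parse_obj` is illustrative, not from the patent):

```python
def parse_obj(text):
    # Minimal reader for obj 'v' (vertex) and 'f' (face) records.
    # Face indices are 1-based per the obj convention, so convert to 0-based.
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(p) for p in parts[1:4]))
        elif parts[0] == "f":
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

# A single-triangle scene object in obj form.
sample = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
verts, faces = parse_obj(sample)
```

Because the same reader handles any game's scene data once it has been exported to obj, one reading logic suffices, which is the versatility gain the next paragraph describes.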
Compared with the prior art, since the resource files of different game applications use different formats, a server would need to configure separate data-reading logic for each format, which lowers the efficiency of position anomaly identification and hurts versatility. In the embodiments of the present invention, when the second location information is obtained via the physics engine component, it is first encapsulated into the target resource file according to the obj format model, so the server only needs one set of data-reading logic based on that model. The application-character position anomaly identification method of the embodiments of the present invention is therefore applicable to game applications whose resource file formats differ, improving the versatility of the method. Moreover, since no per-game data-reading logic is required, position anomaly identification can be performed simultaneously on the target application characters of multiple game applications, which greatly improves the efficiency of position anomaly identification.
202. The server creates a first virtual scene of the target application according to the second location information of the at least one scenario object.
The first virtual scene simulates the second virtual scene that the target application displays on the first device. The server assembles the at least one scenario object into the first virtual scene of the target application according to the second location information of the at least one scenario object.
In this step, an identification application is installed on the server, and the server may perform position anomaly identification on the target application character based on the identification application. The server may create a target virtual space in its physics engine component. According to the second location information of the at least one scenario object, the server adds the at least one scenario object into the target virtual space, obtaining the first virtual scene of the target application. The server may establish a three-dimensional coordinate system in the target virtual space. For each scenario object, the process of adding the scenario object into the target virtual space may be: the server adds the scenario object to the position in the target virtual space corresponding to the object's position coordinates. By adding the multiple scenario objects one by one according to this process, the server reproduces the second virtual scene on the server as the first virtual scene.
The first virtual scene may be the same size as the second virtual scene or a different size. When the two sizes differ, the server scales the position coordinates of the scenario objects in the second virtual scene by the zoom factor of the first virtual scene relative to the second virtual scene, then adds the scenario objects into the target virtual space based on the scaled position coordinates, obtaining the scaled first virtual scene. Of course, the server may also create, according to the original size of the second virtual scene, a first virtual scene of the same size as the second virtual scene. This is not specifically limited in the embodiments of the present invention.
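The coordinate scaling described above can be sketched as follows; the uniform scaling and the zoom-factor value are illustrative assumptions, since the embodiment does not fix them:

```python
def scale_position(position, zoom_factor):
    """Scale a scenario object's position coordinates from the second
    virtual scene into a resized first virtual scene."""
    return tuple(c * zoom_factor for c in position)

# A first virtual scene built at half the size of the second virtual scene:
scaled = scale_position((30.0, 0.0, 12.0), 0.5)  # -> (15.0, 0.0, 6.0)
```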
When the target resource file further includes the shape, orientation, and material information of the scenario objects, the server also determines, from the shape, orientation, and material information of each scenario object, the object's outline, orientation, and material in the target virtual space. The process of adding a scenario object to the first virtual scene may include: in the target virtual space, the server connects the multiple vertices corresponding to the at least one scenario object according to the connection order indicated by each vertex's index, obtaining the at least one scenario object, where the multiple vertices indicate the position, shape, and orientation of the scenario object in the first virtual scene; the server then sets the material information of the at least one scenario object in the physics engine component. That is, the server connects the vertices that indicate the object's position, shape, and orientation in the order indicated by their indices, forming point, line, and surface elements, and constructs the scenario object from those elements. Each vertex may be a three-dimensional coordinate point in the target virtual space. Through this adding process, the server determines the concrete form of each scenario object in the first virtual scene, uniquely determining a scenario object in the first virtual scene from aspects such as position, shape, orientation, and material.
For the creation of the target virtual space, the server may create a virtual three-dimensional space in its physics engine component and set the physical parameters of that space; that is, the server initializes the virtual three-dimensional space so that it can simulate a real physical space, thereby obtaining the target virtual space. Further, the server may establish a three-dimensional coordinate system in the target virtual space. The physical parameters may be configured as needed; this is not specifically limited in the embodiments of the present invention. For example, the physical parameters may include gravitational acceleration, and the server may set the gravitational acceleration of the target virtual space to 9.8 m/s². Of course, the physical parameters may also include air resistance, friction coefficients, and the like.
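A hedged sketch of initializing such a target virtual space follows; the class and parameter names (`gravity`, `air_resistance`, `friction`) are illustrative assumptions, not the API of any actual physics engine:

```python
from dataclasses import dataclass, field

@dataclass
class TargetVirtualSpace:
    """Simplified stand-in for an initialized physics scene."""
    gravity: float = 9.8          # m/s^2, as in the example above
    air_resistance: float = 0.0
    friction: float = 0.5
    objects: list = field(default_factory=list)

    def add_object(self, name, position):
        # Place a scenario object at its (possibly scaled) coordinates.
        self.objects.append((name, position))

space = TargetVirtualSpace()
space.add_object("house", (10.0, 0.0, 4.0))
```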
The physics engine component may be the PhysX physics engine. The server may create each scenario object in the PhysX engine and, based on the vertices and indices, load each scenario object in units of triangle meshes into the target virtual space established in the PhysX engine, obtaining the first virtual scene.
As shown in FIG. 3, the server establishes the first virtual scene in the physics engine component. The first virtual scene may be a virtual grassland scene; FIG. 3 shows the scenario objects of part of the virtual grassland scene, which may include scenario objects such as trees and houses. As shown in FIG. 4, FIG. 4 is a schematic diagram of the actual display effect of that part of the virtual grassland scene; the actual scene picture of the first virtual scene can be seen more clearly from FIG. 4.
It should be noted that steps 201–202 above are one implementation of the step "the server obtains the first virtual scene of the target application": the server first obtains the second location information of the scenario objects through the storage space indicated by the physics engine component of the target application, and then creates the first virtual scene based on that second location information. In another possible embodiment, the server may obtain and store the second location information of the at least one scenario object in advance; the server then reads the second location information of the at least one scenario object directly from its local storage space and creates the first virtual scene according to that location information. Of course, the step of obtaining the first virtual scene of the target application may also be implemented in other ways; this is not specifically limited in the embodiments of the present invention.
203. The server obtains first location information of the at least one target application character in the second virtual scene.
A user of the target application can control a target application character to perform a variety of behaviors in the second virtual scene displayed on the first device of the target application. The server may obtain the first location information of the at least one virtual user object in the second virtual scene from a target device that stores the first location information of the target application character.
The server may obtain the first location information in real time, or may obtain the first location information from the historical behavior records of the target application character. Accordingly, this step can be implemented in the following two ways.
First way, the server receive the first position at least one target application role that third equipment is sent Information.
The third device is the terminal where the user corresponding to the at least one target application character is located, or the background server of the target application. The server can perform position anomaly identification on the target application character in real time while the user of the target application is playing.
The first location information may include the position of the target application character in the second virtual scene. The position of the target application character may be the position coordinates of the character's center of gravity in the second virtual scene; the third device obtains the position coordinates of the target application character in the second virtual scene in real time and sends them to the server, and the server receives the position coordinates sent by the third device. The position coordinates may include the barycentric coordinates of the target application character.
In one possible embodiment, the first location information may also include the position, in the second virtual scene, of an associated object of the target application character. This step may then be: the server receives the first location information of the at least one target application character sent by the third device; according to the first location information, the server obtains the position of the target application character in the second virtual scene, or obtains the positions, in the second virtual scene, of the target application character and of the associated object of the target application character.
An associated object refers to an object whose position, action logic, or the like has an association relationship with the target application character. In the embodiments of the present invention, associated objects may include but are not limited to: a body part of the target application character itself, a part-associated object of the target application character, or a behavior-display object of the target application character. A body part of the character itself may include a body part of the target application character or a virtual decorative article on such a body part. A part-associated object may include but is not limited to: a virtual weapon held by the target application character, a virtual vehicle carried, a virtual backpack, and the like; for example, the virtual weapon may be a virtual gun or virtual tool, and the virtual vehicle may be a virtual car, virtual parachute, virtual skateboard, and the like. A behavior-display object may include a virtual object whose display is triggered by a target behavior of the target application character: for example, a virtual bullet fired by the target application character or a target object in the virtual bullet's trajectory; for another example, a virtual grenade thrown by the target application character, or the explosion that appears in the area where the grenade was thrown; for yet another example, the object the target application character can observe when using a virtual telescope. In addition, the position of an associated object of the target application character may also be indicated by position coordinates; the process by which the server obtains the position of the associated object is similar to the process of obtaining the position of a virtual object described above and is not repeated here.
The second way, server receive the historical behavior note at least one target application role that the 4th equipment is sent Record obtains the first location information of at least one target application role from historical behavior record.
The historical behavior records indicate the historical behaviors of the at least one target application character in the second virtual scene, and the fourth device is the background server of the target application.
In this step, the server may send an acquisition request to the fourth device, where the acquisition request asks for the historical behavior records of the at least one target application character of the target application and may carry the application identifier of the target application. The fourth device receives the acquisition request and, based on the acquisition request, sends the historical behavior records of the at least one target application character to the server, and the server receives the historical behavior records of the at least one target application character.
The historical behavior records may be the playback data of the target application, which includes the first location information of the target application character in the second virtual scene while the character was performing historical behaviors; the server can extract the first location information of the target application character from the historical behavior records.
In one possible embodiment, the acquisition request may also ask for the historical behavior records of at least one target application character that satisfies a goal condition, and the acquisition request may carry the goal condition. The goal condition may include but is not limited to: the historical behavior records of the at least one target application character during a target time period, or the historical behavior records of at least one character belonging to a target object type. The target time period and the target object type may be set as needed; this is not specifically limited in the embodiments of the present invention. For example, the target time period may be 12:00 to 24:00 each day, or 20:00 to 22:00, and the like. The target object type may be a top-player type whose game rating exceeds a target level, a hardcore-player type whose play frequency exceeds a target frequency, and the like.
Of course, the server may also obtain the historical behavior records of multiple target application characters and, based on the goal condition, extract from the historical behavior records of the multiple target application characters the records of the at least one target application character that satisfies the goal condition, then obtain the first location information of the at least one target application character from those records.
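Filtering historical behavior records by a goal condition such as a target time period can be sketched as follows; the record structure, with its `hour` field, is a hypothetical simplification:

```python
def filter_records(records, start_hour, end_hour):
    """Keep only records whose acquisition hour falls in [start_hour, end_hour),
    e.g. the 20:00-22:00 target time period mentioned above."""
    return [r for r in records if start_hour <= r["hour"] < end_hour]

records = [
    {"character": "A", "hour": 21, "position": (3.0, 0.0, 1.0)},
    {"character": "B", "hour": 9,  "position": (7.0, 0.0, 2.0)},
]
evening = filter_records(records, 20, 22)  # only character A's record remains
```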
The first location information includes at least the position of the target application character in the second virtual scene; in addition, the first location information may also include the position of the associated object of the target application character in the second virtual scene. The process by which the server obtains the first location information of the at least one target application character is similar to the process of obtaining the first location information in the first way above and is not repeated here.
It should be noted that, in the first way above, the server may also obtain from the third device, based on a goal condition, the first location information of at least one target application character that satisfies the goal condition. The process by which the server obtains that first location information from the third device is similar to the process in the second way above and is not repeated here.
In one possible embodiment, the server may also obtain the shape, orientation, and the like of the at least one target application character from the third device or the fourth device. Taking the third device as an example, the third device obtains the shape and orientation of the at least one target application character and sends them to the server. The shape of a target application character indicates the character's appearance outline, and the orientation indicates the direction the character faces. Of course, the server may also obtain vitality-related information of the at least one target application character from the third device, for example the character's life index, health points, combat level, and similar information. This is not specifically limited in the embodiments of the present invention.
In one possible embodiment, the third device or the fourth device may also send the application identifier of the target application to the server. The server receives the application identifier sent by the third device or the fourth device, determines the target application corresponding to the target application character based on the application identifier, and subsequently performs position anomaly identification on the target application character based on the first virtual scene of that target application.
204. The server determines the position identification result of the at least one target application character according to the first location information of the at least one target application character and the at least one scenario object in the first virtual scene.
The position identification result indicates whether the position of the at least one target application character is abnormal. Whether a position is abnormal may be judged based on a goal rule of the target application, where the goal rule refers to a rule within the target application that constrains the behavior of a user's target application character. For example, the goal rule may include: a target application character must detour around an obstacle encountered in the second virtual scene; a target application character can only move on land in the second virtual scene; and the like. The server can perform position anomaly identification on the target application character based on the positions of the scenario objects in the first virtual scene. In this step, the server may determine the third location information of the at least one scenario object in the first virtual scene; according to the first location information of the at least one target application character and the third location information of the at least one scenario object, the server performs position anomaly identification on the at least one target application character; when the position of the at least one target application character overlaps the position of any scenario object, the server determines that the position of the at least one target application character is abnormal.
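The overlap judgment above can be illustrated with an axis-aligned bounding-box test, a deliberate simplification of the shape-aware check the embodiment describes:

```python
def position_abnormal(character_pos, obstacles):
    """Return True when the character's position falls inside the axis-aligned
    bounding box (min corner, max corner) of any scenario object."""
    x, y, z = character_pos
    for mn, mx in obstacles:
        if mn[0] <= x <= mx[0] and mn[1] <= y <= mx[1] and mn[2] <= z <= mx[2]:
            return True  # position overlaps a scenario object -> abnormal
    return False

wall = (((0.0, 0.0, 0.0), (1.0, 3.0, 10.0)),)  # a thin wall volume
inside_wall = position_abnormal((0.5, 1.0, 5.0), wall)     # True: inside the wall
on_open_ground = position_abnormal((4.0, 1.0, 5.0), wall)  # False: clear of it
```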
In one possible embodiment, the server may create a simulated character in the first virtual scene based on the first location information of the at least one target application character, where the simulated character simulates the position of the target application character in the second virtual scene. The server may also reconstruct, in the first virtual scene, a simulated character whose features match the target application character, based on the shape, orientation, and vitality-related information of the at least one target application character. The server then performs position anomaly identification on the target application character according to the position of the simulated character in the first virtual scene and the positions of the at least one scenario object in the first virtual scene.
In another possible embodiment, the server may also perform anomaly identification on the target application character directly based on the character's first location information. The server may base the identification on the positions of all scenario objects in the first virtual scene, or may select only the scenario objects around the target application character for position anomaly identification. In the latter case, the server obtains, according to the first location information of the target application character, the third location information of the at least one scenario object within the target scene range of the character, and performs position anomaly identification on the character according to that third location information and the character's first location information. The target scene range may be the range within a threshold distance of the target application character. Of course, the target scene range may be configured as needed; for example, the target scene range may be the circular scene area within 30 meters of the target application character.
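Selecting only the scenario objects within the target scene range, such as the 30-meter radius above, might look like the following sketch:

```python
import math

def objects_in_range(character_pos, objects, threshold=30.0):
    """Return the names of scenario objects whose position lies within
    `threshold` meters of the target application character."""
    nearby = []
    for name, pos in objects:
        if math.dist(character_pos, pos) < threshold:
            nearby.append(name)
    return nearby

objs = [("tree", (5.0, 0.0, 5.0)), ("house", (100.0, 0.0, 0.0))]
print(objects_in_range((0.0, 0.0, 0.0), objs))  # ['tree']
```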
In one possible embodiment, when the server has also determined the shape, orientation, and material information of the scenario objects in the first virtual scene, the server may perform position anomaly identification on the target application character according to the third location information, shape, orientation, and material information of the scenario objects together with the first location information of the target application character. When the position of the target application character overlaps the space occupied by a scenario object, the server determines that the position of the target application character is abnormal; when the position of the target application character does not overlap the space occupied by the scenario object, the server determines that the position of the target application character is normal.
In the embodiments of the present invention, the physics engine component of the server has an identification function: a target identification strategy may be pre-configured in the physics engine component, and the server may perform position anomaly identification on the target application character based on that strategy, thereby realizing the identification function. The process may be: according to the first location information of the at least one target application character and the third location information of the at least one scenario object in the first virtual scene, the server performs position anomaly identification on the at least one target application character based on the target identification strategy of the physics engine component. The target identification strategy may include but is not limited to: a target recognition algorithm, a recognition cycle, and the like. The server then performs position anomaly identification on the target application character according to the first location information of the character and the third location information of the scenario objects, at the recognition cycle, using the target recognition algorithm.
The target recognition algorithm may include collision query algorithms, for example a raycast query algorithm and an overlap query algorithm. The raycast query algorithm identifies the target application character by defining a starting point coordinate and a ray vector. The overlap query algorithm identifies the target application character by checking, based on the shapes of the scenario objects, whether the character and the scenario objects cover one another. Taking the raycast query algorithm as an example, this step may be: the server determines the starting point coordinate and the ray vector of the target application character based on the multiple pieces of first location information of the at least one target application character at multiple consecutive acquisition times; according to the starting point coordinate, the ray vector, and the third location information of the at least one scenario object around the target application character, the server identifies whether the at least one target application character collides with the at least one surrounding scenario object; when the at least one target application character collides with at least one surrounding scenario object, the server determines that the position of the at least one target application character overlaps the position of that scenario object. The server may base this on the multiple pieces of first location information of the target application character in a continuous time period, where the acquisition time corresponding to each piece of first location information may be the timestamp at which the target application character was at the corresponding position. If the target application character collides with a surrounding scenario object in the first virtual scene, it is determined that the character's position overlaps that scenario object's position and that the character's position is abnormal. The server may determine the starting point coordinate from the piece of first location information with the earliest acquisition time among the multiple pieces, and determine the ray vector, in order of acquisition time, from the pieces of first location information other than the earliest one; that is, the server may connect the position coordinates corresponding to the multiple pieces of first location information to establish the ray vector of the target application character.
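Building the starting point coordinate and ray vector from timestamped first location information, as described above, can be sketched as:

```python
def ray_from_samples(samples):
    """samples: list of (timestamp, (x, y, z)). The earliest sample gives the
    starting point; the direction is taken toward the latest sample."""
    ordered = sorted(samples, key=lambda s: s[0])
    origin = ordered[0][1]
    latest = ordered[-1][1]
    direction = tuple(b - a for a, b in zip(origin, latest))
    return origin, direction

samples = [(12, (1.0, 0.0, 0.0)), (10, (0.0, 0.0, 0.0)), (11, (0.5, 0.0, 0.0))]
origin, direction = ray_from_samples(samples)
# origin == (0.0, 0.0, 0.0); direction == (1.0, 0.0, 0.0)
```

The resulting ray can then be fed to whatever raycast query the physics engine provides; chaining all intermediate samples into a polyline, as the text also allows, is a straightforward extension.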
Of course, the server may also define a callback function that calls the target recognition algorithm to identify the target application character. In one possible scenario, based on the raycast query algorithm, the server can use the starting point coordinate and ray vector of the target application character to identify whether the character collides with the ground, the walls of a house, or other obstacles in the virtual scene. Alternatively, the server may perform surround identification on the target application character based on the overlap query algorithm and raycast queries, that is, identify whether the target application character is surrounded by surrounding obstacles or intersects them. The process may be: according to the first location information of the at least one target application character, the server generates the three-dimensional solid object corresponding to the at least one target application character in the first virtual scene; according to the three-dimensional solid object and the third location information of the at least one scenario object, the server identifies whether the at least one scenario object overlaps the three-dimensional solid object; when the at least one scenario object overlaps the three-dimensional solid object, the server determines that the position of the at least one target application character overlaps the position of that scenario object. The three-dimensional solid object represents the image of the target application character in the first virtual scene and may be a capsule body; the server may generate the capsule body according to a capsule generation algorithm and the first location information. Of course, the server may also segment the target application character into multiple segments and, using the raycast query algorithm with the character's starting point coordinate and ray vector, judge whether each segment collides with a scenario object; if a segment collides, it is determined that the segment is spatially blocked with respect to the target application character. That is, if the target application character is on one side of a scenario object, the character cannot see through the object to what is on the other side, and the character's visibility is thereby identified.
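The capsule-body overlap query can be illustrated with a point-to-segment distance test: a sphere-like obstacle overlaps the capsule when its centre lies within (capsule radius + obstacle radius) of the capsule's axis segment. This is a generic geometric sketch, not the actual PhysX API:

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the segment a-b (the capsule's axis)."""
    ab = [bi - ai for ai, bi in zip(a, b)]
    ap = [pi - ai for ai, pi in zip(a, p)]
    denom = sum(c * c for c in ab)
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    closest = [ai + t * c for ai, c in zip(a, ab)]
    return math.dist(p, closest)

def capsule_overlaps_sphere(cap_a, cap_b, cap_r, center, r):
    """Capsule (axis cap_a-cap_b, radius cap_r) vs. sphere (center, radius r)."""
    return point_segment_distance(center, cap_a, cap_b) <= cap_r + r

# Capsule standing upright (axis from feet to head), radius 0.4:
hit = capsule_overlaps_sphere((0, 0, 0), (0, 2, 0), 0.4, (0.5, 1.0, 0.0), 0.2)
# hit is True: the small sphere touches the capsule's side.
```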
In one possible embodiment, if the server also obtains the position of the associated object of the target application character, that is, the first location information further includes the position of the associated object, the server may identify the associated object of the target application character according to the first location information of the character and the third location information of the scenario objects. When the position of the associated object overlaps a scenario object, the server determines that the target application character is a malicious object. In one possible scenario, taking the associated object as a virtual vehicle carried by the target application character: when the virtual vehicle overlaps a scenario object, the server determines that the target application character is a malicious object.
As shown in FIG. 5, because the game's virtual scene has been reconstructed, the embodiments of the present invention remain highly accurate in visibility determination even when facing complex models. For example, in FIG. 5 there is a complex model between points A and B, a complex sphere object; the line between A and B in FIG. 5 passes just outside the bottom arc of the model and does not pass through the sphere object, so the AB line is not blocked by any object. That is, the line of sight of a target application character at point A can normally pass through and reach the position of point B. Since the AB line passes just outside the arc at the bottom of the sphere, the related-art approach, which merely approximates the complex object as a box using thresholds, would mistakenly conclude that the AB line passes through the interior of the complex object, whereas the embodiments of the present invention can judge more accurately that the AB line does not pass through the sphere object, and the judgment better matches the actual scene. Therefore, compared with the prior art, in which judging only by coordinate thresholds yields low accuracy or is nearly impossible, especially when the virtual scene contains structurally complex scenario objects, the embodiments of the present invention can judge accurately and greatly improve the accuracy of character position anomaly identification. As shown in FIG. 6, FIG. 6 is a schematic diagram of the actual interface of the virtual scene in FIG. 5; the actual scene picture of the virtual scene can be seen more clearly from FIG. 6.
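The FIG. 5 visibility judgment, whether segment AB passes through a sphere object, reduces to comparing the sphere centre's distance from the segment with the sphere's radius. A sketch under that simplification (the coordinates are invented for illustration):

```python
import math

def segment_blocked_by_sphere(a, b, center, radius):
    """True when segment a-b passes through the sphere (line of sight blocked)."""
    ab = [bi - ai for ai, bi in zip(a, b)]
    ac = [ci - ai for ai, ci in zip(a, center)]
    denom = sum(c * c for c in ab)
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ac, ab)) / denom))
    closest = [ai + t * c for ai, c in zip(a, ab)]
    return math.dist(center, closest) < radius

# AB skims just outside the bottom arc of a unit sphere centred at the origin:
blocked = segment_blocked_by_sphere((-2.0, -1.1, 0.0), (2.0, -1.1, 0.0), (0.0, 0.0, 0.0), 1.0)
# blocked is False: the sight line clears the sphere, matching the FIG. 5 judgment.
```

A threshold-box approximation of the same sphere would report the segment as blocked, which is exactly the related-art error described above.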
Further, when the position of the target application character is abnormal, the server may send an instruction message to the fourth device so that the fourth device takes corresponding punitive measures against the target application character, or the server may directly send a notification message to the terminal where the target application character is located and take corresponding punitive measures against the character, for example, banning the game account associated with the target application character. As shown in FIG. 7, the server may send a notification message to the terminal where the target application character is located, the terminal may display the notification message, and the server may directly take the target application character offline. As shown in FIG. 8, FIG. 8 is a schematic diagram of the actual display of the notification message in FIG. 7; the actual display interface of the notification message can be seen more clearly from FIG. 8.
As shown in FIG. 9, FIG. 9 is an overall architecture diagram of the embodiments of the present invention. In FIG. 9, the left side represents the first device where the target application is located, and the right side represents the server. The first device of the target application includes a game presentation layer, a game engine layer, and a physics engine layer, where the physics engine layer includes the storage addresses of the object data of the scenario objects. Based on those storage addresses, the first device obtains the location information, shape, orientation, and/or material information of the scenario objects, encapsulates the location information, shape, orientation, and/or material information into a target resource file according to the object format model, and sends the target resource file to the server. Based on the target resource file, the server creates a first virtual scene in the physics engine component, which simulates the second virtual scene displayed by the first device when the target application runs on the first device. After the server creates the first virtual scene, it can perform position anomaly identification on each target application character of the target application in the first virtual scene, for example identifying whether a target application character has clipped through the ground, whether it is surrounded by obstacles, identifying the visibility of the character, and the like.
Fig. 10 is a flowchart of the server performing object position abnormality recognition. Based on the technical process described in steps 201-204 above, the embodiment of the present invention introduces the object position abnormality recognition process taking the flow shown in Fig. 10 as an example. As shown in Fig. 10, the server initializes a physics scene based on the physics engine component, parses the location information, shape, direction, and material information of the scenario objects from the target resource file, and, taking each scenario object as a unit, reads into memory the vertices and serial numbers used to indicate the location, shape, and direction of each scenario object. In memory, according to the vertices and serial numbers, each scenario object is created in units of triangle meshes, and each scenario object is loaded into the initialized PhysX physics scene to obtain the first virtual scene. Then, based on the position coordinates of a target application role, the server performs position abnormality recognition on the target application role in the first virtual scene, for example, by performing a ray collision (raycast) query or a sweep/overlap query.
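The triangle-mesh loading step above can be sketched as follows. This is a minimal illustration, not the actual PhysX cooking pipeline: the resource-file layout (a vertex list plus a flat index list per object, every three indices describing one triangle) is an assumption for demonstration.

```python
from dataclasses import dataclass


@dataclass
class TriangleMesh:
    """One scenario object, rebuilt from its vertices and serial-number (index) list."""
    name: str
    triangles: list  # list of ((x,y,z), (x,y,z), (x,y,z)) tuples


def build_scene(resource):
    """Rebuild the first virtual scene from a parsed target resource file.

    `resource` is assumed to map object names to
    {'vertices': [...], 'indices': [...]}, where every three consecutive
    indices describe one triangle, as in the loading step of Fig. 10.
    """
    scene = []
    for name, data in resource.items():
        verts, idx = data["vertices"], data["indices"]
        tris = [
            (verts[idx[i]], verts[idx[i + 1]], verts[idx[i + 2]])
            for i in range(0, len(idx), 3)
        ]
        scene.append(TriangleMesh(name, tris))
    return scene


# A one-object "scene": a quad split into two triangles.
resource = {
    "box": {
        "vertices": [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
        "indices": [0, 1, 2, 0, 2, 3],
    }
}
scene = build_scene(resource)
print(len(scene[0].triangles))  # → 2
```

The resulting mesh list stands in for the objects loaded into the initialized physics scene, against which the subsequent raycast and overlap queries run.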
In the embodiment of the present invention, the server can simulate, through the first virtual scene, the second virtual scene that the target application displays on the first device, and determine the position recognition result of the at least one target application role according to the first location information of the target application role in the second virtual scene and the at least one scenario object in the first virtual scene. Since the position abnormality recognition of the target application role in the first virtual scene is based on each scenario object, whether the position of the target application role is abnormal can be accurately identified, which greatly improves the accuracy of application role position abnormality recognition based on the scenario objects.
Figure 11 is a flowchart of an application role position abnormality recognition method provided by an embodiment of the present invention. The method can be applied on the first device, which may be a terminal. Referring to Fig. 11, the method includes:
1101. The terminal obtains, based on the physics engine component of the target application, the second location information of at least one scenario object of the target application.
The second location information is used to indicate the position of a scenario object in the second virtual scene displayed by the target application on the first device, and the physics engine component is used to indicate the storage address of the second location information. According to the storage address indicated by the physics engine component of the target application, the terminal obtains the second location information of the at least one scenario object from the storage space corresponding to the storage address.
The second location information may be obtained based on an acquisition request from the server. The process may be as follows: when the terminal receives the acquisition request of the second device, the terminal obtains, according to the storage address indicated by the physics engine component of the target application, the scene data of the second virtual scene from the storage address, where the scene data is used to indicate the at least one scenario object included in the second virtual scene; the terminal then extracts the second location information of the at least one scenario object from the scene data. The acquisition request is used to request the second location information of the at least one scenario object.
In one possible embodiment, the scene data further includes the shape, direction and/or material information of the scenario objects. The terminal may also extract the second location information, shape, direction and/or material information of the at least one scenario object from the scene data.
A data acquisition application may be installed on the terminal, and the terminal may selectively acquire the scene data of scenario objects of a target type based on the object type of the scenario objects. When receiving the acquisition request of the server, the terminal displays a data selection interface; according to the target type selected among multiple object type options in the data selection interface, the terminal obtains the target scene data of the second virtual scene from the storage address, where the target scene data is used to indicate the scenario objects of the target type in the second virtual scene. The multiple object type options are respectively used to indicate multiple object types. The multiple object types may include static objects and dynamic objects, where a static object refers to an object that is stationary in the virtual scene, such as a house, a tree, or a stone, and a dynamic object may include an object that moves in the virtual scene, such as a car or a person. The scene data may include attribute information of the scenario objects, and the attribute information includes type information indicating whether the object type of a scenario object is a static object or a dynamic object. According to the type information in the attribute information and the selected target type, the terminal can determine the scenario objects of the target type from among the multiple scenario objects. The scene data may be shape data.
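The type-based selection described above amounts to a simple filter over the attribute information. A minimal sketch, assuming each scenario object's scene data carries a 'static'/'dynamic' type attribute (the dictionary layout is an illustrative assumption):

```python
def select_by_type(scene_objects, target_type):
    """Filter scenario objects by the type attribute carried in their scene data."""
    return [o for o in scene_objects if o["attributes"]["type"] == target_type]


scene_objects = [
    {"name": "house", "attributes": {"type": "static"}},
    {"name": "car",   "attributes": {"type": "dynamic"}},
    {"name": "tree",  "attributes": {"type": "static"}},
]
print([o["name"] for o in select_by_type(scene_objects, "static")])
# → ['house', 'tree']
```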
In one possible embodiment, the terminal may also determine the acquisition mode of the scenario objects, and the acquisition mode may include full acquisition and segmented acquisition. Full acquisition refers to obtaining, at one time, the scene data of all the scenario objects included in the second virtual scene. Segmented acquisition refers to first dividing the second virtual scene and then obtaining the scenario objects of the second virtual scene over multiple passes. For example, the terminal may divide the second virtual scene into multiple sub-virtual scenes and, each time, taking one sub-virtual scene as a unit, obtain the location information, direction, shape and/or material information of the scenario objects included in that sub-virtual scene.
In one possible scenario, for a virtual scene with a small data volume, the terminal may choose to obtain the scene data in the full-export manner, for example, the virtual scene corresponding to a small map; for a virtual scene with a large data volume, the terminal may choose to obtain the scene data in the segmented-export manner, for example, the virtual scene corresponding to a large map.
It should be noted that, since the memory space occupied by a small map is small, the terminal can read all of its scene data into memory at once, whereas a large map is loaded into memory in parts. Therefore, when full export is selected, the terminal can export all scene data directly from memory and thus directly obtain the scene data of the scenario objects of the full-range virtual scene. For a large map, the entire map needs to be divided into multiple regions, and then, taking each region as a unit, the object data of the scenario objects in each region is exported one by one; the number of divisions is specified by the user, but the user needs to move to the designated map region to ensure that the map of that region is fully loaded into memory. As shown in Fig. 12, in the data selection interface the user can choose to export in obj format or in Pxbin format, where Pxbin is another format recognizable by the second device. The user can choose to obtain static objects or dynamic objects and select the export mode, where direct full export refers to obtaining the location information, direction, shape and/or material information of all the scenario objects in the entire range of the second virtual scene. The data selection interface further includes the number of divisions and a designated export region; if the user selects export after scene division, the user may further specify the required number of divisions, or select the target scene region to be obtained after division. The number of divisions may be the numbers of divisions of the second virtual scene in the horizontal and vertical directions; for example, a division number of 3 × 4 means that the second virtual scene is divided into 12 sub-virtual scenes.
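The horizontal-by-vertical division can be sketched as a grid split over the map's bounding rectangle; the 3 × 4 example below yields the 12 sub-virtual scenes mentioned above. The scene dimensions and tiling convention are illustrative assumptions.

```python
def split_scene(width, height, cols, rows):
    """Divide a rectangular second virtual scene into cols x rows sub-scenes.

    Returns one (x0, y0, x1, y1) bounding box per sub-virtual scene,
    row by row from the origin corner.
    """
    dx, dy = width / cols, height / rows
    return [
        (c * dx, r * dy, (c + 1) * dx, (r + 1) * dy)
        for r in range(rows)
        for c in range(cols)
    ]


tiles = split_scene(3000, 4000, cols=3, rows=4)
print(len(tiles))  # → 12
print(tiles[0])    # → (0.0, 0.0, 1000.0, 1000.0)
```

Each returned box would then drive one pass of the segmented export, pulling only the scenario objects whose positions fall inside that sub-scene.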
In another possible embodiment, the terminal may also obtain the terrain of the virtual scene and the objects other than the terrain separately. When receiving a first acquisition instruction, the terminal obtains the terrain data of the virtual scene, where the first acquisition instruction is used to indicate obtaining the scene data corresponding to the terrain in the second virtual scene; when receiving a second acquisition instruction, the terminal obtains the data of the non-terrain objects, where the second acquisition instruction is used to indicate obtaining the scene data corresponding to the scenario objects other than the terrain in the second virtual scene. It should be noted that the user can choose whether to export the terrain, and the terminal can judge whether an object is terrain according to the data type of the physical object, so as to filter it. Since the terminal can first obtain the scene data corresponding to the terrain in the virtual scene from memory and then obtain the scene data of the other scenario objects in the virtual scene other than the terrain, there is no longer any need to obtain the scene data corresponding to the terrain again when subsequently obtaining the data of the other scenario objects, which can greatly save data acquisition time.
In one possible embodiment, the terminal may also obtain only one or more scenario objects in the virtual scene. When the terminal receives a third acquisition instruction, the terminal determines the number of divisions according to the number of scenario objects indicated by the third acquisition instruction, divides the virtual scene into multiple sub-virtual scenes according to that number of divisions, and obtains, in the segmented-export manner, the scene data corresponding to the target number of scenario objects. The third acquisition instruction is used to indicate obtaining the scene data of the target number of scenario objects in the second virtual scene. The terminal may choose segmented export without obtaining the terrain, and the number of divisions may be set based on the target number of scenario objects to be obtained. Of course, if the terminal needs to obtain a single scenario object, the number of divisions may be set relatively large to ensure that each virtual scene region includes only one scenario object; the specific size is configured according to the map, and this is not specifically limited in the embodiment of the present invention.
As shown in Fig. 13, Fig. 13 shows the aircraft on the initial island obtained in this way; the terminal may set the number of divisions relatively large to ensure that only a single virtual aircraft is obtained. As shown in Fig. 14, Fig. 14 is a schematic diagram of the actual display effect of the virtual aircraft in Fig. 13, from which the actual display effect of the virtual aircraft can be seen more clearly.
1102. The terminal stores the second location information of the at least one scenario object into a target resource file according to the target format model.
According to the target format model, the terminal may encapsulate the second location information of the at least one scenario object into the target resource file. The terminal may also obtain the shape, direction and/or material information of the scenario objects. In the physics engine component of the target application, the shape of a scenario object may specifically include one or more basic shapes that make up the scenario object; for example, a scenario object may be composed of a cuboid, a sphere, or a cube. The shape may specifically be represented by a shape vector, and the direction may specifically be represented by a direction matrix.
In one possible embodiment, information such as the barycentric coordinates, shape vector, and direction matrix of a scenario object and the material of the scenario object may be stored at the storage address indicated by the physics engine of the target application. In one possible embodiment, the target format model may be an obj format model, which stores a scenario object based on elements such as the points, lines, and faces that make up the scenario object, so that the server can subsequently reproduce the scenario object based on these point, line, and face elements. In this step, the terminal may, based on the barycentric coordinates, shape vector, and direction matrix, convert them into the coordinates of point, line, and face elements according to the target format model, and encapsulate the coordinates of the point, line, and face elements into the target resource file. Specifically, the terminal may determine the outline of the scenario object based on the shape vector, determine the direction of the scenario object based on the direction matrix, and determine the position of the scenario object based on the barycentric coordinates; according to the outline and direction of the scenario object, the terminal determines the point, line, and face elements corresponding to the scenario object in the target resource file and, based on the position, determines the coordinates corresponding to those elements, thereby accurately determining the specific presentation form of the scenario object in the second virtual scene from the three angles of position, shape, and direction.
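The conversion from barycentric coordinates, shape vector, and direction matrix to obj point/face elements can be sketched for a simple box-shaped scenario object. The field layout (half-extents as the shape vector, a 3 × 3 row-major direction matrix) is an assumption for illustration, and only two of the box's six faces are emitted for brevity.

```python
def box_to_obj(center, half_extents, rotation):
    """Serialize one box-shaped scenario object to Wavefront-obj text.

    `center` plays the role of the barycentric coordinates, `half_extents`
    the shape vector, and `rotation` (3x3, row-major) the direction matrix.
    """
    cx, cy, cz = center
    hx, hy, hz = half_extents
    # The eight corners of a unit box, scaled by the shape vector.
    corners = [(sx * hx, sy * hy, sz * hz)
               for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
    world = []
    for x, y, z in corners:
        # Orient with the direction matrix, then translate to the barycenter.
        wx = rotation[0][0] * x + rotation[0][1] * y + rotation[0][2] * z + cx
        wy = rotation[1][0] * x + rotation[1][1] * y + rotation[1][2] * z + cy
        wz = rotation[2][0] * x + rotation[2][1] * y + rotation[2][2] * z + cz
        world.append((wx, wy, wz))
    lines = [f"v {x:.6f} {y:.6f} {z:.6f}" for x, y, z in world]
    # obj face indices are 1-based; only two of the six faces shown for brevity.
    lines += ["f 1 2 4 3", "f 5 6 8 7"]
    return "\n".join(lines)


identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
obj_text = box_to_obj((10, 0, 5), (1, 1, 1), identity)
print(obj_text.splitlines()[0])  # → v 9.000000 -1.000000 4.000000
```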
1103. The terminal sends the target resource file to the second device.
The terminal sends the target resource file to the server. The target resource file is used to indicate establishing the first virtual scene on the second device and identifying the position of a target application role based on the first virtual scene. The server receives the target resource file and, through steps 201-204 of the foregoing method embodiment, performs position abnormality recognition on the target application role. The terminal may also send the application identifier of the target application to the server.
It should be noted that the terminal can obtain the scene data of the virtual scene of the target application based on the physics engine component of the target application and send the target resource file to the server, so that the server can reproduce the virtual scene of the target application in the physics engine component of the server based on the target resource file and perform role position abnormality recognition in that virtual scene, which ensures the accuracy of role position abnormality recognition. Moreover, the terminal can obtain the scene data of the scenario objects through either full acquisition or segmented acquisition, enabling the terminal to obtain the scene data flexibly; this can satisfy any need for the scene data of one or more scenario objects and is applicable to a variety of different scene-data acquisition scenarios, greatly improving the flexibility and practicability of the file acquisition process.
In the embodiment of the present invention, the terminal can obtain the second location information of at least one scenario object of the target application based on the physics engine component of the target application and, based on the target format model, send the target resource file to the server. Since the raw scene data of the scenario objects can be obtained directly based on the physics engine component, the accuracy and efficiency of file acquisition are improved. Moreover, the terminal can obtain the target resource file based on the target format model, so that the server obtains a file it can recognize, which improves the efficiency of data reading and further improves the efficiency of application role position abnormality recognition.
Figure 15 is a block diagram of an application role position recognition apparatus provided by an embodiment of the present invention. The apparatus can be applied on the second device, which may be a server. Referring to Fig. 15, the apparatus includes: an obtaining module 1501 and a determining module 1502.
The obtaining module 1501 is configured to obtain the first virtual scene of the target application, where the first virtual scene is used to simulate the second virtual scene that the target application displays on the first device;
The obtaining module 1501 is further configured to obtain the first location information of at least one target application role in the second virtual scene;
The determining module 1502 is configured to determine the position recognition result of the at least one target application role according to the first location information of the at least one target application role and at least one scenario object in the first virtual scene, where the position recognition result is used to indicate whether the position of the at least one target application role is abnormal.
Optionally, the obtaining module 1501 is further configured to obtain, based on the physics engine component of the target application, the second location information of the at least one scenario object in the second virtual scene, where the physics engine component is used to indicate the storage address of the second location information of the at least one scenario object; and to create the first virtual scene of the target application according to the second location information of the at least one scenario object.
Optionally, the obtaining module 1501 is further configured to obtain, according to the storage address indicated by the physics engine component of the target application, the target resource file from the first device on which the target application is installed, and to parse the second location information of the at least one scenario object from the target resource file according to the target format model.
Optionally, the target resource file further includes at least one of the shape, direction, or material information of the at least one scenario object.
Optionally, the obtaining module 1501 is further configured to create a target virtual space in the physics engine component of the second device, and, according to the second location information of the at least one scenario object, add the at least one scenario object into the target virtual space to obtain the first virtual scene of the target application.
Optionally, the obtaining module 1501 is further configured to, in the target virtual space, connect the multiple vertices corresponding to the at least one scenario object in the connection order indicated by the serial number corresponding to each vertex, thereby obtaining the at least one scenario object, where the multiple vertices are used to indicate the position, shape, and direction of the scenario object in the first virtual scene; and to set the material information of the at least one scenario object in the physics engine component.
Optionally, the obtaining module 1501 is further configured to, when the first virtual scene and the second virtual scene differ in size, scale the second location information of the at least one scenario object according to the zoom factor of the first virtual scene relative to the second virtual scene, and, based on the scaled second location information, add the at least one scenario object into the target virtual space to obtain the first virtual scene.
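The scaling step can be sketched as follows, assuming a uniform scalar zoom factor applied to each coordinate (the actual zoom factor could equally be per-axis):

```python
def scale_positions(positions, zoom_factor):
    """Scale scenario-object positions by the zoom factor of the first
    virtual scene relative to the second virtual scene.

    Positions are (x, y, z) tuples; a uniform scalar factor is assumed.
    """
    return [(x * zoom_factor, y * zoom_factor, z * zoom_factor)
            for x, y, z in positions]


second_scene_positions = [(100.0, 0.0, 50.0), (40.0, 8.0, 12.0)]
print(scale_positions(second_scene_positions, 0.5))
# → [(50.0, 0.0, 25.0), (20.0, 4.0, 6.0)]
```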
Optionally, the determining module 1502 is further configured to determine the third location information of the at least one scenario object in the first virtual scene; to perform position abnormality recognition on the at least one target application role based on a target recognition strategy according to the first location information of the at least one target application role and the third location information of the at least one scenario object; and, when the position of the at least one target application role overlaps with the position of any scenario object, to determine the at least one target application role as a malicious object.
Optionally, the determining module 1502 is further configured to determine the starting point coordinates and ray vector of a target application role based on multiple pieces of first location information of the at least one target application role at multiple consecutive acquisition times; to identify, according to the starting point coordinates, the ray vector, and the third location information of at least one scenario object around the target application role, whether the at least one target application role collides with the at least one surrounding scenario object; and, when the at least one target application role collides with the at least one surrounding scenario object, to determine that the position of the at least one target application role overlaps with the position of any scenario object.
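The ray collision query described above can be sketched with the classic slab method, using two consecutive role positions to derive the starting point and ray vector, and approximating each surrounding scenario object by its axis-aligned bounding box (the real engine would query against the triangle meshes themselves):

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab-method raycast: does a ray from `origin` along `direction`
    intersect the axis-aligned box [box_min, box_max]?
    """
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:   # ray parallel to this slab and outside it
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
        if t_near > t_far:         # slab intervals no longer overlap: miss
            return False
    return True


# Two consecutive role positions give the start coordinate and ray vector:
p0, p1 = (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)
ray = tuple(b - a for a, b in zip(p0, p1))
print(ray_hits_aabb(p0, ray, (4.0, 0.0, -1.0), (6.0, 2.0, 1.0)))  # → True
```

Because `t_near` starts at 0, the test only reports hits in front of the role's current position, matching a forward ray cast along its direction of motion.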
Optionally, the determining module 1502 is further configured to generate, according to the first location information of the at least one target application role, the three-dimensional solid object corresponding to the at least one target application role in the first virtual scene; to identify, according to the three-dimensional solid object and the third location information of the at least one scenario object, whether the at least one scenario object overlaps with the three-dimensional solid object; and, when the at least one scenario object overlaps with the three-dimensional solid object, to determine that the position of the at least one target application role overlaps with the position of any scenario object.
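The overlap test between the role's three-dimensional solid object and a scenario object can be sketched with an axis-aligned bounding-box check, a common first-pass approximation for such overlap queries (the engine's actual test may use the exact shapes):

```python
def aabbs_overlap(a_min, a_max, b_min, b_max):
    """Overlap query between two axis-aligned boxes: the role's 3-D solid
    object versus one scenario object. Two boxes overlap iff their
    intervals overlap on every axis.
    """
    return all(alo <= bhi and blo <= ahi
               for alo, ahi, blo, bhi in zip(a_min, a_max, b_min, b_max))


# A role-sized box centered at (5, 1, 0) against a wall-like scenario object:
role_min, role_max = (4.5, 0.0, -0.5), (5.5, 2.0, 0.5)
wall_min, wall_max = (5.0, 0.0, -10.0), (5.2, 3.0, 10.0)
print(aabbs_overlap(role_min, role_max, wall_min, wall_max))  # → True
```

A `True` result here corresponds to the "position of the target application role overlaps the position of a scenario object" condition, which then flags the role as a position anomaly.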
Optionally, the determining module 1502 is further configured to determine the third location information of the at least one scenario object in the first virtual scene; to extract, from the first location information of the at least one target application role, the location information of an associated object of the target application role; and to perform position abnormality recognition on the associated object according to the location information of the associated object and the third location information of the at least one scenario object, where, when the position of the associated object overlaps with the position of any scenario object, the position of the at least one target application role is determined to be abnormal.
Optionally, the determining module 1502 is further configured to perform position abnormality recognition on the at least one target application role based on the recognition function of the physics engine component of the first device according to the first location information of the at least one target application role and the location information of the at least one scenario object in the first virtual scene, where the target recognition strategy is configured in the physics engine component of the first device.
Optionally, the obtaining module 1501 is further configured to receive the first location information of the at least one target application role sent by a third device, where the third device is the terminal where the user corresponding to the at least one target application role is located or the background server of the target application; or to receive the historical behavior records of the at least one target application role sent by a fourth device and to obtain the first location information of the at least one target application role from the historical behavior records, where the historical behavior records are used to indicate the historical behavior of the at least one target application role in the second virtual scene, and the fourth device is the background server of the target application.
In the embodiment of the present invention, the server can simulate, through the first virtual scene, the second virtual scene that the target application displays on the first device, and determine the position recognition result of the at least one target application role according to the first location information of the target application role in the second virtual scene and the at least one scenario object in the first virtual scene. Since the position abnormality recognition of the target application role in the first virtual scene is based on each scenario object, whether the position of the target application role is abnormal can be accurately identified, which greatly improves the accuracy of application role position abnormality recognition based on the scenario objects.
Figure 16 is a block diagram of an application role position abnormality recognition apparatus provided by an embodiment of the present invention. The apparatus can be applied on the first device, which may be a terminal. Referring to Fig. 16, the apparatus includes: an obtaining module 1601, a storage module 1602, and a sending module 1603.
The obtaining module 1601 is configured to obtain, based on the physics engine component of the target application, the second location information of at least one scenario object of the target application, where the second location information is used to indicate the position of the scenario object in the second virtual scene displayed by the target application on the first device, and the physics engine component is used to indicate the storage address of the second location information;
The storage module 1602 is configured to store the second location information of the at least one scenario object into the target resource file according to the target format model;
The sending module 1603 is configured to send the target resource file to the second device, where the target resource file is used to indicate establishing the first virtual scene on the second device and identifying the position of a target application role based on the first virtual scene.
Optionally, the obtaining module 1601 is further configured to, when receiving the acquisition instruction of the second device, obtain the scene data of the second virtual scene from the storage address according to the storage address indicated by the physics engine component of the target application, where the scene data is used to indicate the at least one scenario object included in the second virtual scene; and to extract the second location information of the at least one scenario object from the scene data.
Optionally, the obtaining module 1601 is further configured to, if the scene data further includes the shape, direction, and material information of the at least one scenario object, extract the second location information, shape, direction, and material information of the at least one scenario object from the scene data.
Optionally, the obtaining module 1601 is further configured to display the data selection interface when receiving the acquisition instruction, and, according to the target type selected among the multiple object type options in the data selection interface, obtain the target scene data of the second virtual scene from the storage address, where the target scene data is used to indicate the scenario objects of the target type in the second virtual scene.
In the embodiment of the present invention, the terminal can obtain the second location information of at least one scenario object of the target application based on the physics engine component of the target application and, based on the target format model, send the target resource file to the server. Since the raw scene data of the scenario objects can be obtained directly based on the physics engine component, the accuracy and efficiency of file acquisition are improved. Moreover, the terminal can obtain the target resource file based on the target format model, so that the server obtains a file it can recognize, which improves the efficiency of data reading and further improves the efficiency of application role position abnormality recognition.
All of the above optional technical solutions can be combined in any manner to form optional embodiments of the present disclosure, which are not described in detail here.
It should be noted that, when the application role position abnormality recognition apparatus provided by the above embodiments performs application role position abnormality recognition, the division of the above functional modules is used only as an example; in practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. In addition, the application role position abnormality recognition apparatus provided by the above embodiments belongs to the same concept as the embodiments of the application role position abnormality recognition method; for the specific implementation process, refer to the method embodiments, which are not repeated here.
Figure 17 is a schematic structural diagram of a terminal provided by an embodiment of the present invention. The terminal 1700 may be: a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer. The terminal 1700 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
In general, terminal 1700 includes: processor 1701 and memory 1702.
The processor 1701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1701 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1701 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1701 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1702 may include one or more computer-readable storage media, which may be non-transient. The memory 1702 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices and flash storage devices. In some embodiments, the non-transient computer-readable storage medium in the memory 1702 is used to store at least one instruction, which is executed by the processor 1701 to implement the application role position abnormality recognition method provided by the method embodiments of the present application.
In some embodiments, terminal 1700 is also optional includes: peripheral device interface 1703 and at least one periphery are set It is standby.It can be connected by bus or signal wire between processor 1701, memory 1702 and peripheral device interface 1703.It is each outer Peripheral equipment can be connected by bus, signal wire or circuit board with peripheral device interface 1703.Specifically, peripheral equipment includes: In radio circuit 1704, touch display screen 1705, camera 1706, voicefrequency circuit 1707, positioning component 1708 and power supply 1709 At least one.
Peripheral device interface 1703 can be used for I/O (Input/Output, input/output) is relevant outside at least one Peripheral equipment is connected to processor 1701 and memory 1702.In some embodiments, processor 1701, memory 1702 and periphery Equipment interface 1703 is integrated on same chip or circuit board;In some other embodiments, processor 1701, memory 1702 and peripheral device interface 1703 in any one or two can be realized on individual chip or circuit board, this implementation Example is not limited this.
The radio frequency circuit 1704 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1704 communicates with communication networks and other communication devices through electromagnetic signals. The radio frequency circuit 1704 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1704 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1704 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to, metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1704 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1705 is a touch display screen, the display screen 1705 also has the ability to acquire touch signals on or above its surface. The touch signal may be input to the processor 1701 as a control signal for processing. In this case, the display screen 1705 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1705, arranged on the front panel of the terminal 1700; in other embodiments, there may be at least two display screens 1705, arranged on different surfaces of the terminal 1700 or in a folding design; in still other embodiments, the display screen 1705 may be a flexible display screen, arranged on a curved or folded surface of the terminal 1700. The display screen 1705 may even be set to a non-rectangular irregular shape, that is, a shaped screen. The display screen 1705 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1706 is used to capture images or video. Optionally, the camera assembly 1706 includes a front camera and a rear camera. Generally, the front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background-blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 1706 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 1707 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals, and input them to the processor 1701 for processing, or input them to the radio frequency circuit 1704 to realize voice communication. For stereo capture or noise reduction, there may be multiple microphones, arranged at different parts of the terminal 1700. The microphone may also be an array microphone or an omnidirectional microphone. The speaker is used to convert electrical signals from the processor 1701 or the radio frequency circuit 1704 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans, but also convert electrical signals into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 1707 may also include a headphone jack.
The positioning component 1708 is used to locate the current geographic position of the terminal 1700 to implement navigation or LBS (Location Based Service). The positioning component 1708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1709 is used to supply power to the various components in the terminal 1700. The power supply 1709 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1709 includes a rechargeable battery, the rechargeable battery may support wired charging or wireless charging. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, the terminal 1700 further includes one or more sensors 1710. The one or more sensors 1710 include, but are not limited to, an acceleration sensor 1711, a gyroscope sensor 1712, a pressure sensor 1713, a fingerprint sensor 1714, an optical sensor 1715, and a proximity sensor 1716.
The acceleration sensor 1711 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 1700. For example, the acceleration sensor 1711 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1701 may, according to the gravitational acceleration signals collected by the acceleration sensor 1711, control the touch display screen 1705 to display the user interface in landscape view or portrait view. The acceleration sensor 1711 may also be used to collect game or user motion data.
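As a rough illustration of how gravity components could drive the landscape/portrait decision, here is a minimal sketch; the axis convention (x along the short edge, y along the long edge) is an assumption for illustration, not something specified by this document.

```python
def choose_orientation(gx: float, gy: float) -> str:
    """Pick a display orientation from the gravity components measured
    along the device's x (short edge) and y (long edge) axes.
    The axis convention is an illustrative assumption."""
    # Gravity acting mostly along the long edge means the phone is upright.
    return "portrait" if abs(gy) >= abs(gx) else "landscape"
```

A real implementation would also apply hysteresis so the view does not flip back and forth near the 45-degree boundary.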
The gyroscope sensor 1712 can detect the body orientation and rotation angle of the terminal 1700, and may cooperate with the acceleration sensor 1711 to capture the user's 3D actions on the terminal 1700. Based on the data collected by the gyroscope sensor 1712, the processor 1701 can implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1713 may be arranged on the side frame of the terminal 1700 and/or the lower layer of the touch display screen 1705. When the pressure sensor 1713 is arranged on the side frame of the terminal 1700, it can detect the user's grip signal on the terminal 1700, and the processor 1701 performs left-hand/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1713. When the pressure sensor 1713 is arranged on the lower layer of the touch display screen 1705, the processor 1701 controls the operable controls on the UI according to the user's pressure operations on the touch display screen 1705. The operable controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 1714 is used to collect the user's fingerprint, so that the processor 1701 recognizes the user's identity from the fingerprint collected by the fingerprint sensor 1714, or the fingerprint sensor 1714 recognizes the user's identity from the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 1701 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and so on. The fingerprint sensor 1714 may be arranged on the front, back, or side of the terminal 1700. When a physical button or a manufacturer logo is provided on the terminal 1700, the fingerprint sensor 1714 may be integrated with the physical button or the manufacturer logo.
The optical sensor 1715 is used to collect ambient light intensity. In one embodiment, the processor 1701 may control the display brightness of the touch display screen 1705 according to the ambient light intensity collected by the optical sensor 1715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1705 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 1705 is turned down. In another embodiment, the processor 1701 may also dynamically adjust the shooting parameters of the camera assembly 1706 according to the ambient light intensity collected by the optical sensor 1715.
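The brightness rule described above can be sketched as a simple step adjustment; the lux threshold, step size, and [0.0, 1.0] brightness scale are illustrative assumptions, not values from this document.

```python
def adjust_brightness(current: float, ambient_lux: float,
                      threshold: float = 200.0, step: float = 0.1) -> float:
    """Nudge the display brightness toward the ambient light level.
    Threshold and step values are assumptions for illustration."""
    if ambient_lux > threshold:
        current += step   # bright surroundings: turn the brightness up
    else:
        current -= step   # dim surroundings: turn it down
    return max(0.0, min(1.0, current))  # clamp to the valid range
```

In practice the mapping is usually a smooth curve rather than a single threshold, but the direction of adjustment is the same.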
The proximity sensor 1716, also called a distance sensor, is usually arranged on the front panel of the terminal 1700. The proximity sensor 1716 is used to collect the distance between the user and the front of the terminal 1700. In one embodiment, when the proximity sensor 1716 detects that the distance between the user and the front of the terminal 1700 is gradually decreasing, the processor 1701 controls the touch display screen 1705 to switch from the screen-on state to the screen-off state; when the proximity sensor 1716 detects that the distance between the user and the front of the terminal 1700 is gradually increasing, the processor 1701 controls the touch display screen 1705 to switch from the screen-off state to the screen-on state.
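The screen-state switching just described amounts to a small state machine keyed on whether the measured distance is shrinking or growing; the function below is a minimal sketch under that assumption, with distances in arbitrary sensor units.

```python
def next_screen_state(prev_distance: float, distance: float, state: str) -> str:
    """Switch the screen off as the device approaches the user's face,
    and back on as it moves away. States are "on" and "off"."""
    if distance < prev_distance:
        return "off"   # getting closer: turn the screen off
    if distance > prev_distance:
        return "on"    # moving away: turn the screen back on
    return state       # unchanged distance keeps the current state
```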
Those skilled in the art will understand that the structure shown in Figure 17 does not limit the terminal 1700, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
Figure 18 is a structural schematic diagram of a server provided by an embodiment of the present invention. The server 1800 may vary considerably with configuration or performance, and may include one or more processors (Central Processing Units, CPU) 1801 and one or more memories 1802, where the memory 1802 stores at least one instruction that is loaded and executed by the processor 1801 to implement the application character location abnormality recognition method provided by each of the above method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and may further include other components for realizing device functions, which are not described here.
In an exemplary embodiment, a computer-readable storage medium is also provided, for example a memory including instructions, where the instructions can be executed by the processor in a terminal or a server to complete the application character location abnormality recognition method in the above embodiments. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (15)

1. A method for recognizing an abnormal position of an application character, the method comprising:
obtaining a first virtual scene of a target application, the first virtual scene being used to simulate a second virtual scene displayed by the target application on a first device;
obtaining first location information of at least one target application character in the second virtual scene; and
determining a position recognition result of the at least one target application character according to the first location information of the at least one target application character and at least one scene object in the first virtual scene, the position recognition result indicating whether the position of the at least one target application character is abnormal.
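Claim 1's overall check can be sketched as follows. The axis-aligned boxes, the coordinate tuples, and the rule "a position inside solid scene geometry is abnormal" are simplifying assumptions for illustration; the claim itself leaves the concrete recognition strategy to the dependent claims.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box standing in for one scene object
    (a wall, floor, rock, ...) in the mirrored first virtual scene."""
    min_x: float
    min_y: float
    min_z: float
    max_x: float
    max_y: float
    max_z: float

    def contains(self, p) -> bool:
        x, y, z = p
        return (self.min_x <= x <= self.max_x and
                self.min_y <= y <= self.max_y and
                self.min_z <= z <= self.max_z)

def recognize_positions(scene_objects, characters):
    """Map each character id to True (abnormal) when its reported
    (x, y, z) position lies inside any scene object."""
    return {cid: any(box.contains(pos) for box in scene_objects)
            for cid, pos in characters.items()}
```

A character reported inside a wall (for example, via a wall-hack cheat) would be flagged, while one standing in open space would not.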
2. The method according to claim 1, wherein obtaining the first virtual scene of the target application comprises:
obtaining, based on a physics engine component of the target application, second location information of at least one scene object in the second virtual scene, the physics engine component being used to indicate a storage address of the second location information of the at least one scene object; and
creating the first virtual scene of the target application according to the second location information of the at least one scene object.
3. The method according to claim 2, wherein obtaining, based on the physics engine component of the target application, the second location information of the at least one scene object in the second virtual scene comprises:
obtaining a target resource file from the first device on which the target application is installed, according to the storage address indicated by the physics engine component of the target application; and
parsing the second location information of the at least one scene object from the target resource file according to a target format model.
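The parsing step above can be sketched with a trivial JSON layout; this layout is purely an illustrative stand-in, since the claim does not specify what the target format model looks like.

```python
import json

def parse_scene_positions(resource_text: str) -> dict:
    """Parse the second location information of each scene object from
    a target resource file. The JSON layout {"objects": [{"id", "position"}]}
    is an assumed, illustrative format."""
    data = json.loads(resource_text)
    return {obj["id"]: tuple(obj["position"]) for obj in data["objects"]}
```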
4. The method according to claim 3, wherein the target resource file further includes at least one of the shape, orientation, or material information of the at least one scene object.
5. The method according to claim 2, wherein creating the first virtual scene of the target application according to the second location information of the at least one scene object in the second virtual scene comprises:
creating a target virtual space in a physics engine component of a second device; and
adding the at least one scene object to the target virtual space according to the second location information of the at least one scene object, to obtain the first virtual scene of the target application.
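The scene-building step of claim 5 can be sketched as follows; the `VirtualSpace` class is a minimal, hypothetical stand-in for a real physics-engine scene, not an API from any particular engine.

```python
class VirtualSpace:
    """Minimal stand-in for the target virtual space created inside the
    second device's physics engine component."""
    def __init__(self):
        self.objects = {}

    def add(self, obj_id: str, position: tuple) -> None:
        # Register one scene object at its second-location position.
        self.objects[obj_id] = position

def build_first_scene(second_locations: dict) -> VirtualSpace:
    """Populate an empty space with every scene object so the result
    mirrors the second virtual scene shown on the first device."""
    space = VirtualSpace()
    for obj_id, pos in second_locations.items():
        space.add(obj_id, pos)
    return space
```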
6. The method according to claim 1, wherein determining the position recognition result of the at least one target application character according to the first location information of the at least one target application character and the at least one scene object in the first virtual scene comprises:
determining third location information of the at least one scene object in the first virtual scene;
performing abnormal-position recognition on the at least one target application character based on a target recognition strategy, according to the first location information of the at least one target application character and the third location information of the at least one scene object; and
determining that the position of the at least one target application character is abnormal when the position of the at least one target application character overlaps with the position of any scene object.
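The overlap rule in claim 6 reduces to an intersection test between the character's volume and each scene object's volume. The sketch below assumes axis-aligned boxes given as `(min_x, min_y, min_z, max_x, max_y, max_z)` tuples, which is one common simplification rather than the patent's specified representation.

```python
def boxes_overlap(a, b) -> bool:
    """Two axis-aligned boxes overlap exactly when their extents
    intersect on every axis."""
    return (a[0] <= b[3] and b[0] <= a[3] and   # x axis
            a[1] <= b[4] and b[1] <= a[4] and   # y axis
            a[2] <= b[5] and b[2] <= a[5])      # z axis

def character_abnormal(character_box, scene_boxes) -> bool:
    # Per the overlap rule: intersecting any scene object marks the
    # character's position as abnormal.
    return any(boxes_overlap(character_box, s) for s in scene_boxes)
```

A production physics engine would normally provide this query directly (for example, as a broad-phase/narrow-phase collision test) instead of hand-rolled box checks.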
7. The method according to claim 6, wherein performing abnormal-position recognition on the at least one target application character based on the target recognition strategy, according to the first location information of the at least one target application character and the location information of the at least one scene object in the first virtual scene, comprises:
performing abnormal-position recognition on the at least one target application character through the recognition function of the physics engine component of the first device, according to the first location information of the at least one target application character and the location information of the at least one scene object in the first virtual scene, the target recognition strategy being configured in the physics engine component of the first device.
8. The method according to claim 1, wherein obtaining the first location information of the at least one target application character in the second virtual scene comprises at least one of the following:
receiving the first location information of the at least one target application character sent by a third device, the third device being the terminal of the user corresponding to the at least one target application character or a background server of the target application; and
receiving a historical behavior record of the at least one target application character sent by a fourth device, and obtaining the first location information of the at least one target application character from the historical behavior record, the historical behavior record indicating the historical behavior of the at least one target application character in the second virtual scene, and the fourth device being a background server of the target application.
9. A method for recognizing an abnormal position of an application character, the method comprising:
obtaining, based on a physics engine component of a target application, second location information of at least one scene object of the target application, the second location information indicating the position of the scene object in a second virtual scene displayed by the target application on a first device, and the physics engine component indicating a storage address of the second location information;
storing the second location information of the at least one scene object into a target resource file according to a target format model; and
sending the target resource file to a second device, the target resource file being used to instruct the second device to build a first virtual scene and to recognize the position of a target application character based on the first virtual scene.
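The storing step of claim 9 is the inverse of the parsing step in claim 3: serialize each object's second location information into the resource-file body before sending it. The JSON layout below is the same illustrative stand-in for the unspecified target format model.

```python
import json

def export_scene_positions(second_locations: dict) -> str:
    """Serialize the second location information of every scene object
    into a target resource file body. The JSON layout is an assumed,
    illustrative format; sorting makes the output deterministic."""
    payload = {"objects": [{"id": obj_id, "position": list(pos)}
                           for obj_id, pos in sorted(second_locations.items())]}
    return json.dumps(payload)
```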
10. The method according to claim 9, wherein obtaining, based on the physics engine component of the target application, the second location information of the at least one scene object of the target application comprises:
obtaining, when an acquisition instruction of the second device is received, scene data of the second virtual scene from the storage address according to the storage address indicated by the physics engine component of the target application, the scene data indicating the at least one scene object included in the second virtual scene; and
extracting the second location information of the at least one scene object from the scene data.
11. The method according to claim 10, wherein obtaining, when the acquisition instruction is received, the scene data of the second virtual scene from the storage address according to the storage address indicated by the physics engine component of the target application comprises:
displaying a data selection interface when the acquisition instruction is received; and
obtaining target scene data of the second virtual scene from the storage address according to a target type selected from multiple type options in the data selection interface, the target scene data indicating the scene objects of the target type in the second virtual scene.
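The type-based filtering of claim 11 is a simple selection over the scene data; the dict layout with a `"type"` field is an assumption made only for this sketch.

```python
def filter_by_type(scene_objects: list, target_type: str) -> list:
    """Keep only the scene objects whose type matches the selection made
    in the data selection interface."""
    return [obj for obj in scene_objects if obj["type"] == target_type]
```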
12. A device for recognizing an abnormal position of an application character, the device comprising:
an obtaining module, configured to obtain a first virtual scene of a target application, the first virtual scene being used to simulate a second virtual scene displayed by the target application on a first device;
the obtaining module being further configured to obtain first location information of at least one target application character in the second virtual scene; and
a determining module, configured to determine a position recognition result of the at least one target application character according to the first location information of the at least one target application character and at least one scene object in the first virtual scene, the position recognition result indicating whether the position of the at least one target application character is abnormal.
13. A device for recognizing an abnormal position of an application character, the device comprising:
an obtaining module, configured to obtain, based on a physics engine component of a target application, second location information of at least one scene object of the target application, the second location information indicating the position of the scene object in a second virtual scene displayed by the target application on a first device, and the physics engine component indicating a storage address of the second location information;
a storage module, configured to store the second location information of the at least one scene object into a target resource file according to a target format model; and
a sending module, configured to send the target resource file to a second device, the target resource file being used to instruct the second device to build a first virtual scene and to recognize the position of a target application character based on the first virtual scene.
14. An electronic device, comprising one or more processors and one or more memories, the one or more memories storing at least one instruction that is loaded and executed by the one or more processors to implement the operations performed by the application character location abnormality recognition method according to any one of claims 1 to 11.
15. A computer-readable storage medium, storing at least one instruction that is loaded and executed by a processor to implement the operations performed by the application character location abnormality recognition method according to any one of claims 1 to 11.
CN201910199228.0A 2019-03-15 2019-03-15 Application role position abnormity identification method and device, electronic equipment and storage medium Active CN109939442B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910199228.0A CN109939442B (en) 2019-03-15 2019-03-15 Application role position abnormity identification method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109939442A true CN109939442A (en) 2019-06-28
CN109939442B CN109939442B (en) 2022-09-09

Family

ID=67010051


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111680646A * 2020-06-11 2020-09-18 北京市商汤科技开发有限公司 Motion detection method and device, electronic device and storage medium
CN111680646B * 2020-06-11 2023-09-22 北京市商汤科技开发有限公司 Motion detection method and device, electronic device and storage medium
CN112717404A * 2021-01-25 2021-04-30 腾讯科技(深圳)有限公司 Virtual object movement processing method and device, electronic equipment and storage medium
CN112717404B * 2021-01-25 2022-11-29 腾讯科技(深圳)有限公司 Virtual object movement processing method and device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1713201A (en) * 2004-06-23 2005-12-28 世嘉股份有限公司 Online game irregularity detection method
US20100138455A1 (en) * 2008-12-02 2010-06-03 International Business Machines Corporation System and method for detecting inappropriate content in virtual worlds
CN101788909A (en) * 2010-01-28 2010-07-28 北京天空堂科技有限公司 Solving method and device of network game server end walking system
US8948501B1 (en) * 2009-12-22 2015-02-03 Hrl Laboratories, Llc Three-dimensional (3D) object detection and multi-agent behavior recognition using 3D motion data
CN104932872A (en) * 2014-03-18 2015-09-23 腾讯科技(深圳)有限公司 Message processing method and server
CN106955493A (en) * 2017-03-30 2017-07-18 北京乐动卓越科技有限公司 The method of calibration that role moves in a kind of 3D online games
CN108629180A (en) * 2018-03-29 2018-10-09 腾讯科技(深圳)有限公司 The determination method and apparatus of abnormal operation, storage medium, electronic device



Also Published As

Publication number Publication date
CN109939442B (en) 2022-09-09

Similar Documents

Publication Publication Date Title
KR102500722B1 (en) Virtual prop transfer method and device, electronic device and computer storage medium
CN108717733B (en) View angle switch method, equipment and the storage medium of virtual environment
CN105555373B (en) Augmented reality equipment, methods and procedures
CN105378801B (en) Hologram snapshot grid
CN108671543A (en) Labelled element display methods, computer equipment and storage medium in virtual scene
CN108465240A (en) Mark point position display method, device, terminal and computer readable storage medium
CN109200582A (en) The method, apparatus and storage medium that control virtual objects are interacted with ammunition
CN110917616B (en) Orientation prompting method, device, equipment and storage medium in virtual scene
CN109091869A (en) Method of controlling operation, device, computer equipment and the storage medium of virtual objects
CN110276840A (en) Control method, device, equipment and the storage medium of more virtual roles
CN108671545A (en) Control the method, apparatus and storage medium of virtual objects and virtual scene interaction
CN110243386A (en) Navigation information display methods, device, terminal and storage medium
CN110148178A (en) Camera localization method, device, terminal and storage medium
CN109634413B (en) Method, device and storage medium for observing virtual environment
CN110061900A (en) Message display method, device, terminal and computer readable storage medium
CN109615686A (en) Potential determination method, apparatus, equipment and the storage medium visually gathered
CN109646944A (en) Control information processing method, device, electronic equipment and storage medium
CN108694073A (en) Control method, device, equipment and the storage medium of virtual scene
CN111273780B (en) Animation playing method, device and equipment based on virtual environment and storage medium
CN108744510A (en) Virtual objects display methods, device and storage medium
CN108536295A (en) Object control method, apparatus in virtual scene and computer equipment
CN110102052A (en) Virtual resource put-on method, device, electronic device and storage medium
WO2021164315A1 (en) Hotspot map display method and apparatus, and computer device and readable storage medium
CN110393916A (en) Method, apparatus, equipment and the storage medium of visual angle rotation
CN109939442A (en) Using character location abnormality recognition method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant