CN112121414B - Tracking method and device in virtual scene, electronic equipment and storage medium


Info

Publication number
CN112121414B
Authority
CN
China
Prior art keywords
tracking
virtual object
prop
virtual
real
Prior art date
Legal status
Active
Application number
CN202011057660.5A
Other languages
Chinese (zh)
Other versions
CN112121414A (en)
Inventor
姚丽
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202011057660.5A
Publication of CN112121414A
Application granted
Publication of CN112121414B

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 - Processing input control signals of video game devices by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/55 - Controlling game characters or game objects based on the game progress
    • A63F13/80 - Special adaptations for executing a specific game genre or game mode
    • A63F13/837 - Shooting of targets
    • A63F13/847 - Cooperative playing, e.g. requiring coordinated actions from several players to achieve a common goal

Abstract

The application provides a tracking method, a tracking apparatus, an electronic device, and a computer-readable storage medium for a virtual scene. The method includes the following steps: presenting, in a human-computer interaction interface, a virtual scene and a throwing operation control of a tracking prop; in response to a trigger operation on the throwing operation control, controlling a first virtual object to throw the tracking prop in the virtual scene; when the tracking prop reaches a target position and releases a special effect, detecting at least one second virtual object within the special-effect release range; and acquiring the real-time position of the at least one second virtual object in the virtual scene and presenting that real-time position in a tracking interface of the first virtual object. The method and apparatus enable virtual objects to be tracked flexibly on demand while effectively saving computing resources.

Description

Tracking method and device in virtual scene, electronic equipment and storage medium
Technical Field
The present disclosure relates to human-computer interaction technologies, and in particular, to a tracking method and apparatus in a virtual scene, an electronic device, and a computer-readable storage medium.
Background
As display technologies based on graphics processing hardware mature, the environments for perception and the channels for acquiring information keep expanding. In particular, virtual-scene display technology can realize diversified interaction between virtual objects controlled by users or by Artificial Intelligence (AI) according to actual requirements, and has various typical application scenarios; for example, in virtual scenes such as tactical competitive games, the combat process between virtual objects can be simulated.
The solutions provided in the related art offer a uniform position-tracking function for virtual objects in a virtual scene: the positions of all other virtual objects are displayed in the minimap of any given virtual object. This cannot adapt to personalized position-tracking requirements for different objects in the virtual scene, and because the position of every virtual object must be calculated continuously, it heavily consumes the computing resources of the electronic device; especially in a large virtual scene, frequent position calculation may degrade the real-time responsiveness of the virtual scene to human-computer interaction.
Disclosure of Invention
The embodiment of the application provides a tracking method and device in a virtual scene, electronic equipment and a computer-readable storage medium, which can realize a flexible way of acquiring the position of a virtual object.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a tracking method in a virtual scene, which comprises the following steps:
presenting a virtual scene and a throwing operation control for tracking props in a human-computer interaction interface;
in response to a triggering operation for the throwing operation control, controlling a first virtual object to throw the tracking prop in the virtual scene;
when the tracked prop reaches the target position and releases the special effect, detecting at least one second virtual object included in a special effect release range;
and acquiring the real-time position of the at least one second virtual object in the virtual scene, and presenting the real-time position of the at least one second virtual object in the tracking interface of the first virtual object.
The embodiment of the present application provides a tracking apparatus in a virtual scene, including:
a scene presenting module, configured to present, in a human-computer interaction interface, a virtual scene and a throwing operation control of a tracking prop;
a throwing module, configured to control a first virtual object to throw the tracking prop in the virtual scene in response to a triggering operation for the throwing operation control;
the special effect releasing module is used for detecting at least one second virtual object included in a special effect releasing range when the tracking prop reaches a target position and releases a special effect;
and the position presenting module is used for acquiring the real-time position of the at least one second virtual object in the virtual scene and presenting the real-time position of the at least one second virtual object in the tracking interface of the first virtual object.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the tracking method in the virtual scene provided by the embodiment of the application when the executable instructions stored in the memory are executed.
The embodiment of the present application provides a computer-readable storage medium storing executable instructions that, when executed by a processor, implement the tracking method in a virtual scene provided by the embodiment of the present application.
The embodiment of the application has the following beneficial effects:
the method has the advantages that the objects included in the special effect release range of the tracked props are identified as the objects needing position tracking in the virtual scene, so that the effect of flexibly tracking the virtual objects according to needs in the virtual scene is achieved, the requirements of tracking different virtual objects individually in the virtual scene are met, compared with the method for calculating the positions of all the virtual objects in the virtual object, the calculation resources are effectively saved, and the real-time performance of the virtual scene for responding to man-machine interaction operation is improved.
Drawings
Fig. 1 is a schematic architecture diagram of a tracking system in a virtual scene according to an embodiment of the present application;
fig. 2A is a schematic structural diagram of a terminal device according to an embodiment of the present application;
FIG. 2B is a schematic diagram of an architecture of a human-computer interaction engine provided in an embodiment of the present application;
fig. 3A is a schematic flowchart of a tracking method in a virtual scene according to an embodiment of the present disclosure;
fig. 3B is a schematic flowchart of a tracking method in a virtual scene according to an embodiment of the present disclosure;
fig. 3C is a schematic flowchart of a tracking method in a virtual scene according to an embodiment of the present disclosure;
fig. 3D is a schematic flowchart of a tracking method in a virtual scene according to an embodiment of the present disclosure;
fig. 3E is a schematic flowchart of a tracking method in a virtual scene according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a tracking method in a virtual scene according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a tactical gear interface provided by an embodiment of the present application;
FIG. 6 is a schematic view of a game interface including a toggle operation control provided by an embodiment of the present application;
FIG. 7 is a schematic view of a gaming interface including a pitch operation control provided by embodiments of the present application;
FIG. 8 is a schematic view of a game interface including a simulated throwing track provided by an embodiment of the present application;
fig. 9 is a schematic diagram of a drawing interface for simulating a throwing track provided by an embodiment of the present application;
FIG. 10 is a schematic view of a configuration parameter interface of a crash box provided by an embodiment of the present application;
FIG. 11 is a schematic view of a game interface including an infection scope provided by embodiments of the present application;
FIG. 12 is a schematic illustration of the infection ranges provided by the examples of the present application;
FIG. 13 is a schematic illustration of a minimap including the location of an infected enemy provided by an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the attached drawings, the described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second", and "third" are used only to distinguish similar objects and do not denote a particular order; it should be understood that "first", "second", and "third" may be interchanged in a particular order or sequence where permitted, so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described. In the following description, the term "plurality" means at least two.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Virtual scene: a scene, output by a device, that differs from the real world. Visual perception of the virtual scene can be formed with the naked eye or with the assistance of a device, for example a two-dimensional image output by a display screen, or a three-dimensional image output by stereoscopic display technologies such as stereoscopic projection, virtual reality, and augmented reality; in addition, various perceptions simulating the real world, such as auditory, tactile, olfactory, and motion perception, can be formed through various possible hardware. The virtual scene may be a simulation of the real world, a semi-simulated and semi-fictional environment, or a purely fictional environment. It may be any of a two-dimensional, 2.5-dimensional, or three-dimensional virtual scene; the embodiment of the present application does not limit the dimensionality of the virtual scene. For example, a virtual scene may include sky, land, and ocean; the land may include environmental elements such as deserts and cities; and virtual objects may move or perform other operations (e.g., attack operations) in the virtual scene under the control of a user or an AI.
2) In response to: indicates the condition or state on which a performed operation depends; when the condition or state is satisfied, the operation(s) may be performed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are performed.
3) Client: an application program running on a terminal device to provide various services, such as a shooting-game client.
4) Virtual object: the image of any person or thing that can interact in the virtual scene, or a movable object in the virtual scene. The movable object may be a virtual character, a virtual animal, an animation character, and so on, such as a person, animal, plant, oil drum, wall, or stone displayed in the virtual scene. The virtual object may be an avatar that represents the user in the virtual scene. The virtual scene may include multiple virtual objects, each of which has its own shape and volume and occupies part of the space in the virtual scene.
For example, the virtual object may be a user Character controlled by an operation on the client, an Artificial Intelligence (AI) set in a virtual scene match by training, or a Non-user Character (NPC) set in a virtual scene interaction. For example, the virtual object may be a virtual character that is confrontationally interacted with in a virtual scene. For example, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction.
Taking a shooting game as an example, the user may control a virtual object to fall freely, glide, or open a parachute in the sky of the virtual scene; to run, jump, crawl, or move in a crouch on land; or to swim, float, or dive in the ocean. Of course, the user may also control a virtual object to ride a virtual vehicle, such as a virtual car, a virtual aircraft, or a virtual yacht, to move in the virtual scene; the scenes above are merely examples and do not limit the present invention. The user can also control the virtual object to interact antagonistically with other virtual objects through virtual props; for example, the virtual prop may be a throwable virtual prop such as a grenade, a cluster grenade, or a sticky grenade, or a shooting virtual prop such as a machine gun, a pistol, or a rifle. The type of virtual prop is not specifically limited in this application.
5) Tracking prop: a virtual prop used to implement a tracking function. The type of tracking prop is not limited: it may be a throwable virtual prop such as a tracking grenade, or a shooting-type virtual prop such as a tracking firearm, a tracking bullet, or a tracking grenade round.
The embodiments of the application provide a tracking method and apparatus in a virtual scene, an electronic device, and a computer-readable storage medium, which can expand the ways in which a virtual object's position is acquired and improve the human-computer interaction experience. Exemplary applications of the electronic device provided in the embodiments of the present application are described below; the electronic device may be implemented as various types of terminal device, or as a server.
Referring to fig. 1, fig. 1 is an architectural diagram of a tracking system 100 in a virtual scene provided in an embodiment of the present application, a terminal device 400 is connected to a server 200 through a network 300, and the network 300 may be a wide area network or a local area network, or a combination of the two.
In some embodiments, taking an electronic device as a terminal device as an example, the tracking method in a virtual scene provided in the embodiments of the present application may be implemented by the terminal device, and is suitable for some practical application scenes in which the calculation of related data of the virtual scene can be completed by completely depending on the local computing capability of the terminal device 400, for example, a game in a standalone/offline mode completes the output of the virtual scene through the terminal device 400.
When the visual perception of the virtual scene needs to be formed, the terminal device 400 calculates and displays required data through the graphic calculation hardware, completes the loading, analysis and rendering of the display data, and outputs a video frame capable of forming the visual perception on the virtual scene at the graphic output hardware, for example, a two-dimensional video frame is displayed on a display screen of a smart phone, or a video frame realizing a three-dimensional display effect is projected on a lens of augmented reality/virtual reality glasses; furthermore, to enrich the perception effect, the terminal device may also form one or more of auditory perception (e.g., by a microphone), tactile perception (e.g., by a vibrator), motion perception, and taste perception by means of different hardware.
As an example, as shown in fig. 1, the terminal device 400 runs a client 410 (e.g., a standalone game application) and outputs a virtual scene 500 while the client 410 runs. The virtual scene 500 is an environment in which virtual objects (e.g., game characters) interact, such as a plain, a street, or a valley where virtual objects fight. The virtual scene 500 includes a first virtual object 510 and a tracking prop 520. The first virtual object 510 may be a game character controlled by a user (or player); that is, the first virtual object 510 is controlled by a real user and moves in the virtual scene in response to the real user's operation of a controller (including a touch screen, a voice-operated switch, a keyboard, a mouse, a joystick, and the like). For example, when the real user moves the joystick to the left, the first virtual object 510 moves to the left in the virtual scene; the object may also stay in place, jump, and use various functions (such as skills and props). Of course, the first virtual object 510 may also be controlled by an AI. The tracking prop 520 may be a battle tool used by the first virtual object 510 in the virtual scene 500; for example, the first virtual object 510 may pick up the tracking prop 520 in the virtual scene and use its function during the game session, or the first virtual object 510 may be equipped with the tracking prop 520 before the game session starts.
For example, when a user controls first virtual object 510 to throw tracking prop 520 through client 410, client 410 presents a process of tracking prop 520 motion in a human-machine interaction interface. When tracking prop 520 reaches the target location, client 410 releases the special effect, and detects at least one second virtual object included in the special effect release range, which is exemplified by a second virtual object 530 in fig. 1, wherein second virtual object 530 is controlled by a real user or an AI. The client 410 then presents the real-time location of the second virtual object 530 in the tracking interface 540 of the first virtual object 510. Here, the client 410 may continue to present the real-time location of the first virtual object 510 itself in the tracking interface 540.
In some embodiments, taking an electronic device as a server as an example, the tracking method in a virtual scene provided in the embodiments of the present application may be cooperatively implemented by the server and a terminal device, and is suitable for completing virtual scene calculation depending on the computing power of the server 200 and outputting an actual application scene of the virtual scene at the terminal device 400.
Taking the formation of visual perception of the virtual scene as an example, the server 200 calculates the display data related to the virtual scene and sends it to the terminal device 400; the terminal device 400 relies on graphics computing hardware to load, parse, and render the computed display data, and relies on graphics output hardware to output the virtual scene to form visual perception.
The terminal device 400 can run a client 410 (e.g., a network-version game application) and, by connecting to the game server (i.e., the server 200), output the virtual scene 500 in the human-computer interaction interface for game interaction with other users or with the AI of the server 200. For example, when the user triggers the throwing operation control of the tracking prop 520 on the first virtual object 510 through the client 410, the client 410 sends the received trigger operation to the server 200 through the network 300, and the server 200 computes the motion trajectory of the tracking prop 520 according to preset throwing logic and sends it to the client 410, so that the client 410 displays the tracking prop 520 moving along the trajectory. When determining that the tracking prop 520 has reached the target position, the server 200 sends an instruction to the client 410 to release the special effect, and detects at least one second virtual object included in the special-effect release range, exemplified by the second virtual object 530 in fig. 1. The server 200 may then send the real-time position of the second virtual object 530 to the client 410 for rendering in the tracking interface 540 (which may be a minimap). Here, the server 200 may also continuously send the real-time position of the first virtual object 510 itself to the client 410, so that the client 410 presents it continuously in the tracking interface 540.
It should be noted that, in fig. 1, the virtual scene 500 is observed from the perspective of the first virtual object 510, which is a first-person perspective, but this does not limit the embodiment of the present application, and in an actual application scene, the virtual scene 500 may also be observed from the perspective of a third person.
In some embodiments, the terminal device 400 may implement the tracking method in a virtual scene provided by the embodiment of the present application by running a computer program. For example, the computer program may be a native program or software module in an operating system; a Native Application (APP), i.e., a program that must be installed in the operating system to run, such as a game application (i.e., the client 410 described above); an applet, i.e., a program that only needs to be downloaded into the browser environment to run; or a game applet that can be embedded in any APP. In general, the computer program may be any form of application, module, or plug-in. As for the game application, it may be any of a First-Person Shooting (FPS) game, a Third-Person Shooting (TPS) game, a Multiplayer Online Battle Arena (MOBA) game, or a multiplayer gunfight survival game, which is not limited here.
For example, in a game scenario, tracking prop 520 may be used to simulate a nano-robot or other equipment with tracking functionality. After determining the second virtual object 530 included in the special effect release range of the tracking prop 520, the real-time position of the second virtual object 530 is presented in the tracking interface 540 of the first virtual object 510, so as to simulate the process that the nano robot is attached to the second virtual object 530 and continuously sends real-time position data to the outside (i.e., to the first virtual object 510).
The embodiments of the present application may be implemented by means of cloud technology, which refers to a hosting technology that unifies resources such as hardware, software, and networks within a wide-area or local-area network to realize data computation, storage, processing, and sharing. In another sense, cloud technology is also a general term for network, information, integration, management-platform, and application technologies based on the cloud-computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing will become an important support, since the background services of technical network systems require large amounts of computing and storage resources.
In some embodiments, the server 200 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform, for example, the cloud service may be a service of a virtual scene, and is called by the terminal device 400 to send display data related to the virtual scene to the terminal device 400. The terminal device 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a smart television, and the like. The terminal device 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited thereto.
The following description takes the electronic device provided in the embodiment of the present application being a terminal device as an example; it can be understood that, when the electronic device is a server, parts of the structure shown in fig. 2A (e.g., the user interface, the presentation module, and the input processing module) may be omitted. Referring to fig. 2A, fig. 2A is a schematic structural diagram of a terminal device 400 provided in an embodiment of the present application. The terminal device 400 shown in fig. 2A includes: at least one processor 410, a memory 450, at least one network interface 420, and a user interface 430. The various components in the terminal device 400 are coupled together by a bus system 440. It is understood that the bus system 440 is used to enable communications among the components; in addition to a data bus, the bus system 440 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as bus system 440 in fig. 2A.
The Processor 410 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 430 includes one or more output devices 431, including one or more speakers and/or one or more visual displays, that enable the presentation of media content. The user interface 430 also includes one or more input devices 432, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 450 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 450 optionally includes one or more storage devices physically located remote from processor 410.
The memory 450 includes either volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 450 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 450 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 451, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
a network communication module 452 for reaching other computing devices via one or more (wired or wireless) network interfaces 420, exemplary network interfaces 420 including: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
a presentation module 453 for enabling presentation of information (e.g., user interfaces for operating peripherals and displaying content and information) via one or more output devices 431 (e.g., display screens, speakers, etc.) associated with user interface 430;
an input processing module 454 for detecting one or more user inputs or interactions from one of the one or more input devices 432 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided by the embodiments of the present application may be implemented in software, and fig. 2A illustrates a tracking apparatus 455 stored in a virtual scene in a memory 450, which may be software in the form of programs and plug-ins, and includes the following software modules: a scene rendering module 4551, a throwing module 4552, a special effects release module 4553 and a position rendering module 4554, which are logical and thus may be arbitrarily combined or further separated according to the functions implemented. The functions of the respective modules will be explained below.
Referring to fig. 2B, fig. 2B is a schematic diagram of a human-computer interaction engine in a tracking device in a virtual scene according to an embodiment of the present application; when the virtual scene is a game virtual scene, the human-computer interaction engine may be a game engine. A game engine is the core component of a computer game system or interactive real-time image application: it provides game designers with the various tools required to write games, with the goal of letting them produce game programs easily and quickly without starting from scratch, and it also drives the running of the game. A game engine includes, but is not limited to, a rendering engine (i.e., a "renderer", including two-dimensional and three-dimensional image engines), a physics engine, a collision detection component, special effects, sound effects, a script engine, computer animation, artificial intelligence, a network engine, and scene management; at the bottom level, a game engine is a set of code (instructions) recognizable by the machine. A game application may include two major components, the game engine and the game resources, where the game resources include images, sounds, animations, and so on, and the game engine calls these resources in order according to the requirements of the game design (i.e., according to the designed program code).
The tracking method in the virtual scene provided in the embodiment of the present application may be implemented by each module in the tracking device 455 in the virtual scene shown in fig. 2A calling the relevant components of the game engine shown in fig. 2B, and is exemplified below.
For example, the scene presentation module 4551 is configured to invoke the user-interface component in the game engine to implement interaction between the user and the game, and to invoke the model component in the game engine to build two-dimensional or three-dimensional models. After a model is built, material maps are assigned to its faces through the skeletal-animation component, which is equivalent to covering the skeleton with skin; finally, all effects of the model, such as animation, lighting and shadow, and special effects, are computed in real time through the rendering component and displayed in the human-computer interaction interface. In this way, the virtual scene and the different types of content it includes, such as virtual objects, virtual props, and buildings, can be displayed in the human-computer interaction interface.
The throwing module 4552 is configured to, in response to a trigger operation on the throwing operation control, present the process in which the first virtual object throws the tracking prop in the virtual scene so that the tracking prop moves. Here, the rendering component in the game engine may be invoked to perform real-time image calculation based on the computed motion trajectory and display the motion of the tracking prop in the human-computer interaction interface.
The special effect releasing module 4553 is configured to call a camera component and a collision detection component in the game engine, and determine whether the tracked prop reaches the target position. And when the tracked prop reaches the target position, calling the rendering component to render the special effect so as to display the special effect in the human-computer interaction interface. In addition, the special effect release module 4553 is further configured to detect at least one second virtual object included in the special effect release range, for example, an underlying algorithm component of the game engine may be invoked to determine whether a virtual object other than the first virtual object is located in the special effect release range.
The position presenting module 4554 is configured to invoke the rendering component to render the real-time position of the at least one second virtual object into the tracking interface of the first virtual object. The above examples do not limit the embodiments of the present application, and the calling relationship of each component included in the game engine and each module in the tracking device 455 in the virtual scene to the component in the game engine may be adjusted according to the actual application scene.
The tracking method in the virtual scene provided by the embodiment of the present application will be described in conjunction with exemplary applications and implementations of the electronic device provided by the embodiment of the present application.
Referring to fig. 3A, fig. 3A is a schematic flowchart of a tracking method in a virtual scene according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 3A.
In step 101, a virtual scene and a throwing operation control for tracking props are presented in a human-computer interaction interface.
Here, a virtual scene including a plurality of virtual objects is presented in the human-machine interaction interface, and taking the human-machine interaction interface as an interface of the game application as an example, the virtual scene including a plurality of virtual objects (such as game characters) can be presented when the game is played. For the convenience of distinguishing, a virtual object in the game application program controlled by the terminal device is named as a first virtual object, and when the virtual scene is presented, one way is to present a partial virtual scene observed from the perspective of the first virtual object (namely, the first person perspective) in the full virtual scene; another way is to present a partial virtual scene of the full-scale virtual scene, viewed from a perspective of a third person, in which the first virtual object is entirely visible.
When the virtual scene is presented, a throwing operation control of the tracking prop held by the first virtual object is also presented. The embodiment of the application does not limit the type of the tracking prop: for example, it may be a throwable virtual prop such as a tracking grenade, or a shooting-type virtual prop such as a tracking firearm or a tracking bullet (where the tracking prop is a shooting-type virtual prop, the throwing operation control may be replaced by a shooting operation control). The form of presentation of the throwing operation control is likewise not limited; it may, for example, take the form of a circular button. The tracking prop may be equipped in advance for the first virtual object (for example, manually selected by the user before the game session begins), or picked up by the first virtual object in the virtual scene.
In addition to the throwing operation control, more content can be presented according to the actual application scene, such as the armor currently equipped on the first virtual object (e.g., body armor) or other virtual props (e.g., blast grenades or flash grenades).
In some embodiments, the presenting of a virtual scene and a throwing operation control of a tracking prop in the human-computer interaction interface described above can be implemented as follows: presenting, in the human-computer interaction interface, a virtual scene and a switching operation control of the tracking prop; in response to a trigger operation on the switching operation control, controlling the first virtual object to switch the held virtual prop to the tracking prop; and presenting the throwing operation control of the tracking prop in the human-computer interaction interface.
Here, the first virtual object may not hold the tracking prop by default; if the virtual prop held by default is, say, a firearm, then the switching operation control of the tracking prop is presented while the virtual scene is presented in the human-computer interaction interface. When a trigger operation on the switching operation control is received, such as a click, the process of switching the held virtual prop to the tracking prop is displayed, and at the same time the throwing operation control of the tracking prop is displayed in the human-computer interaction interface, so that the user can throw the tracking prop through it, as the sketch below shows. In this way, the tracking prop and the throwing operation control are displayed only when a certain condition is met, which suits scenes where the tracking prop is used infrequently (for example, firearms are used more often than tracking props in a shooting game) and avoids the misleading operations that continuously presenting the throwing operation control could cause.
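To make this control flow concrete, the following is a minimal Python sketch of the switch-then-reveal logic just described; it is an illustration only, and every name in it (Client, on_switch_control_triggered, and so on) is a hypothetical assumption rather than the patent's implementation.

    class Client:
        """Minimal sketch of the prop-switching flow (all names hypothetical)."""

        def __init__(self, first_virtual_object):
            self.player = first_virtual_object
            self.throw_control_visible = False  # hidden while, e.g., a firearm is held

        def on_switch_control_triggered(self):
            # Switch the held virtual prop (e.g., a firearm) to the tracking prop.
            self.player.held_prop = self.player.inventory["tracking_prop"]
            # Only now present the throwing operation control, so it is not
            # shown continuously for a prop that is used infrequently.
            self.throw_control_visible = True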
In some embodiments, before the virtual scene and the switching operation control of the tracking prop are presented in the human-computer interaction interface, the method further includes: presenting, in the human-computer interaction interface, description information of a plurality of virtual props including the tracking prop. The presenting of the virtual scene and the switching operation control of the tracking prop in the human-computer interaction interface can then be implemented as follows: in response to a selection operation on the tracking prop, presenting the virtual scene and the switching operation control of the tracking prop in the human-computer interaction interface.
Here, before the virtual scene is presented, the description information of the plurality of virtual props including the tracking prop may be presented in the human-computer interaction interface; for example, a selection interface is presented before the game session begins, containing the description information of the plurality of virtual props, so that the user can select virtual props to equip the first virtual object with before entering the session. The description information may include at least one of a damage value, a range, and a prop function. When the selection operation on the tracking prop is received, the virtual scene and the switching operation control of the tracking prop are presented in the human-computer interaction interface.
Of course, the above example does not limit the embodiment of the present application; for example, the description information of the plurality of virtual props including the tracking prop may also be presented after the virtual scene is presented, such as in a mall interface containing the description information, where purchasing a virtual prop costs virtual resources. When a selection operation on the tracking prop (such as a purchase operation) is received, the switching operation control of the tracking prop is presented in the human-computer interaction interface. In this way, the switching operation control is presented after the user selects the tracking prop, which meets the user's actual needs and improves the rationality of the presented content.
In step 102, in response to a triggering operation for the throwing operation control, the first virtual object is controlled to throw the tracking prop in the virtual scene.
Here, the trigger operation may be a click operation, a long-press operation, or the like, without limitation. When the trigger operation on the throwing operation control is received, the first virtual object is controlled to throw the tracking prop in the virtual scene. Here, the process of the tracking prop's motion can be displayed in the human-computer interaction interface, for example the process of the first virtual object throwing out a throwable tracking prop, or the process of the first virtual object shooting out a shooting-type tracking prop.
In some embodiments, before controlling the first virtual object to throw the tracking prop in the virtual scene, further comprises: acquiring an initial position, an initial motion direction, an initial speed and an acceleration of a tracked prop to determine a simulated motion track of the tracked prop; and presenting the simulated motion trail in the human-computer interaction interface.
For example, the trigger operation on the throwing operation control includes a first trigger operation and a second trigger operation, where the first trigger operation is a long-press operation and the second trigger operation is the release of the long press. When the first trigger operation on the throwing operation control is received, the initial position, initial motion direction, initial speed, and acceleration of the tracking prop are acquired, and the simulated motion trajectory of the tracking prop is determined in combination with actual physical laws. The initial speed may be preset; the acceleration includes the acceleration of gravity and a throwing acceleration in the initial motion direction, which may likewise be preset. In determining the simulated motion trajectory, trajectory points can be determined at a set time or distance interval, and the simulated motion trajectory drawn through these points. Here, the end position of the simulated motion trajectory may be the position at which it first collides with a supporting surface or an obstacle in the virtual scene.
It is worth noting that a prop direction control (for example, a direction joystick for the tracking prop) may also be presented in the human-computer interaction interface. While the first trigger operation on the throwing operation control is being received, the initial motion direction of the tracking prop can be adjusted according to the received direction-adjustment operation on the prop direction control, where the prop direction control may be located outside or inside the throwing operation control. In addition, an object direction control (such as a direction joystick for the first virtual object) can be presented in the human-computer interaction interface; while the first trigger operation on the throwing operation control is being received, the initial position of the tracking prop can be adjusted according to the received direction-adjustment operation on the object direction control. The object direction control is used to adjust the position of the first virtual object, and the initial position of the tracking prop changes correspondingly as the position of the first virtual object changes.
The simulated motion trajectory is presented in the human-computer interaction interface while the first trigger operation on the throwing operation control is being received. When the second trigger operation on the throwing operation control is received, the process of the first virtual object throwing the tracking prop in the virtual scene, so that the tracking prop moves, is presented. In this way, the user can preview the motion trajectory of the tracking prop before it is thrown, which makes it convenient to adjust the prop's initial position and initial motion direction in a targeted way and improves the human-computer interaction experience. A sketch of such a trajectory preview follows.
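The sketch below samples trajectory points at a fixed time interval under constant acceleration (gravity plus a preset throwing acceleration along the initial motion direction) and stops at the first collision with a supporting surface or obstacle. The function and parameter names, the numeric defaults, and the collides() scene query are assumptions made for illustration, not the patent's implementation; positions and directions are assumed to be numpy arrays.

    import numpy as np

    GRAVITY = np.array([0.0, -9.8, 0.0])  # assumed world units per second squared

    def simulate_throw_trajectory(origin, direction, initial_speed, throw_accel,
                                  collides, dt=0.05, max_t=5.0):
        """Sample the simulated motion trajectory of a thrown tracking prop.

        collides(p) is a hypothetical scene query that returns True when point
        p hits a supporting surface or an obstacle; the first such point is the
        end position of the simulated trajectory.
        """
        direction = direction / np.linalg.norm(direction)
        velocity = initial_speed * direction
        accel = GRAVITY + throw_accel * direction  # gravity + throwing acceleration
        pos = np.array(origin, dtype=float)
        points, t = [pos.copy()], 0.0
        while t < max_t:
            pos = pos + velocity * dt + 0.5 * accel * dt * dt
            velocity = velocity + accel * dt
            points.append(pos.copy())
            if collides(pos):  # the first hit ends the trajectory
                break
            t += dt
        return points  # trajectory points used to draw the preview curve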
In step 103, when the tracked prop reaches the target position and releases the special effect, at least one second virtual object included in the special effect release range is detected.
Whether the tracking prop has reached the target position can be decided by corresponding preset conditions: for example, when the time since the tracking prop was thrown reaches a set throwing duration, the prop's real-time position is taken as the target position; or, when the flight distance of the thrown tracking prop reaches the set range, the prop's real-time position is taken as the target position.
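A minimal sketch of the two judgment conditions just mentioned (elapsed throw time, or flight distance reaching the set range); the thresholds and attribute names are assumptions for illustration.

    import math

    def has_reached_target(prop, max_throw_time=3.0, max_range=30.0):
        """True when the tracking prop's real-time position becomes the target position."""
        elapsed = prop.now - prop.throw_time  # time since the prop was thrown
        flown = math.dist(prop.position, prop.throw_position)  # distance travelled
        return elapsed >= max_throw_time or flown >= max_range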
When the tracked prop reaches the target position, the set special effect resource is called to release the special effect, namely the special effect is displayed in the virtual scene. Meanwhile, whether a second virtual object different from the first virtual object is included in the special effect release range is detected.
In step 104, the real-time position of the at least one second virtual object in the virtual scene is obtained, and the real-time position of the at least one second virtual object is presented in the tracking interface of the first virtual object.
When at least one second virtual object is included in the special-effect release range, the real-time position of the at least one second virtual object in the virtual scene is acquired for presentation in the tracking interface of the first virtual object. The tracking interface may include a thumbnail map of the virtual scene (either the full virtual scene or a partial virtual scene), and of course may also include the actual map. The tracking interface of the first virtual object may be presented continuously (i.e., always shown), for example throughout the game session; alternatively, it may begin to be displayed when it is determined that the special-effect release range includes at least one second virtual object, and stop being displayed at the same time the presentation of the at least one second virtual object's real-time position stops.
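A one-function sketch of the two display policies for the tracking interface just described (shown continuously, or shown only while tracked positions exist); the policy names are hypothetical.

    def tracking_interface_visible(policy, tracked_objects):
        # "continuous": shown throughout the game session;
        # "on_demand": shown only while at least one second virtual object is tracked.
        return True if policy == "continuous" else len(tracked_objects) > 0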
In some embodiments, before the real-time position of the at least one second virtual object is presented, the method further includes: mapping the first real-time position of the at least one second virtual object in the actual map according to the position mapping relation between the actual map of the virtual scene and the thumbnail map of the tracking interface, to obtain a second real-time position for presentation in the thumbnail map.
For the case where the tracking interface includes a thumbnail map of the virtual scene and the acquired real-time position of the at least one second virtual object (named the first real-time position for ease of distinction) lies in the actual map, the first real-time position may be mapped according to the position mapping relation (i.e., the scale) between the actual map and the thumbnail map to obtain a second real-time position for presentation in the thumbnail map. In this way, the accuracy of the finally presented second real-time position can be improved. A sketch of this mapping follows.
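For instance, if the thumbnail map is a uniformly scaled-down version of the actual map, the position mapping relation reduces to a per-axis scale factor; a sketch under that assumption, with hypothetical names:

    def to_thumbnail(first_real_time_pos, actual_map_size, thumb_size):
        """Map a first real-time position in the actual map to the thumbnail map."""
        x, y = first_real_time_pos
        scale_x = thumb_size[0] / actual_map_size[0]  # the scale of the minimap
        scale_y = thumb_size[1] / actual_map_size[1]
        return (x * scale_x, y * scale_y)  # second real-time position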
As shown in fig. 3A, in the embodiment of the application the first virtual object can be controlled to throw the tracking prop so as to track the second virtual object, improving the individualization and flexibility of tracking; compared with uniformly presenting the real-time positions of all virtual objects, this saves computing resources, improves the real-time responsiveness of the virtual scene to human-computer interaction, and improves the user experience.
In some embodiments, referring to fig. 3B, fig. 3B is a schematic flowchart of a tracking method in a virtual scene provided in an embodiment of the present application, and step 103 shown in fig. 3A may be implemented by steps 201 to 204, which will be described with reference to the steps.
In step 201, when the tracked prop reaches the target position, a special effect radiation distance of the tracked prop is obtained.
Here, the special effect radiation distance of the tracked prop can be obtained by querying attribute data of the tracked prop, and the attribute data of the tracked prop can be preset.
In step 202, a circle centered on the target position with the special-effect radiation distance as its radius is set as the special-effect release range, and any one of a particle special effect and a smoke special effect is released within that range.
Here, the special effect release range may be a circle having the target position as the center and the special effect radiation distance as the radius, but this does not limit the embodiment of the present application, and may be, for example, a square having the target position as the center and the special effect radiation distance as the long side. The type of the specific effect of the release is not limited in the embodiments of the present application, and may be any one of a particle specific effect and a smoke specific effect, for example.
In step 203, real-time positions of a plurality of virtual objects in the virtual scene distinct from the first virtual object are obtained.
Here, depending on the actual application scenario, what is acquired may be the real-time positions of all virtual objects in the virtual scene other than the first virtual object (without distinguishing cooperative from adversarial relationships, i.e., unified tracking), or of all virtual objects that differ from the first virtual object and have an adversarial relationship with it (tracking only adversarial virtual objects), or of all virtual objects that differ from the first virtual object and have a cooperative relationship with it (tracking only cooperative virtual objects).
For example, in a game session, multiple virtual objects can be added to two opposing groups (camps), group A and group B; then any two virtual objects within group A have a cooperative relationship, while a virtual object in group A and a virtual object in group B have an adversarial relationship, as the sketch below shows.
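Under that grouping, the relationship test implied here reduces to comparing group membership; a one-line sketch with hypothetical names:

    def is_adversarial(obj_a, obj_b):
        # Objects in the same group (camp) cooperate; objects in different groups oppose.
        return obj_a.group_id != obj_b.group_id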
In step 204, when the distance between the real-time position of any one of the plurality of virtual objects and the target position is less than or equal to the special-effect radiation distance, that virtual object is identified as a second virtual object included in the special-effect release range.
For example, after the real-time positions of a plurality of virtual objects distinguished from the first virtual object are acquired, the distance between each real-time position and the target position is calculated. When the calculated distance is less than or equal to the special effect radiation distance, a corresponding virtual object (referring to a virtual object different from the first virtual object) is recognized as a second virtual object included in the special effect release range.
It should be noted that, when the real-time positions of all the virtual objects different from the first virtual object are obtained in step 203, the second virtual object obtained in step 204 may include at least one of a virtual object having a cooperative relationship with the first virtual object and a virtual object having an antagonistic relationship with the first virtual object.
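Putting steps 201 to 204 together, the following is a minimal sketch of detecting the second virtual objects within a circular special-effect release range; scene.objects and the attribute names are assumptions for illustration, not the patent's implementation.

    import math

    def detect_tracked_objects(scene, first_object, target_pos, radiation_dist):
        """Identify second virtual objects inside the special-effect release range."""
        tracked = []
        for obj in scene.objects:
            if obj is first_object:
                continue  # only objects distinct from the thrower are candidates
            # Step 204: the object lies within the circle centered on the target
            # position whose radius is the special-effect radiation distance.
            if math.dist(obj.position, target_pos) <= radiation_dist:
                tracked.append(obj)
        return tracked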
As shown in fig. 3B, in the embodiment of the present application, the target position is used as the center to release the special effect, so that the visual effect of human-computer interaction can be improved, and after the tracked prop is thrown, a user can accurately know whether the second virtual object is tracked.
In some embodiments, referring to fig. 3C, fig. 3C is a schematic flowchart of a tracking method in a virtual scene provided in this application. Step 103 shown in fig. 3A may be updated to step 301: when the tracking prop reaches the target position and releases the special effect, a second virtual object that is included in the special-effect release range and has an antagonistic relationship with the first virtual object is detected.
In the embodiment of the present application, the tracking function provided by the tracking prop may track any virtual object different from the first virtual object (without distinguishing cooperative from antagonistic relationships), may track only virtual objects having a cooperative relationship with the first virtual object, or may track only virtual objects having an antagonistic relationship with the first virtual object. The last case is taken as an example here: when the tracking prop reaches the target position and releases the special effect, a virtual object that is included in the special-effect release range and has an antagonistic relationship with the first virtual object is detected; for ease of distinction, the detected virtual object is called the second virtual object.
In fig. 3C, step 104 shown in fig. 3A can be updated to step 302. In step 302, the real-time position in the virtual scene of the second virtual object having the antagonistic relationship is obtained, and that real-time position is presented in the tracking interface of the first virtual object.
Here, the real-time position of the second virtual object having the antagonistic relationship may be presented continuously in the tracking interface of the first virtual object, or a stop-presentation condition may be set for it so that the presentation is time-limited.
In some embodiments, the method further includes, between any of the above steps: continuously presenting, in the tracking interface of the first virtual object, the real-time position of at least part of the virtual objects having a cooperative relationship with the first virtual object.
When only virtual objects having an antagonistic relationship with the first virtual object are tracked, the real-time positions of at least some of the virtual objects having a cooperative relationship with the first virtual object may still be presented continuously in the tracking interface of the first virtual object. For example, among the virtual objects having a cooperative relationship with the first virtual object, the real-time positions of all of them may be presented continuously; or only of those whose distance from the first virtual object is less than or equal to a set distance threshold; or only of at least one designated virtual object. In this way, the user controlling the first virtual object can conveniently learn the positions of teammates and formulate a corresponding tactical strategy, such as a regrouping strategy, which improves the human-computer interaction experience.
In fig. 3C, after step 301, the real-time position of the second virtual object having the antagonistic relationship may also be presented, in step 303, in the tracking interface of a virtual object having a cooperative relationship with the first virtual object.
In the embodiment of the present application, the tracked real-time position may also be shared; for example, the real-time position of the second virtual object having the antagonistic relationship is presented in the tracking interfaces of all virtual objects having a cooperative relationship with the first virtual object. In this way, the users controlling the teammates of the first virtual object can conveniently learn the opponent's position and formulate corresponding tactical strategies, such as a surprise-attack or sneak-attack strategy, which improves the overall antagonism between different groups.
In fig. 3C, after step 303, in step 304, when the duration for which the real-time position of the second virtual object having the antagonistic relationship has been presented reaches the tracking duration, presentation of that real-time position is stopped.
Here, the stop-presentation condition may be a tracking duration, which may be included in the attribute data of the tracking prop. When the duration for which the real-time position of the second virtual object having the antagonistic relationship has been presented reaches the tracking duration (i.e., the stop-presentation condition is satisfied), presentation of the real-time position of the second virtual object is stopped.
Of course, this does not limit the stop-presentation condition. For example, presentation of the real-time position of the second virtual object may be stopped when that position moves beyond the tracking range (which may likewise be included in the attribute data of the tracking prop); or when the second virtual object having the antagonistic relationship is injured; or in response to the second virtual object using an elimination prop (for eliminating the tracking effect) corresponding to the tracking prop. These stop-presentation conditions may be applied individually or in combination; for example, the stop-presentation condition may include both the tracking duration and the tracking range.
It should be noted that the same stop-presentation condition may be applied to the tracking interface of the first virtual object itself and to the tracking interfaces of the virtual objects having a cooperative relationship with it, so as to keep the shared position consistent.
As shown in fig. 3C, tracking the virtual objects having an antagonistic relationship with the first virtual object can improve the overall antagonism between different groups and stimulate users' enthusiasm for human-computer interaction, making it suitable for game competition involving a plurality of groups.
In some embodiments, referring to fig. 3D, fig. 3D is a flowchart illustrating a tracking method in a virtual scene according to an embodiment of the present application, and step 104 shown in fig. 3A may be implemented by steps 401 to 404, which will be described with reference to the steps.
In step 401, a real-time position of at least one second virtual object in a virtual scene is obtained.
In step 402, when the real-time position of the at least one second virtual object is not presented in the tracking interface of the first virtual object, the tracking parameter corresponding to the tracking prop is associated with the at least one second virtual object.
In this embodiment, the default tracking function in the tracking interface of the first virtual object may be any of the following:
1) displaying the real-time positions of virtual objects having a cooperative relationship with the first virtual object;
2) displaying a limited number of virtual objects having a cooperative relationship and/or an antagonistic relationship with the first virtual object, for example the virtual objects whose most recent interaction with the first virtual object falls within a set period (e.g., within 30 seconds of the current time);
3) displaying virtual objects within a limited range (e.g., whose distance from the first virtual object is less than or equal to a set distance threshold), the virtual objects having a cooperative relationship and/or an antagonistic relationship with the first virtual object;
4) displaying virtual objects within a limited time period, the virtual objects having a cooperative relationship and/or an antagonistic relationship with the first virtual object.
One or more of these tracking functions can be applied to the tracking interface by default, depending on the actual application scene.
Here, the role of the tracking prop is to promote and enhance the default tracking function of the tracking interface along different dimensions. When the real-time position of the at least one second virtual object is not currently presented in the tracking interface of the first virtual object, the tracking parameter corresponding to the tracking prop is associated with the at least one second virtual object; the larger the tracking parameter, the stronger the tracking effect. The tracking parameter corresponding to the tracking prop can be obtained by querying the attribute data of the tracking prop (i.e., it is fixed), or it can change dynamically with the actual scene. In addition, the type of the tracking parameter may be set according to the actual application scene; for example, it includes at least one of a tracking duration and a tracking range.
In step 403, when the real-time position of the at least one second virtual object is being presented in the tracking interface of the first virtual object, the tracking parameter corresponding to the tracking prop is superimposed on the tracking parameter already associated with the at least one second virtual object.
When the real-time position of the at least one second virtual object is being presented in the tracking interface, the at least one second virtual object must already be associated with a tracking parameter, so the tracking parameter corresponding to the tracking prop is superimposed on that associated tracking parameter. For example, if the real-time position of a certain second virtual object is being presented in the tracking interface with a remaining tracking duration of 10 seconds, and the tracking duration corresponding to the tracking prop is 30 seconds, superimposing them yields an associated tracking duration of 40 seconds; the tracking range is superimposed in the same way. In this way, the tracking effects of tracking props can be stacked.
In another mode, when the real-time position of the at least one second virtual object is being presented in the tracking interface, the tracking parameter associated with the at least one second virtual object is directly overwritten by the tracking parameter corresponding to the tracking prop. For example, if the remaining tracking duration of a second virtual object is 10 seconds and the tracking duration corresponding to the tracking prop is 30 seconds, overwriting leaves the second virtual object associated with a tracking duration of 30 seconds. In this way, the tracking effect of the tracking prop is refreshed. Either mode can be applied, depending on the actual application scene.
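Both modes can be sketched in a few lines of Python (the names are illustrative; the 10-second and 30-second values reproduce the examples above):

def apply_tracking_prop(remaining, prop_duration, mode="superimpose"):
    # remaining is None when the target's position is not yet presented
    # (step 402); otherwise it holds the associated tracking duration.
    if remaining is None:
        return prop_duration              # associate the prop's parameter
    if mode == "superimpose":
        return remaining + prop_duration  # step 403: stack the effect
    return prop_duration                  # alternative mode: refresh/overwrite

print(apply_tracking_prop(None, 30))                  # 30 (newly associated)
print(apply_tracking_prop(10, 30))                    # 40 (superimposed)
print(apply_tracking_prop(10, 30, mode="overwrite"))  # 30 (refreshed)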
In some embodiments, after step 401, the method further includes executing any one of the following processes to determine the tracking parameter corresponding to the tracking prop: determining a tracking parameter corresponding to the grade of the tracking prop, where the value of the tracking parameter is positively correlated with the grade; determining a tracking parameter positively correlated with a behavior index of the first virtual object; or determining a tracking parameter positively correlated with the behavior index of the group to which the first virtual object belongs. The behavior index includes at least one of grade, killing number, game point, and resource number.
In this embodiment of the present application, the tracking parameter corresponding to the tracked prop may be determined in any one of the following three ways:
1) Determining a tracking parameter corresponding to the grade of the tracking prop, where the higher the grade of the tracking prop, the larger the corresponding tracking parameter. For example, with the tracking duration as the tracking parameter, the tracking duration corresponding to a level-1 tracking prop may be set to 10 seconds and that corresponding to a level-2 tracking prop to 20 seconds. This encourages the user to upgrade the tracking prop to obtain a stronger tracking effect.
2) Determining a tracking parameter positively correlated with the behavior index of the first virtual object, where the tracking parameter changes dynamically with the behavior index, and the behavior index includes at least one of a grade (such as the character level of the first virtual object), a killing number, a game point, and a resource number. Taking the tracking duration as the tracking parameter and the grade as the behavior index, it may be set that when the grade of the first virtual object is 1, the tracking duration corresponding to the tracking prop is 10 seconds, and when the grade is 2, the tracking duration is 20 seconds. Alternatively, a linear or nonlinear relationship between the behavior index of the first virtual object and the tracking parameter may be set, and the tracking parameter calculated from the behavior index accordingly; for example, setting tracking parameter (such as tracking duration) = w1 × grade + w2 × killing number + w3 × game point + w4 × resource number performs a weighted sum over the behavior indexes, where w1, w2, w3, and w4 are all numbers greater than 0.
Another way is to determine the ratio between the behavior index of the first virtual object and the behavior index of the second virtual object and determine a tracking parameter positively correlated with that ratio. For example, when the ratio is 1 (i.e., 1:1), the tracking duration corresponding to the tracking prop is set to 10 seconds; when the ratio is 2, it is 20 seconds. Alternatively, set tracking parameter = w × ratio, where w is a number greater than 0.
3) Determining a tracking parameter positively correlated with the behavior index of the group to which the first virtual object belongs, where the group's behavior index can be obtained by fusing the behavior indexes of all virtual objects in the group, for example by averaging or summation. The tracking parameters corresponding to different group behavior indexes can be preset, or the tracking parameter can be calculated from the group's behavior index according to a linear or nonlinear relationship between the two.
Similarly, the ratio between the behavior index of the group to which the first virtual object belongs and the behavior index of the group to which the second virtual object belongs may be determined, and a tracking parameter positively correlated with that ratio determined. These approaches improve the flexibility of determining the tracking parameter corresponding to the tracking prop and its applicability to different application scenes.
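A minimal sketch of way 2's weighted sum and of the ratio variant (the weights and helper names below are assumptions for illustration, not values from the patent):

def tracking_duration(grade, kills, game_point, resources,
                      w1=2.0, w2=1.0, w3=0.5, w4=0.1):
    # Weighted sum over the behavior indexes; w1..w4 are all > 0.
    return w1 * grade + w2 * kills + w3 * game_point + w4 * resources

def ratio_duration(own_index, opponent_index, w=10.0):
    # Positively correlated with the ratio between the two sides'
    # behavior indexes: ratio 1 -> 10 s, ratio 2 -> 20 s, and so on.
    return w * own_index / opponent_index

print(tracking_duration(grade=2, kills=5, game_point=8, resources=30))  # 16.0
print(ratio_duration(4, 2))                                             # 20.0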
In step 404, a real-time location of the at least one second virtual object is presented in the tracking interface of the first virtual object according to the tracking parameters associated with the at least one second virtual object.
In some embodiments, the tracking parameters include at least one of a tracking duration and a tracking range, and the above presenting of the real-time position of the at least one second virtual object according to the tracking parameters associated with the at least one second virtual object may be implemented as follows: for each second virtual object, performing at least one of the following processes: presenting the real-time position of the second virtual object until the presentation duration reaches the tracking duration associated with the second virtual object; and presenting the real-time position of the second virtual object until that real-time position exceeds the tracking range associated with the second virtual object.
Here, when the tracking parameter includes a tracking duration, for each second virtual object, the real-time position of the second virtual object is presented in the tracking interface of the first virtual object until the presentation duration reaches the tracking duration; depending on the actual application scene, the tracking duration may be infinite or finite.
When the tracking parameter includes a tracking range, for each second virtual object, the real-time position of the second virtual object is presented in the tracking interface of the first virtual object until that real-time position exceeds the tracking range; the tracking range may be an infinite range (for example, covering the entire virtual scene) or a finite range, depending on the actual application scene. In this way, the flexibility of the tracking function of the tracking prop is improved.
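When both parameters are configured, a per-update check can combine the two stop conditions; the sketch below assumes the tracking range is measured from the prop's landing position, which the text leaves open (all names are illustrative):

import math

def still_presented(entry, now):
    # Keep presenting while the associated duration has not elapsed
    # and the target stays inside the associated tracking range.
    within_time = (now - entry["tracked_since"]) < entry["duration"]
    dx = entry["object_pos"][0] - entry["anchor_pos"][0]
    dy = entry["object_pos"][1] - entry["anchor_pos"][1]
    within_range = math.hypot(dx, dy) <= entry["range"]
    return within_time and within_range

entry = {"tracked_since": 0.0, "duration": 30.0, "range": 50.0,
         "anchor_pos": (0.0, 0.0), "object_pos": (12.0, 5.0)}
print(still_presented(entry, now=10.0))  # True
print(still_presented(entry, now=31.0))  # False: tracking duration elapsed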
As shown in fig. 3D, in the embodiment of the application, the default tracking function of the tracking interface is enhanced by the tracking parameter corresponding to the tracking prop, which encourages the user to make more use of the tracking prop in the virtual scene and improves the user experience of human-computer interaction.
In some embodiments, referring to fig. 3E, fig. 3E is a schematic flowchart of a tracking method in a virtual scene provided in this application. After step 102 shown in fig. 3A, in step 501, the real-time position, real-time movement direction, real-time speed, and acceleration of the tracking prop may also be periodically obtained to determine the movement trajectory of the tracking prop.
Here, the real-time position, real-time movement direction, real-time speed, and acceleration of the tracking prop in the virtual scene may be obtained periodically so as to determine the movement trajectory of the tracking prop after it is thrown; for example, the movement trajectory of the tracking prop may be determined every other frame.
When the tracking prop first starts moving, the obtained real-time position is its initial position at the moment the triggering operation for the throwing operation control is received; the obtained real-time movement direction is its initial movement direction at that moment; the obtained real-time speed is a set initial speed; and the obtained acceleration includes the gravitational acceleration and a throwing acceleration along the initial movement direction. The gravitational acceleration can be adjusted by the physics engine in the human-computer interaction engine and is not necessarily the gravitational acceleration of the real world; the throwing acceleration can be preset. During the subsequent movement of the tracking prop, the obtained acceleration includes the gravitational acceleration.
In step 502, the tracking prop is controlled to move along the movement trajectory.
For each movement trajectory determined in this way, the tracking prop is controlled to move along it while the process of the tracking prop moving along the trajectory is presented, thereby producing a realistic visual effect of the first virtual object throwing the tracking prop.
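The periodic update of steps 501 and 502 can be sketched with explicit Euler integration (the names, time step, and gravity value are assumptions; as noted above, the engine's gravity need not match the real world):

GRAVITY = (0.0, -9.8, 0.0)  # adjustable by the physics engine

def step_trajectory(pos, vel, accel, dt):
    # One periodic update: advance the prop's real-time position,
    # then its real-time speed, by one time step.
    new_pos = tuple(p + v * dt for p, v in zip(pos, vel))
    new_vel = tuple(v + a * dt for v, a in zip(vel, accel))
    return new_pos, new_vel

# After release, only the gravitational acceleration applies.
pos, vel = (0.0, 1.5, 0.0), (6.0, 4.0, 0.0)
for _ in range(3):  # e.g. re-determined every frame
    pos, vel = step_trajectory(pos, vel, GRAVITY, dt=1 / 60)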
In fig. 3E, after step 502, in step 503, during the movement of the tracking prop along the movement trajectory, a detection ray consistent with the real-time movement direction of the tracking prop is periodically emitted from the real-time position of the tracking prop through a camera assembly bound to the tracking prop.
Here, during the movement of the tracking prop along the movement trajectory, whether the tracking prop has moved to the target position is detected periodically: for example, a detection ray consistent with the real-time movement direction of the tracking prop is periodically emitted from its real-time position through the camera assembly bound to the tracking prop, and the length of the detection ray can be preset. The period of emitting the detection ray and the period of determining the movement trajectory of the tracking prop can be the same or different.
In step 504, the positional relationship between the detection ray and a plurality of collider components in the virtual scene is determined.
In the embodiment of the application, whether a collision occurs can be detected through collider components, which can be bound to a supporting surface or to obstacles in the virtual scene; the supporting surface is, for example, the ground, and an obstacle is, for example, a wall surface, a wooden box, or a wooden barrel. When the tracking prop collides with the supporting surface, it is determined to have moved to the target position; when the tracking prop collides with an obstacle, the process of the tracking prop rebounding after the collision is presented.
In step 505, whether the tracking prop has reached the target position is determined based on the positional relationship.
In some embodiments, the above determination of whether the tracking prop has reached the target position based on the positional relationship may be implemented in this way: when the detection ray intersects the collider component bound to the supporting surface, the real-time position of the tracking prop at the moment of intersection is determined as the target position; when the detection ray intersects a collider component bound to an obstacle, it is determined that the tracking prop has not reached the target position; and when there is no intersection between the detection ray and any of the collider components, it is determined that the tracking prop has not reached the target position.
Here, the positional relationship falls into the following cases:
1) The detection ray intersects the collider component bound to the supporting surface. In this case, the tracking prop is determined to have collided with the supporting surface, and its real-time position at the moment of intersection is determined as the target position.
2) The detection ray intersects a collider component bound to an obstacle. In this case, it is determined that the tracking prop has not reached the target position, and the process of the tracking prop rebounding off the obstacle is presented.
3) There is no intersection between the detection ray and any collider component in the virtual scene. In this case, the movement trajectory of the tracking prop continues to be determined periodically, and the process of the tracking prop moving along each determined trajectory is presented.
In this way, whether a collision occurs can be judged accurately based on the detection ray and the collider components, so that a collision process that is accurate and conforms to physical laws is presented.
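A minimal sketch of steps 504 and 505, assuming each collider carries a material tag like the one queried in the shooting-game example further below (the tag names are illustrative):

def classify_ray_hit(hit_material):
    # hit_material is None when the detection ray crosses no collider,
    # otherwise it is the material tag of the collider that was hit.
    if hit_material is None:
        return "keep_moving"      # case 3: keep determining trajectories
    if hit_material == "support_surface":
        return "reached_target"   # case 1: explode at the current position
    return "rebound"              # case 2: an obstacle such as a wall

print(classify_ray_hit("support_surface"))  # reached_target
print(classify_ray_hit("obstacle"))         # rebound
print(classify_ray_hit(None))               # keep_moving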
In some embodiments, when the detection ray intersects a collider component bound to an obstacle, the method further includes: obtaining the real-time position, real-time movement direction, real-time speed, and acceleration of the tracking prop to determine the rebound movement trajectory of the tracking prop; and controlling the tracking prop to move along the rebound movement trajectory.
For the case where the detection ray intersects a collider component bound to an obstacle, the real-time position, real-time movement direction, real-time speed, and acceleration of the tracking prop can be obtained, and the rebound movement trajectory after the collision with the obstacle determined in accordance with actual physical laws. The process of the tracking prop moving along the rebound trajectory is then presented, which improves the accuracy and realism of the tracking prop's movement.
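One conventional way to seed such a rebound trajectory is to mirror the prop's velocity about the obstacle's surface normal (v' = v - 2(v·n)n); the patent does not prescribe a formula, so this sketch is only an assumption:

def reflect(velocity, normal):
    # Mirror the velocity about the unit surface normal of the obstacle.
    dot = sum(v * n for v, n in zip(velocity, normal))
    return tuple(v - 2 * dot * n for v, n in zip(velocity, normal))

# Hitting a vertical wall whose unit normal points along -x:
print(reflect((6.0, 4.0, 0.0), (-1.0, 0.0, 0.0)))  # (-6.0, 4.0, 0.0)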
As shown in fig. 3E, the embodiment of the present application can present a movement process that conforms to actual physical laws by periodically determining the movement trajectory of the tracking prop; and by determining the positional relationship between the detection ray and the collider components, it can accurately judge whether a collision occurs between the tracking prop and the supporting surface, or between the tracking prop and an obstacle.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described. For ease of understanding, a virtual scene in a shooting game is illustrated.
The embodiment of the application provides a tactical tracking prop, namely a tracking bomb (tracking grenade): the user can control the current game character (corresponding to the first virtual object above) to throw the tracking bomb, thereby actively tracking enemies (corresponding to the virtual objects above having an antagonistic relationship). For ease of understanding, a schematic diagram of the tracking method in the virtual scene is shown in fig. 4, and the steps will be described with reference to fig. 4:
1) The user selects the tracking bomb in a tactical gear interface (corresponding to the selection interface above) to equip the current game character; the tactical gear interface can be presented before or after a game match begins. As shown in fig. 5, the tactical gear interface includes a plurality of selectable virtual props, such as a flash bomb, a smoke bomb, a concussion bomb, and the tracking bomb, together with description information of the virtual props, such as the description information 51 of the tracking bomb in fig. 5.
2) If the current game character is equipped with the tracking bomb, a switching operation control for the tracking bomb, such as the switching operation control 61 shown in fig. 6, is displayed in the game interface (corresponding to the human-computer interaction interface above). The switching operation control 61 is used to switch the virtual prop held by the current game character (such as a firearm) to the tracking bomb, and may show the number of available tracking bombs the current game character has left; in fig. 6, one remains. The game interface also displays shooting operation controls 62 (fig. 6 shows two shooting operation controls with identical functions, provided to suit the operation habits of different users), also called firing keys; the user can trigger a shooting operation control to make the current game character shoot with the held firearm prop. In addition, a minimap (corresponding to the tracking interface above) is displayed in the game interface, such as the minimap 63 in fig. 6, in which the triangular mark indicates the position of the current game character.
When a trigger operation by the user on the switching operation control is received, such as a click operation, the process of switching the virtual prop held by the current game character to the tracking bomb is displayed in the game interface. Fig. 7 shows a tracking bomb 71 and a throwing operation control 72 obtained by switching the shooting operation control 62 in fig. 6; that is, in fig. 7 the firing key serves as the key for throwing the tracking bomb.
3) After switching to the tracking bomb, the user may press and hold the firing key. A simulated throwing trajectory in the pre-throw state (corresponding to the simulated movement trajectory above) is then displayed in the game interface, such as the simulated throwing trajectory 81 in fig. 8, whose end point is the drop-point explosion position of the tracking bomb. The user can therefore adjust the initial movement direction of the tracking bomb or the position of the current game character (which is equivalent to adjusting the initial position of the tracking bomb) according to the simulated throwing trajectory 81, increasing the hit rate of the tracking bomb.
The simulated throwing trajectory is calculated from the initial position, initial movement direction, initial speed, gravitational acceleration, and throwing acceleration of the tracking bomb according to actual physical laws. In the underlying calculation of the simulated throwing trajectory, a number of waypoints (corresponding to the trajectory points above) can be computed at a fixed time or distance interval, such as the waypoint set 91 in fig. 9; a special-effect line is stretched across the waypoints in the waypoint set 91, finally forming the parabolic simulated throwing trajectory 92.
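A minimal sketch of the underlying waypoint computation (the time step, ground height, and names are assumptions; the patent only specifies sampling at a time or distance interval):

def simulate_throw(p0, v0, accel, dt=0.05, max_steps=400, ground_y=0.0):
    # Sample the pre-throw parabola at a fixed time interval; the
    # returned waypoints are stretched into the special-effect line.
    pos, vel = list(p0), list(v0)
    waypoints = [tuple(pos)]
    for _ in range(max_steps):
        for i in range(3):
            pos[i] += vel[i] * dt
            vel[i] += accel[i] * dt
        waypoints.append(tuple(pos))
        if pos[1] <= ground_y:  # end point: drop-point explosion position
            break
    return waypoints

track = simulate_throw((0.0, 1.5, 0.0), (6.0, 4.0, 0.0), (0.0, -9.8, 0.0))
print(len(track), track[-1])  # waypoint count and the predicted drop point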
4) When the user releases the firing key, the game interface presents the process of the current game character throwing the tracking bomb into flight. The throwing trajectory of the tracking bomb (corresponding to the movement trajectory above) can be periodically calculated from the real-time position, real-time movement direction, real-time speed, and acceleration of the tracking bomb, and the process of the tracking bomb moving along the throwing trajectory is presented.
In a shooting game, the tracking bomb can be set to explode when it collides with the ground and to rebound when it collides with an obstacle (e.g., a wall surface). In the underlying implementation, detection rays consistent with the real-time movement direction of the tracking bomb can be emitted periodically from its real-time position; if a detection ray intersects the collision box (corresponding to the collider component above) bound to the ground, it is determined that the tracking bomb has collided with the ground, i.e., has reached the target position.
In the embodiment of the application, different materials can be set for different collision boxes, that is, a collision box can be marked as bound to the ground or to an obstacle. Fig. 10 takes the ground 101 and the obstacles 102 and 103 placed on the ground 101 as an example; the right side of fig. 10 shows some configuration parameters of the collision box of the ground 101, where the material 1011 indicates that this collision box is bound to the ground. When a detection ray intersects a collision box, whether that collision box is bound to the ground or to an obstacle can be determined by querying its material parameter.
5) When the tracking bomb explodes, a particle special effect or a white-smoke special effect is presented. The special-effect release range is the infection range (or tracking range), such as the infection range 111 in fig. 11, and all enemies within the infection range 111 are infected.
In the underlying implementation, the landing position of the tracking bomb's explosion (corresponding to the target position above) is obtained, such as the landing position 121 in fig. 12, and a circle centered on the landing position 121 with the set special-effect radiation distance as its radius is determined as the infection range 122. When the special effect is released, the distance between the position of each enemy of the current game character and the landing position 121 is calculated; if the distance is less than or equal to the special-effect radiation distance, the corresponding enemy is determined to be infected.
6) For an infected enemy, its position in the actual map is projected, according to the position mapping relationship between the actual map of the virtual scene and the minimap, onto the minimaps of the current game character and of the current game character's teammates (corresponding to the virtual objects above having a cooperative relationship) for display. For example, in fig. 13, the position of an infected enemy is indicated by a circular mark in the minimap 131 of the current game character; after the infected enemy moves, its position in the minimap 132 is updated accordingly. A tracking duration may also be set: when the enemy's infection effect has lasted for the tracking duration, presentation of the enemy's position is stopped in the minimaps of the current game character and its teammates.
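The position mapping relationship can be as simple as a proportional scale between actual-map coordinates and minimap pixels; the sketch below assumes a rectangular map anchored at the origin (all names and sizes are illustrative):

def world_to_minimap(world_pos, world_size, minimap_size):
    # Scale an (x, z) position in the actual map into minimap pixels.
    wx, wz = world_pos
    return (wx * minimap_size[0] / world_size[0],
            wz * minimap_size[1] / world_size[1])

# A 1000 x 1000 map displayed on a 100 x 100 minimap:
print(world_to_minimap((250.0, 740.0), (1000, 1000), (100, 100)))  # (25.0, 74.0)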
The embodiment of the application can improve the personalization and flexibility of tracking enemies in a shooting game while saving computing resources, so that the virtual scene responds to human-computer interaction operations more promptly.
Continuing with the exemplary structure of the tracking device 455 in the virtual scene provided by the embodiment of the present application implemented as software modules, in some embodiments, as shown in fig. 2A, the software modules of the tracking device 455 in the virtual scene stored in the memory 450 may include: a scene presenting module 4551, configured to present, in a human-computer interaction interface, a virtual scene and a throwing operation control of a tracking prop; a throwing module 4552, configured to control the first virtual object to throw the tracking prop in the virtual scene in response to a triggering operation for the throwing operation control; a special-effect release module 4553, configured to detect at least one second virtual object included in a special-effect release range when the tracking prop reaches the target position and releases the special effect; and a position presenting module 4554, configured to obtain the real-time position of the at least one second virtual object in the virtual scene and present it in the tracking interface of the first virtual object.
In some embodiments, when the tracked prop reaches the target location, the special effects release module 4553 is further configured to: obtaining a special effect radiation distance of a tracked prop; and taking a circle which takes the target position as the center and takes the special-effect radiation distance as the radius as a special-effect release range, and releasing any one of the special effects of the particles and the smoke in the special-effect release range.
In some embodiments, the special effects release module 4553 is further configured to: acquiring real-time positions of a plurality of virtual objects different from the first virtual object in the virtual scene; and when the distance between the real-time position and the target position of any one virtual object in the plurality of virtual objects is less than or equal to the special effect radiation distance, identifying the any one virtual object as a second virtual object included in the special effect release range.
In some embodiments, when a second virtual object having an antagonistic relationship with the first virtual object is included in the special-effect release range, the position presenting module 4554 is further configured to: present the real-time position of the second virtual object having the antagonistic relationship.
In some embodiments, the tracking device 455 in the virtual scene further comprises: a presentation stopping module, configured to stop presenting the real-time position of the second virtual object having the antagonistic relationship when the duration of presenting that real-time position reaches the tracking duration.
In some embodiments, the tracking device 455 in the virtual scene further comprises: a sharing module, configured to present the real-time position of the second virtual object having the antagonistic relationship in the tracking interface of a virtual object having a cooperative relationship with the first virtual object.
In some embodiments, the tracking device 455 in the virtual scene further comprises: and the continuous presenting module is used for continuously presenting the real-time position of at least part of the virtual object which has a cooperative relation with the first virtual object in the tracking interface of the first virtual object.
In some embodiments, the position presenting module 4554 is further configured to: when the real-time position of the at least one second virtual object is not presented in the tracking interface, associate the tracking parameter corresponding to the tracking prop with the at least one second virtual object; when the real-time position of the at least one second virtual object is being presented in the tracking interface, superimpose the tracking parameter corresponding to the tracking prop on the tracking parameter associated with the at least one second virtual object; and present the real-time position of the at least one second virtual object according to the tracking parameter associated with the at least one second virtual object.
In some embodiments, the tracking parameters include at least one of a tracking duration and a tracking range; the position presenting module 4554 is further configured to: for each second virtual object, perform at least one of the following processes: presenting the real-time position of the second virtual object until the presentation duration reaches the tracking duration associated with the second virtual object; and presenting the real-time position of the second virtual object until that real-time position exceeds the tracking range associated with the second virtual object.
In some embodiments, the tracking device 455 in the virtual scene further comprises: the parameter determining module is used for executing any one of the following processes to determine a tracking parameter corresponding to the tracking prop: determining a tracking parameter corresponding to the grade of the tracked prop; wherein, the value of the tracking parameter is positively correlated with the grade of the tracked prop; determining a tracking parameter positively correlated to a behavior index of the first virtual object; determining a tracking parameter positively correlated to the behavior index of the group to which the first virtual object belongs; the behavior index comprises at least one of grade, killing number, game point and resource number.
In some embodiments, the scene presenting module 4551 is further configured to: present, in the human-computer interaction interface, a virtual scene and a switching operation control of the tracking prop; control, in response to a triggering operation for the switching operation control, the first virtual object to switch the held virtual prop to the tracking prop; and present the throwing operation control of the tracking prop in the human-computer interaction interface.
In some embodiments, the tracking device 455 in the virtual scene further comprises: an information presenting module, configured to present, in the human-computer interaction interface, description information of a plurality of virtual props including the tracking prop; and the scene presenting module 4551 is further configured to present, in response to a selection operation for the tracking prop, a virtual scene and the switching operation control of the tracking prop in the human-computer interaction interface.
In some embodiments, the tracking device 455 in the virtual scene further comprises: the motion module is used for periodically acquiring the real-time position, the real-time motion direction, the real-time speed and the acceleration of the tracked prop so as to determine the motion track of the tracked prop; and controlling the tracking prop to move along the motion track.
In some embodiments, during the movement of the tracking prop along the movement trajectory, the motion module is further configured to: periodically emit, from the real-time position of the tracking prop and through the camera assembly bound to the tracking prop, detection rays consistent with the real-time movement direction of the tracking prop; determine the positional relationship between the detection ray and a plurality of collider components in the virtual scene; and determine, based on the positional relationship, whether the tracking prop has reached the target position.
In some embodiments, the motion module is further configured to: when the detection ray intersects the collider component bound to the supporting surface, determine the real-time position of the tracking prop at the moment of intersection as the target position; when the detection ray intersects a collider component bound to an obstacle, determine that the tracking prop has not reached the target position; and when there is no intersection between the detection ray and the plurality of collider components, determine that the tracking prop has not reached the target position.
In some embodiments, when the detection ray intersects a collider component bound to an obstacle, the motion module is further configured to: obtain the real-time position, real-time movement direction, real-time speed, and acceleration of the tracking prop to determine the rebound movement trajectory of the tracking prop; and control the tracking prop to move along the rebound movement trajectory.
In some embodiments, the tracking device 455 in the virtual scene further comprises: the preview module is used for acquiring an initial position, an initial motion direction, an initial speed and an acceleration of the tracked prop so as to determine a simulated motion track of the tracked prop; and presenting the simulated motion trail in the human-computer interaction interface.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the tracking method in the virtual scene described above in this embodiment.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which when executed by a processor, cause the processor to perform a method provided by embodiments of the present application, for example, a tracking method in a virtual scene as illustrated in fig. 3A, fig. 3B, fig. 3C, fig. 3D, or fig. 3E. Note that the computer includes various computing devices including a terminal device and a server.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (18)

1. A method for tracking in a virtual scene, the method comprising:
presenting a virtual scene and a throwing operation control for tracking props in a human-computer interaction interface;
in response to a triggering operation for the throwing operation control, controlling a first virtual object to throw the tracking prop in the virtual scene;
when the tracked prop reaches the target position and releases the special effect, detecting at least one second virtual object included in a special effect release range;
acquiring a real-time position of the at least one second virtual object in the virtual scene;
when the real-time position of the at least one second virtual object is not presented in the tracking interface of the first virtual object, associating the tracking parameter corresponding to the tracking prop to the at least one second virtual object; when the real-time position of the at least one second virtual object is presented in the tracking interface, overlaying the tracking parameter corresponding to the tracking prop to the tracking parameter associated with the at least one second virtual object;
presenting a real-time location of the at least one second virtual object in accordance with the tracking parameters with which the at least one second virtual object has been associated;
wherein the tracking parameter satisfies at least one of the following conditions: the tracking parameter is positively correlated with the grade of the tracking prop; the tracking parameter is positively correlated with a behavior index of the first virtual object; the tracking parameter is positively correlated with the behavior index of the group to which the first virtual object belongs;
the behavior index comprises at least one of grade, killing number, game point and resource number.
2. The method of claim 1, when the tracking prop reaches the target location, further comprising:
obtaining a special effect radiation distance of the tracking prop;
and taking a circle which takes the target position as a center and takes the special-effect radiation distance as a radius as a special-effect release range, and releasing any one of the special effects of the particles and the smoke in the special-effect release range.
3. The method of claim 2, wherein the detecting at least one second virtual object included within the special effect release range comprises:
acquiring real-time positions of a plurality of virtual objects in the virtual scene, which are different from the first virtual object;
and when the distance between the real-time position of any one of the plurality of virtual objects and the target position is smaller than or equal to the special effect radiation distance, identifying the any one virtual object as a second virtual object included in the special effect release range.
4. The method of claim 1,
when a second virtual object having an antagonistic relationship with the first virtual object is included within the special effect release range, the presenting the real-time position of the at least one second virtual object includes:
and presenting the real-time position of the second virtual object with the confrontational relationship.
5. The method of claim 4, further comprising:
when the duration of the real-time position of the second virtual object with the confrontational relationship is presented reaches the tracking duration, the real-time position of the second virtual object with the confrontational relationship is stopped being presented.
6. The method of claim 4, further comprising:
presenting, in a tracking interface of a virtual object having a collaborative relationship with the first virtual object, a real-time location of the second virtual object having an antagonistic relationship.
7. The method of claim 4, further comprising:
continuously presenting, in a tracking interface of the first virtual object, a real-time location of at least a portion of the virtual object in a collaborative relationship with the first virtual object.
8. The method of claim 1, wherein the tracking parameters comprise at least one of tracking duration and tracking range;
said presenting a real-time location of said at least one second virtual object in accordance with tracking parameters associated with said at least one second virtual object, comprising:
for each of the second virtual objects, performing at least one of:
presenting the real-time position of the second virtual object until a duration reaches a tracking duration associated with the second virtual object;
presenting the real-time location of the second virtual object until the real-time location of the second virtual object exceeds a tracking range associated with the second virtual object.
9. The method of any one of claims 1 to 7, wherein the presenting of the virtual scene and the throwing operation control for tracking the prop in the human-computer interaction interface comprises:
presenting a virtual scene and a switching operation control of the tracking prop in the human-computer interaction interface;
in response to a trigger operation for the switching operation control, controlling the first virtual object to switch the held virtual prop to the tracking prop;
and presenting the throwing operation control for tracking the prop in the man-machine interaction interface.
10. The method of claim 9,
before the presenting of the virtual scene in the human-computer interaction interface and the switching operation control for tracking the prop, the method further includes:
presenting description information of a plurality of virtual props including the tracking prop in the human-computer interaction interface;
the switching operation control for presenting the virtual scene and the tracked prop in the human-computer interaction interface comprises:
and in response to the selected operation for the tracked prop, presenting a virtual scene and a switching operation control of the tracked prop in the human-computer interaction interface.
11. The method of any of claims 1-7, wherein after the controlling of the first virtual object to throw the tracking prop in the virtual scene, the method further comprises:
periodically acquiring the real-time position, the real-time motion direction, the real-time speed and the acceleration of the tracked prop to determine the motion track of the tracked prop;
and controlling the tracking prop to move along the motion track.
12. The method of claim 11, wherein during the movement of the tracking prop along the trajectory of motion, further comprising:
periodically emitting detection rays consistent with the real-time movement direction of the tracking prop from the real-time position of the tracking prop through a camera assembly bound on the tracking prop;
determining a positional relationship between the detection ray and a plurality of collider components in the virtual scene;
and determining whether the tracking prop reaches a target position based on the position relation.
13. The method of claim 12, wherein said determining whether the tracking prop reaches a target location based on the positional relationship comprises:
when the detection ray intersects with a collision device assembly bound on a supporting surface, determining the real-time position of the tracking prop when the intersection exists as a target position;
when the detection ray intersects with a collision device component bound on an obstacle, determining that the tracking prop does not reach a target position;
when no intersection exists between the detection ray and the plurality of collider components, determining that the tracking prop does not reach the target position.
14. The method of claim 13, when there is an intersection between the detection ray and a collisional component bound to an obstacle, further comprising:
acquiring the real-time position, the real-time motion direction, the real-time speed and the acceleration of the tracked prop to determine the rebound motion track of the tracked prop;
and controlling the tracking prop to move along the rebound motion track.
15. The method of any of claims 1-7, wherein said controlling the first virtual object to throw the tracking prop in the virtual scene further comprises:
acquiring an initial position, an initial motion direction, an initial speed and an acceleration of the tracked prop to determine a simulated motion track of the tracked prop;
and presenting the simulated motion trail in the human-computer interaction interface.
16. An apparatus for tracking in a virtual scene, the apparatus comprising:
the scene presenting module is used for presenting a virtual scene in a human-computer interaction interface and tracking a throwing operation control of a prop;
a throwing module, configured to control a first virtual object to throw the tracking prop in the virtual scene in response to a triggering operation for the throwing operation control;
the special effect releasing module is used for detecting at least one second virtual object included in a special effect releasing range when the tracking prop reaches a target position and releases a special effect;
the position presenting module is used for acquiring the real-time position of the at least one second virtual object in the virtual scene, and associating the tracking parameter corresponding to the tracking prop to the at least one second virtual object when the real-time position of the at least one second virtual object is not presented in the tracking interface of the first virtual object; when the real-time position of the at least one second virtual object is presented in the tracking interface, overlaying the tracking parameter corresponding to the tracking prop to the tracking parameter associated with the at least one second virtual object;
presenting a real-time location of the at least one second virtual object in accordance with the tracking parameters with which the at least one second virtual object has been associated;
wherein the tracking parameter satisfies at least one of the following conditions: the tracking parameter is positively correlated with the grade of the tracking prop; the tracking parameter is positively correlated with a behavior index of the first virtual object; the tracking parameter is positively correlated with the behavior index of the group to which the first virtual object belongs;
the behavior index comprises at least one of grade, killing number, game point and resource number.
17. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the tracking method in a virtual scene of any one of claims 1 to 15 when executing executable instructions stored in the memory.
18. A computer-readable storage medium storing executable instructions for implementing the tracking method in a virtual scene according to any one of claims 1 to 15 when executed by a processor.
CN202011057660.5A 2020-09-29 2020-09-29 Tracking method and device in virtual scene, electronic equipment and storage medium Active CN112121414B (en)

Priority Applications (1)

Application Number: CN202011057660.5A (granted as CN112121414B)
Priority Date: 2020-09-29
Filing Date: 2020-09-29
Title: Tracking method and device in virtual scene, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112121414A (en) 2020-12-25
CN112121414B (en) 2022-04-08

Family

ID=73843335

Family Applications (1)

Application Number: CN202011057660.5A (Active, granted as CN112121414B)
Priority Date: 2020-09-29
Filing Date: 2020-09-29
Title: Tracking method and device in virtual scene, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112121414B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113680061B * 2021-09-03 2023-07-25 Tencent Technology Shenzhen Co Ltd Virtual prop control method, device, terminal and storage medium
CN113713383B * 2021-09-10 2023-06-27 Tencent Technology Shenzhen Co Ltd Throwing prop control method, throwing prop control device, computer equipment and storage medium
CN113750530B * 2021-09-18 2023-07-21 Tencent Technology Shenzhen Co Ltd Prop control method, device, equipment and storage medium in virtual scene
CN117160040A * 2022-05-25 2023-12-05 Tencent Technology Shenzhen Co Ltd Virtual character searching method and device, storage medium and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107930123A * 2017-12-15 2018-04-20 Jiuwanli Network Technology (Shanghai) Co Ltd Collision system and its information processing method
CN109200582A * 2018-08-02 2019-01-15 Tencent Technology Shenzhen Co Ltd Method, apparatus and storage medium for controlling interaction between virtual objects and ammunition
CN110215706A * 2019-06-20 2019-09-10 Tencent Technology Shenzhen Co Ltd Location determining method, device, terminal and storage medium for virtual objects
CN111035918A * 2019-11-20 2020-04-21 Tencent Technology Shenzhen Co Ltd Reconnaissance interface display method and device based on virtual environment and readable storage medium
CN111265869A * 2020-01-14 2020-06-12 Tencent Technology Shenzhen Co Ltd Virtual object detection method, device, terminal and storage medium
CN111282275A * 2020-03-06 2020-06-16 Tencent Technology Shenzhen Co Ltd Method, device, equipment and storage medium for displaying collision traces in virtual scene
CN111359207A * 2020-03-09 2020-07-03 Tencent Technology Shenzhen Co Ltd Operation method and device of virtual prop, storage medium and electronic device



Similar Documents

Publication Title
CN112121414B (en) Tracking method and device in virtual scene, electronic equipment and storage medium
CN113181650B (en) Control method, device, equipment and storage medium for calling object in virtual scene
CN112090069B (en) Information prompting method and device in virtual scene, electronic equipment and storage medium
CN112090070B (en) Interaction method and device of virtual props and electronic equipment
JP7447296B2 (en) Interactive processing method, device, electronic device and computer program for virtual tools
CN112295230B (en) Method, device, equipment and storage medium for activating virtual props in virtual scene
CN113633964B (en) Virtual skill control method, device, equipment and computer readable storage medium
US20230072503A1 (en) Display method and apparatus for virtual vehicle, device, and storage medium
CN112057863A (en) Control method, device and equipment of virtual prop and computer readable storage medium
CN112138385B (en) Virtual shooting prop aiming method and device, electronic equipment and storage medium
CN112295228B (en) Virtual object control method and device, electronic equipment and storage medium
CN112057860B (en) Method, device, equipment and storage medium for activating operation control in virtual scene
US20230033530A1 (en) Method and apparatus for acquiring position in virtual scene, device, medium and program product
CN112121434A (en) Interaction method and device of special effect prop, electronic equipment and storage medium
CN111921198A (en) Control method, device and equipment of virtual prop and computer readable storage medium
CN112121432B (en) Control method, device and equipment of virtual prop and computer readable storage medium
CN113457151A (en) Control method, device and equipment of virtual prop and computer readable storage medium
CN113703654B (en) Camouflage processing method and device in virtual scene and electronic equipment
CN112156472B (en) Control method, device and equipment of virtual prop and computer readable storage medium
CN112121433B (en) Virtual prop processing method, device, equipment and computer readable storage medium
CN113440853B (en) Control method, device, equipment and storage medium for virtual skill in virtual scene
CN113663329B (en) Shooting control method and device for virtual character, electronic equipment and storage medium
CN116726499A (en) Position transfer method, device, equipment and storage medium in virtual scene
CN115634449A (en) Method, device, equipment and product for controlling virtual object in virtual scene
CN114288678A (en) Interactive processing method and device for virtual scene, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant