CN112057864A - Control method, device and equipment of virtual prop and computer readable storage medium - Google Patents

Control method, device and equipment of virtual prop and computer readable storage medium

Info

Publication number
CN112057864A
Authority
CN
China
Prior art keywords
virtual
target
prop
control
presenting
Prior art date
Legal status
Pending
Application number
CN202010956339.4A
Other languages
Chinese (zh)
Inventor
姚丽 (Yao Li)
刘智洪 (Liu Zhihong)
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN202010956339.4A
Publication of CN112057864A
Legal status: Pending


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions specially adapted for executing a specific type of game
    • A63F2300/8029 Fighting without shooting
    • A63F2300/8076 Shooting

Abstract

The application provides a control method, apparatus, and device for a virtual prop, and a computer-readable storage medium. The method includes: presenting an operation control of a target virtual prop in a picture of a virtual scene; in response to a trigger operation for the operation control, controlling a virtual object in the virtual scene to project the target virtual prop; when the target virtual prop falls to a first target position, presenting a process in which the target virtual prop explodes into at least two sub virtual props and generates a virtual substance, the virtual substance being used to reduce the visibility of virtual objects in the virtual scene with respect to the area where the virtual substance is located; and when a sub virtual prop falls to a second target position, presenting a process in which the sub virtual prop explodes to generate the virtual substance. By the method and apparatus, the interaction efficiency of interactive operations in the virtual scene can be improved.

Description

Control method, device and equipment of virtual prop and computer readable storage medium
Technical Field
The present application relates to the field of computers, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for controlling a virtual item.
Background
In most applications involving virtual scenes, virtual props similar to smoke bombs are introduced in order to block the line of sight of virtual objects. In the related art, a user controls a virtual object to project a smoke bomb by triggering an operation control, presented by the terminal, for the smoke-bomb virtual prop; the smoke bomb explodes when it lands and generates smoke that blocks the line of sight of virtual objects. However, the smoke diffusion area generated by the explosion of a single smoke bomb is limited, and if the virtual objects are dispersed over a larger area, the user needs to launch multiple smoke bombs through multiple operations before the smoke generated by the explosions can cover the area where the virtual objects are located, resulting in low human-computer interaction efficiency.
Disclosure of Invention
The embodiment of the application provides a method, a device and equipment for controlling a virtual item and a computer-readable storage medium, which can improve the interaction efficiency of interactive operation in a virtual scene.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides a control method of a virtual prop, which comprises the following steps:
presenting an operation control of the target virtual item in a picture of the virtual scene;
in response to the triggering operation aiming at the operation control, controlling a virtual object in the virtual scene to project the target virtual prop;
when the target virtual prop falls to a first target position, presenting a process that the target virtual prop explodes into at least two sub virtual props and generates virtual substances;
the virtual material is used for reducing the visibility of a virtual object in the virtual scene to the area where the virtual material is located;
and when the child virtual prop falls to a second target position, presenting the process that the child virtual prop explodes to generate a virtual substance.
An embodiment of the present application provides a control apparatus for a virtual prop, including:
the first presentation module is used for presenting an operation control of the target virtual item in a picture of a virtual scene;
the control module is used for responding to the triggering operation aiming at the operation control and controlling a virtual object in the virtual scene to project the target virtual prop;
the second presentation module is used for presenting the process that the target virtual prop explodes into at least two sub-virtual props and generates virtual substances when the target virtual prop falls to the first target position;
the virtual material is used for reducing the visibility of a virtual object in the virtual scene to the area where the virtual material is located;
and the third presentation module is used for presenting the process in which the sub virtual prop explodes to generate a virtual substance when the sub virtual prop falls to the second target position.
In the above solution, the apparatus further includes a selection module, where the selection module is configured to, before the operation control for presenting the target virtual item in the screen of the virtual scene,
presenting a selection interface of an operation control comprising at least one virtual prop in a picture of a virtual scene;
responding to the selection operation of an operation control in the selection interface, and presenting indication information of a virtual item corresponding to the selected operation control, wherein the indication information is used for indicating the function of the virtual item;
and in response to the determination operation of the selected operation control, determining the selected operation control as the operation control of the target virtual prop.
In the above scheme, the first presentation module is further configured to present, in a picture of a virtual scene, a cooling time of an operation control corresponding to the target virtual item;
when the cooling time is over, displaying an operation control of the target virtual prop by adopting a target display style;
and the target display style is used for representing that the operation control of the target virtual prop is in an activated state.
In the above scheme, the first presentation module is further configured to present an attack score obtained by the virtual object attacking the target object;
and to shorten the cooling time when the attack score reaches a score threshold.
In the above scheme, the second presentation module is further configured to present a process that the target virtual prop explodes into at least two sub virtual props at the first target location, and the at least two sub virtual props randomly move in different directions under the force generated by the explosion;
and when the target virtual prop explodes at the first target position, presenting a virtual substance generated by explosion and a process that the virtual substance spreads to the surrounding space by taking the first target position as the center.
In the above scheme, the second presenting module is further configured to present a process that the target virtual prop explodes into at least two sub virtual props at the first target position, and the at least two sub virtual props move to corresponding second target positions along a target trajectory under a force generated by the explosion;
and when the target virtual prop explodes at the first target position, presenting a virtual substance generated by explosion and a process that the virtual substance spreads to the surrounding space by taking the first target position as the center.
In the above solution, the apparatus further includes a trajectory determination module, configured to determine, in response to a selection operation for at least two burst locations in the target area centered on the first target location, the selected burst location as a second target location;
and determining at least two target tracks which take the first target position as a starting point and each second target position as a landing point.
In the above scheme, the third presenting module is further configured to present a process in which the sub virtual prop explodes to generate a virtual substance and the virtual substance spreads to the surrounding space around the second target position.
In the above scheme, the apparatus further includes a fourth presentation module, where the fourth presentation module is configured to determine a connection line between a position of the virtual object and a position of a target object after the process of presenting the virtual object to generate a virtual substance is performed, where the target object is an attack object of the virtual object;
when the connecting line passes through the area of the virtual substance, displaying the target object in a target display mode;
the target display style is used for improving the visibility of the virtual object aiming at the target object.
In the above scheme, the apparatus further includes a position relationship determining module, where the position relationship determining module is configured to, after determining a connection line between the position of the virtual object and the position of the target object, obtain a center position and a radius of an area plane where the virtual substance is located;
determining the position relation between the connecting line and the area where the virtual substance is located based on the central position and the radius;
when the position relation is an intersection relation, determining that the connecting line passes through the area where the virtual substance is located.
In the above scheme, the fourth presenting module is further configured to display the target object and the virtual object differently when both the target object and the virtual object are located in the area where the virtual substance is located;
wherein the target object is an attack object of the virtual object.
In the above scheme, the apparatus further includes a fifth presentation module, where the fifth presentation module is configured to, after the process of presenting that the child virtual prop explodes to generate a virtual substance, present a process that the virtual object moves at a first rate in an area where the virtual substance is located;
when the virtual object is attacked by the target object, presenting a process that the virtual object moves at a second rate;
wherein the target object is an attack object of the virtual object, and the second rate is greater than the first rate.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the control method of the virtual prop provided by the embodiment of the application when the executable instruction stored in the memory is executed.
The embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to implement the control method for the virtual prop provided in the embodiments of the present application.
The embodiment of the application has the following beneficial effects:
when the user triggers the operation control for the target virtual prop, the virtual object is controlled to project the target virtual prop. When the target virtual prop falls to a first target position, it explodes into a plurality of sub virtual props and generates a virtual substance; when a sub virtual prop falls to a second target position, the sub virtual prop also explodes and generates the virtual substance. In this way, a double explosion of the virtual prop is achieved with a single trigger of the operation control: when the virtual prop explodes and generates the virtual substance, the sub virtual props produced by that explosion also explode and generate the virtual substance. Compared with a mode in which only the virtual prop itself generates the virtual substance, the amount of the generated virtual substance is greatly increased and its coverage is enlarged, which improves the interaction efficiency of interactive operations performed based on the virtual prop and improves the user's interactive experience when performing interactive operations in the virtual scene based on the virtual prop.
Drawings
Fig. 1 is an alternative architecture diagram of a control system of a virtual prop according to an embodiment of the present disclosure;
fig. 2 is an alternative structural schematic diagram of an electronic device provided in an embodiment of the present application;
fig. 3 is an optional schematic flow chart of a method for controlling a virtual prop according to an embodiment of the present application;
FIGS. 4A-4C are schematic diagrams of screen displays provided by embodiments of the present application;
FIGS. 5A-5F are schematic diagrams of display interfaces provided by embodiments of the present application;
fig. 6 is an optional schematic flow chart of a method for controlling a virtual prop according to an embodiment of the present application;
fig. 7 is an optional schematic flow chart of a method for controlling a virtual prop according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a control device of a virtual item according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the description that follows, the terms "first" and "second" are used merely to distinguish similar objects and do not represent a particular ordering of the objects. It is understood that "first" and "second" may be interchanged in a particular order or sequence where permitted, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Client: an application program running on a terminal to provide various services, such as a video playback client, an instant messaging client, a live-streaming client, and the like.
2) In response to: indicates the condition or state on which a performed operation depends. When the dependent condition or state is satisfied, the one or more operations performed may be executed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are performed.
3) The virtual scene is a virtual scene displayed (or provided) when an application program runs on a terminal, and the virtual scene may be a simulation environment of a real world, a semi-simulation semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application.
For example, when the virtual scene is a three-dimensional virtual space, the three-dimensional virtual space may be an open space, and the virtual scene may be used to simulate a real environment in reality, for example, the virtual scene may include sky, land, sea, and the like, and the land may include environmental elements such as a desert, a city, and the like. Of course, the virtual scene may also include virtual objects, for example, buildings, vehicles, or props such as weapons required for arming themselves or fighting with other virtual objects in the virtual scene, and the virtual scene may also be used to simulate real environments in different weathers, for example, weather such as sunny days, rainy days, foggy days, or dark nights. The user may control the movement of the virtual object in the virtual scene.
4) Virtual objects, the appearance of various people and objects in the virtual scene that can interact, or movable objects in the virtual scene. The movable object can be a virtual character, a virtual animal, an animation character, etc., such as: characters, animals, plants, oil drums, walls, stones, etc. displayed in the virtual scene. The virtual object may be an avatar in the virtual scene that is virtual to represent the user. The virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene and occupying a portion of the space in the virtual scene.
Alternatively, the virtual object may be a user Character controlled by an operation on the client, an Artificial Intelligence (AI) set in the virtual scene fight by training, or a Non-user Character (NPC) set in the virtual scene interaction. Alternatively, the virtual object may be a virtual character that is confrontationally interacted with in a virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction.
Taking a shooting game as an example, the user may control a virtual object to freely fall, glide, open a parachute to fall, run, jump, climb, bend over, and move on the land, or control a virtual object to swim, float, or dive in the sea, or the like, but the user may also control a virtual object to move in the virtual scene by riding a virtual vehicle, for example, the virtual vehicle may be a virtual car, a virtual aircraft, a virtual yacht, and the like, and the above-mentioned scenes are merely exemplified, and the present invention is not limited to this. The user can also control the virtual object to carry out antagonistic interaction with other virtual objects through the virtual prop, for example, the virtual prop can be a throwing type virtual prop such as a grenade, a beaming grenade and a viscous grenade, and can also be a shooting type virtual prop such as a machine gun, a pistol and a rifle, and the type of the virtual prop is not specifically limited in the application.
5) Scene data, representing various features that objects in the virtual scene are exposed to during the interaction, may include, for example, the location of the objects in the virtual scene. Of course, different types of features may be included depending on the type of virtual scene; for example, in a virtual scene of a game, scene data may include a time required to wait for various functions provided in the virtual scene (depending on the number of times the same function can be used within a certain time), and attribute values indicating various states of a game character, for example, a life value (also referred to as a red amount) and a magic value (also referred to as a blue amount), and the like.
6) Smoke bomb: a throwable tactical prop used in a virtual scene. After the smoke bomb is thrown and takes effect, it releases smoke; the smoke is generated gradually over time, and after the smoke range reaches its maximum the smoke gradually dissipates and finally disappears. The blocking effect of the smoke is strongest at its center and weakest at its edge. While the smoke exists, objects inside and outside the smoke cannot observe each other, and the shooting source and the ray-tracing trajectory are also blocked.
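For illustration only (not part of the original disclosure), the following minimal sketch shows one way the smoke lifecycle described above could be modeled; the function name and all timing parameters are assumptions:

```python
def smoke_radius(elapsed, max_radius=8.0, grow_time=2.0,
                 hold_time=6.0, fade_time=3.0):
    """Return an illustrative smoke radius for a given elapsed time (seconds).

    The radius grows linearly to max_radius, holds at its maximum, then
    shrinks linearly to zero, mirroring the generate-peak-dissipate behaviour
    described for the smoke bomb. All parameter values are illustrative.
    """
    if elapsed < grow_time:                      # smoke still spreading
        return max_radius * (elapsed / grow_time)
    if elapsed < grow_time + hold_time:          # smoke at full coverage
        return max_radius
    if elapsed < grow_time + hold_time + fade_time:
        fade_elapsed = elapsed - grow_time - hold_time
        return max_radius * (1.0 - fade_elapsed / fade_time)
    return 0.0                                   # smoke has fully dissipated
```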
Referring to fig. 1, fig. 1 is an optional architecture diagram of a control system 100 for a virtual item provided in this embodiment of the present application, in order to support an exemplary application, a terminal 400 (an exemplary terminal 400-1 and a terminal 400-2 are shown) is connected to a server 200 through a network 300, where the network 300 may be a wide area network or a local area network, or a combination of the two, and data transmission is implemented using a wireless link.
The terminal 400 may be various types of user terminals such as a smart phone, a tablet computer, a notebook computer, and the like, and may also be a desktop computer, a game machine, a television, or a combination of any two or more of these data processing devices; the server 200 may be a single server configured to support various services, may also be configured as a server cluster, may also be a cloud server, and the like.
In practical implementation, the terminal 400 is installed and operated with an application program supporting a virtual scene, where the application program may be any one of a First-Person Shooter game (FPS), a third-person shooter game, a Multiplayer Online Battle Arena game (MOBA), a Virtual Reality (VR) application program, a Three-Dimensional (3D) map program, an Augmented Reality (AR) application program, a military simulation program, or a multiplayer gunfight survival game. The user uses the terminal 400 to control a virtual object located in the virtual scene to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the virtual object is a virtual character, such as a simulated character or an animated character.
In an exemplary scenario, the virtual object controlled by the terminal 400-1 and the target object controlled by the other terminal 400-2 are in the same virtual scenario, and the virtual object may interact with the target object in the virtual scenario. In some embodiments, the virtual object and the target object may be in a hostile relationship, for example, the virtual object and the target object belong to different teams and organizations, and between the virtual objects in the hostile relationship, the virtual objects may be in a antagonistic interaction on the land in a manner of shooting each other.
In an exemplary scenario, in a shooting game application, when the terminal 400 controls a virtual object to attack a target object, a picture of the virtual scene observed from the virtual scene at a virtual object view angle is presented on the terminal, and an operation control of a target virtual item is presented in the picture; responding to the trigger operation aiming at the operation control, and controlling a virtual object in the virtual scene to project a target virtual prop; when the target virtual prop falls to a first target position, presenting a process that the target virtual prop explodes into at least two sub-virtual props and generates virtual substances; the virtual material is used for reducing the visibility of a first virtual object in the virtual scene in the area where the virtual material is located; when the child virtual prop falls to the second target position, the process that the child virtual prop explodes to generate the virtual substance is presented, so that the sight of the target object in the area where the virtual substance is located is shielded, and the shooting source and the ray tracing track are shielded.
In an exemplary scene, in a military virtual simulation application, virtual scene technology enables trainees to experience a battlefield environment visually and aurally, to become familiar with the environmental characteristics of the area to be fought over, and to interact with objects in the virtual environment through the necessary equipment. A virtual battlefield environment can be created through background generation and image synthesis, using a corresponding three-dimensional battlefield environment graphic image library that includes the battlefield background, battlefield scenes, various weapons and equipment, fighters, and the like, producing a nearly realistic three-dimensional battlefield environment fraught with danger. In actual implementation, when the terminal controls a virtual object (such as a simulated fighter) to obscure the area where a target object (such as a simulated enemy) is located (such as playground B of school A occupied by the simulated enemy), a picture of the virtual scene observed from the viewing angle of the virtual object is presented on the terminal, and an operation control of a target virtual prop is presented in the picture; in response to a trigger operation for the operation control of the target virtual prop, a virtual object in the virtual scene is controlled to project the target virtual prop; when the target virtual prop falls to a first target position (such as the center position C of playground B), a process in which the target virtual prop explodes into at least two sub virtual props and generates a virtual substance is presented, the virtual substance being used to reduce the visibility of a first virtual object in the virtual scene within the area where the virtual substance is located; when a sub virtual prop falls to a second target position (such as corner D of playground B), a process in which the sub virtual prop explodes to generate the virtual substance is presented, so that the line of sight of the target object within the area where the virtual substance is located (the whole playground B) is blocked. That is, while the virtual substance exists over the whole playground B, the virtual object and the target object inside and outside cannot observe each other, and the shooting source and the ray-tracing trajectory are also blocked.
Referring to fig. 2, fig. 2 is an optional structural schematic diagram of an electronic device 500 provided in the embodiment of the present application, in an actual application, the electronic device 500 may be the terminal 400 or the server 200 in fig. 1, and a computer device for implementing the method for controlling a virtual item in the embodiment of the present application is described by taking the electronic device as the terminal 400 shown in fig. 1 as an example. The electronic device 500 shown in fig. 2 includes: at least one processor 510, memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communications among the components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 2.
The Processor 510 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating with other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the control device of the virtual prop provided in this embodiment may be implemented in a software manner, and fig. 2 illustrates a control device 555 of the virtual prop stored in a memory 550, which may be software in the form of programs and plug-ins, and includes the following software modules: the first rendering module 5551, the second rendering module 5552, the control module 5553 and the third rendering module 5554 are logical and thus may be arbitrarily combined or further split according to the implemented functions.
The functions of the respective modules will be explained below.
In other embodiments, the control Device of the virtual prop provided in this embodiment may be implemented in hardware, and as an example, the control Device of the virtual prop provided in this embodiment may be a processor in the form of a hardware decoding processor, which is programmed to execute the control method of the virtual prop provided in this embodiment, for example, the processor in the form of the hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic elements.
Next, a description is given of a control method of the virtual item provided in this embodiment, where in actual implementation, the control method of the virtual item provided in this embodiment may be implemented by a server or a terminal alone, or may be implemented by a server and a terminal in a cooperation manner.
Referring to fig. 3, fig. 3 is an optional flowchart of a method for controlling a virtual item provided in the embodiment of the present application, and the steps shown in fig. 3 will be described.
Step 101: and the terminal presents the operation control of the target virtual prop in the picture of the virtual scene.
In practical application, an application program supporting a virtual scene is installed on a terminal, when a user opens the application program on the terminal and the terminal runs the application program, the user can perform touch operation on the terminal, after the terminal detects the touch operation of the user, scene data of the virtual scene is acquired in response to the touch operation, a picture of the virtual scene is rendered based on the scene data of the virtual scene, and the rendered picture of the virtual scene is presented on the terminal.
Here, the frame of the virtual scene may be obtained by observing the virtual scene at a first person object viewing angle, or obtained by observing the virtual scene at a third person object viewing angle, where the frame of the virtual scene presents an interactive object and an object interactive environment in addition to the operation control presenting the target virtual item, for example, the virtual object and the target object in an opponent relationship interact with each other in the virtual scene.
In some embodiments, before the operation control of the target virtual item is presented in the screen of the virtual scene, the operation control of the target virtual item may also be determined by:
presenting a selection interface of an operation control comprising at least one virtual prop in a picture of a virtual scene; responding to the selection operation of the operation control in the selection interface, and presenting indication information of the virtual prop corresponding to the selected operation control, wherein the indication information is used for indicating the function of the virtual prop; and in response to the determination operation for the selected operation control, determining the selected operation control as the operation control of the target virtual prop.
Here, before the terminal presents the picture of the virtual scene or in the process of presenting the picture of the virtual scene, the terminal may present a selection interface for selecting the item, where the selection interface includes at least one operation control of the virtual item, where the operation control is an icon corresponding to the virtual item that can be used in the virtual scene, the selection interface may be a display interface occupying the whole terminal, or may be a partial display interface occupying the whole display interface of the terminal, and for example, the selection interface may also be suspended on the object interaction interface. When the user triggers the operation control of the target virtual prop in the selection interface, the indication information of the virtual prop corresponding to the selected operation control is presented, so that the user can know the function of the virtual prop corresponding to the selected operation control.
Referring to fig. 4A-4B, fig. 4A-4B are schematic diagrams of screen displays provided in this embodiment of the present application. In fig. 4A, operation controls of a plurality of virtual items are presented on a selection interface a0. When the user triggers an operation control a1, indication information a2 of the virtual item corresponding to the operation control a1 is presented, through which the user can learn the function of that virtual item. When the user triggers the confirmation function item A3, the terminal determines the selected operation control a1 as the operation control of the target virtual item and presents the screen of the virtual scene shown in fig. 4B, in which the operation control B1 of the target virtual item corresponding to the selected operation control a1 is presented.
In practical applications, the operation control of a target virtual item that has just been selected upon entering the virtual scene is, by default, unusable; that is, it is in an inactive state and is displayed in gray in the screen of the virtual scene. For example, in fig. 4B, the operation control B1 of the target virtual item displayed in gray in the screen of the virtual scene is in an inactive state.
In some embodiments, the operation control of the target virtual item is presented in the screen of the virtual scene by:
presenting the cooling time of the operation control corresponding to the target virtual prop in the picture of the virtual scene; when the cooling time is over, displaying an operation control of the target virtual prop by adopting a target display style; and the target display style is used for representing that the operation control of the target virtual prop is in an activated state.
Here, in practical applications, the control of the virtual prop may be activated in a time cooling manner, and the cooling time of the operation control of different virtual props is different, and generally speaking, the more powerful the virtual prop is, the longer the cooling time of the corresponding operation control is. When the selected operation control is presented in the picture of the virtual scene, the corresponding cooling time can be presented in the picture, and the operation control is activated when the cooling time is over, and the display style of the operation control in the activated state is different from that of the operation control in the inactivated state.
For example, in fig. 4B, the operation control B1 of the target virtual item in an inactive state is displayed in gray in the screen of the virtual scene, and the cooling time B2 corresponding to the operation control B1 is presented. When the countdown of the cooling time B2 reaches 0, the display style of the operation control B1 changes; for example, the operation control C in the activated state is displayed using the target display style shown in fig. 4C. Fig. 4C is a schematic diagram of a screen display provided in the embodiment of the present application, and in fig. 4C the operation control C in the activated state is displayed using a highlighted display style.
In some embodiments, after the cooling time of the operation control corresponding to the target virtual item is presented, the attack achievement obtained by the virtual object attacking the target object can also be presented; when the attack score reaches the score threshold, the cooling time is shortened.
Here, the cooling can be accelerated by attacking enemies: the better the attack score, the shorter the cooling time. For example, the cooling time for activating the operation control of the target virtual item is 60 seconds, and after the player kills an enemy the cooling time is shortened to 30 seconds. The attack score may indicate the number of target objects the virtual object has attacked, or the points or the resource value of virtual resources obtained by the virtual object from attacking target objects.
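As an illustration only (the patent does not prescribe a formula), one possible way to map the attack score to a shortened cooldown is sketched below; the function name and the per-kill reduction are assumptions:

```python
def remaining_cooldown(base_cooldown, kills, reduction_per_kill=30.0,
                       min_cooldown=0.0):
    """Shorten the operation control's cooldown as the attack score grows.

    base_cooldown: initial cooldown in seconds (e.g. 60 in the example above).
    kills: number of target objects the virtual object has defeated.
    Each kill removes reduction_per_kill seconds, never going below min_cooldown.
    """
    return max(min_cooldown, base_cooldown - kills * reduction_per_kill)
```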
Step 102: and responding to the triggering operation aiming at the operation control, and controlling the virtual object in the virtual scene to project the target virtual prop.
Here, when the user triggers the operation control in the activated state, the terminal controls, in response to the trigger operation, the virtual object in the virtual scene to project the target virtual prop. The projection may be the virtual object throwing the target virtual prop, for example throwing a smoke bomb by hand, or the virtual object launching the target virtual prop through another virtual prop, for example launching the smoke bomb through a launcher or a launching cannon.
Referring to fig. 5A, fig. 5A is a schematic view of a display interface provided in the embodiment of the present application, and when a user triggers operation control C in fig. 4C, in fig. 5A, virtual object a1 in a control virtual scene projects one target virtual prop a 2.
Step 103: and when the target virtual prop falls to the first target position, presenting a process that the target virtual prop explodes into at least two sub virtual props and generates virtual substances.
When the amount of the virtual substance reaches a threshold, the line of sight of a virtual object cannot pass through the area where the virtual substance is located; that is, a virtual object located on one side of that area, or inside it, cannot visually observe the picture within the area or on the other side of the area.
Taking the target virtual prop being a smoke bomb and the virtual substance being smoke as an example: the smoke generated by the explosion of the smoke bomb reduces the visibility of virtual objects with respect to the smoke area. When the concentration of the generated smoke has not reached a concentration threshold, a virtual object can only see other objects in the smoke area blurrily; when the concentration of the generated smoke reaches the concentration threshold, other objects in the smoke area cannot be seen at all, that is, the other objects in the smoke area are in a state of being blocked by the smoke.
In some embodiments, the terminal may present the process of the target virtual prop bursting into at least two sub-virtual props and generating the virtual substance by:
presenting a process that the target virtual prop explodes at the first target position into at least two sub virtual props and the at least two sub virtual props randomly move along different directions under the action of force generated by the explosion; and when the target virtual prop explodes at the first target position, presenting a process that virtual substances generated by explosion and the virtual substances spread to the surrounding space by taking the first target position as the center.
Here, the target virtual prop explodes after falling to the ground. When it explodes on the ground, it releases a large amount of energy and bursts into a plurality of sub virtual props, which randomly fly off in all directions under the action of the released energy. At the same time, the target virtual prop generates a large amount of virtual substance when it explodes at the landing point, and the virtual substance diffuses from the landing point into the surrounding space. The diffusion direction, diffusion area, and diffusion shape may be controlled randomly or according to preset control logic; optionally, depending on the shape of the diffusion region, the region may be approximated as a sphere or as a polyhedron such as a cuboid or a cube, so as to reduce the visibility within the diffusion region of the virtual substance.
For example, for a target virtual prop that is a smoke bomb, the smoke bomb generates smoke when it lands and explodes, the smoke diffuses into the surrounding space with the landing point as the center, and the smoke area may be a sphere, a cuboid, a cube, or the like centered on the landing point; meanwhile, a plurality of fragment smoke bombs are generated by the explosion, and these fragment smoke bombs randomly fly off in all directions under the energy released by the explosion.
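For illustration only (not the patent's specified implementation), the sketch below shows one way the random scattering of the fragment sub-props could be computed; all names and distance parameters are assumptions:

```python
import math
import random

def scatter_sub_props(first_target, count=4, min_dist=3.0, max_dist=6.0):
    """Pick random landing offsets for the sub-props around the burst point.

    first_target: (x, y) ground coordinates of the first explosion.
    Returns a list of (x, y) second target positions, each at a random
    bearing and distance from the burst point, mimicking fragments that
    fly off in random directions under the force of the explosion.
    """
    positions = []
    for _ in range(count):
        bearing = random.uniform(0.0, 2.0 * math.pi)
        dist = random.uniform(min_dist, max_dist)
        positions.append((first_target[0] + dist * math.cos(bearing),
                          first_target[1] + dist * math.sin(bearing)))
    return positions
```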
Referring to fig. 5B, fig. 5B is a schematic view of a display interface provided in this embodiment of the present application, when a virtual object a1 in fig. 5A is controlled to project a target virtual prop a2, the target virtual prop a2 lands to a first target position B0 in fig. 5B, and explodes at the first target position B0, and during explosion, a plurality of sub virtual props B1 are generated by explosion, and each sub virtual prop B1 randomly flies and moves in various directions; meanwhile, upon explosion, a large amount of virtual matter is generated, and the virtual matter spreads toward the surrounding space with the first target position B0 as the center, forming a target area capable of reducing visibility.
In some embodiments, the terminal may also present the process of the target virtual item exploding into at least two sub-virtual items and generating a virtual substance by:
presenting a process that the target virtual prop explodes at a first target position into at least two sub virtual props, and the at least two sub virtual props move to corresponding second target positions along a target track under the action of force generated by the explosion; and when the target virtual prop explodes at the first target position, presenting a process that virtual substances generated by explosion and the virtual substances spread to the surrounding space by taking the first target position as the center.
In some embodiments, the target trajectory may be determined by: in response to a selection operation for at least two blast locations in a target area centered on a first target location, determining the selected blast location as a second target location; and determining at least two target tracks taking the first target position as a starting point and each second target position as a landing point.
Here, when the target virtual prop explodes at the landing point, the generated virtual substance diffuses into the surrounding space with the landing point as the center, and several burst points are selected, within the surrounding area centered on the first target position, as the center points at which the sub virtual props explode, that is, as the second target positions.
Referring to fig. 5C, fig. 5C is a schematic view of a display interface provided in the embodiment of the present application, in fig. 5C, a target virtual prop explodes at a first target position C1, a plurality of second target positions C2 are selected in a surrounding area centered on the first target position C1, and a plurality of sub virtual props generated by the target virtual prop explosion move along a certain movement trajectory with the first target position C1 as a starting point and the second target positions C2 as a landing point, and fall to the second target position C2 to explode.
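The patent does not prescribe the shape of the movement trajectory from the first target position to a second target position; a simple parabolic arc is one common choice. The sketch below is illustrative only, and all names and parameters are assumptions:

```python
def parabolic_trajectory(start, end, apex_height=2.0, steps=20):
    """Sample a simple parabolic arc from the burst point to a landing point.

    start, end: (x, y, z) world positions (z up); apex_height is how far the
    arc rises above the straight line between them. Returns a list of points
    that an animation system could interpolate to move a sub-prop along a
    target trajectory like the one described above.
    """
    points = []
    for i in range(steps + 1):
        t = i / steps
        x = start[0] + (end[0] - start[0]) * t
        y = start[1] + (end[1] - start[1]) * t
        z = start[2] + (end[2] - start[2]) * t + apex_height * 4.0 * t * (1.0 - t)
        points.append((x, y, z))
    return points
```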
Step 104: and when the child virtual prop falls to the second target position, presenting the process that the child virtual prop explodes to generate the virtual substance.
In some embodiments, the process of a child virtual prop bursting to produce a virtual substance may be presented by: and presenting a process that the child virtual prop bursts to generate virtual substances and the virtual substances spread to the surrounding space by taking the second target position as the center.
Here, when the sub virtual prop falls to the second target position, a secondary explosion occurs, and virtual materials generated by the explosion diffuse into the surrounding space with the corresponding secondary explosion point as the center.
Referring to fig. 5D, fig. 5D is a schematic view of a display interface provided in the embodiment of the present application, and in fig. 5D, the sub virtual props explode at second target positions D1 to D3 and spread to the surrounding space with the explosion point as the center.
In the above manner, the virtual substance generated by the first explosion of the target virtual prop and the virtual substance generated by the secondary explosions of the plurality of sub virtual props diffuse into the surrounding space, greatly reducing the visibility within a very large area: the line of sight of a virtual object cannot pass through the area where the virtual substance is located, that is, a virtual object located on one side of that area, or inside it, cannot visually observe the picture within the area or on the other side of the area. For example, referring to fig. 5E, fig. 5E is a schematic view of a display interface provided in the embodiment of the present application. In fig. 5E, for a target virtual prop that is a smoke bomb, smoke is generated when the smoke bomb lands and explodes and diffuses into the surrounding space with the landing point as the center; a plurality of fragment smoke bombs are also generated by the explosion, and these fragment smoke bombs explode at secondary explosion points to generate a large amount of smoke, finally forming a very large smoke area. A virtual object within the smoke area is in a state of being blocked by the virtual substance.
In some embodiments, after the terminal presents the process of the child virtual prop bursting to produce the virtual substance, the target object is also displayed by:
determining a connection line between the position of the virtual object and the position of a target object, wherein the target object is an attack object of the virtual object; when the connecting line passes through the area where the virtual substance is located, displaying the target object in a target display mode; and the target display style is used for improving the visibility of the virtual object aiming at the target object.
Here, the target object is an object attacked by the virtual object, that is, the target object and the virtual object are in an adversarial relationship. In actual implementation, the terminal may obtain the spatial coordinates, in the virtual scene, of the currently controlled virtual object and of the target object attacked by the virtual object, determine the spatial coordinates of the virtual object in the virtual scene as the position of the virtual object, and determine the spatial coordinates of the target object in the virtual scene as the position of the target object. The area where the virtual substance is located is an area that diffuses from an explosion point into the surrounding space, such as a smoke area. When the connecting line between the position of the virtual object and the position of the target object passes through the smoke area, the virtual object and the target object are blocked from each other by the smoke area; in this case the target object is displayed in a target display style, which is used to improve the visibility of the target object to the virtual object. Even though the virtual object and the target object are separated by the smoke area, the virtual object that uses the target virtual prop can still see the target object as if seeing through the smoke, whereas a target object that does not use the virtual prop cannot see the virtual object located in the area where the virtual substance is located, creating a favorable combat opportunity for the team.
In practical applications, if at least one of the virtual object and the target object is located in the area where the virtual substance is located, or both the virtual object and the target object are located at two sides of the area where the virtual substance is located, it can be determined that a connecting line between the position of the virtual object and the position of the target object passes through the area where the virtual substance is located.
In some embodiments, after determining the connection line between the position of the virtual object and the position of the target object, the terminal may further determine that the connection line passes through the area where the virtual substance is located by:
acquiring the central position and the radius of the area plane where the virtual substance is located; determining the position relation between the connecting line and the area where the virtual substance is located based on the central position and the radius; and when the position relation is an intersecting relation, determining that the connecting line passes through the area where the virtual substance is located.
The method comprises the steps of taking an explosion center as a center, taking a preset distance as a sphere of a radius, obtaining a center position which is positioned on the same plane with a virtual object and a target object, obtaining a distance from the center position to a connecting line between the position of the virtual object and the position of the target object, determining that the position relation between the position of the virtual object and the position of the target object is an intersecting relation when the distance is smaller than the radius, determining that the position relation is a tangent relation when the distance is equal to the radius, determining that the position relation is a separating relation when the distance is larger than the radius, and determining that the connecting line between the position of the virtual object and the position of the target object does not pass through the area where the virtual object is positioned.
In practical applications, the time elapsed since the virtual substance was generated can be obtained in real time, the radius of the plane of the area where the virtual substance is located can be derived from that elapsed time in real time, and the position relation between the connecting line (between the position of the virtual object and the position of the target object) and the area where the virtual substance is located can then be determined from the obtained radius, so as to judge whether the connecting line passes through the area. This is necessary because the range of the area changes over time: after a smoke bomb explodes, the smoke gradually spreads outward from the explosion point to form a smoke area, and after expanding to its maximum extent it gradually dissipates until it disappears, so the radius of the smoke area first increases and then decreases during the period from its generation to its dissipation.
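As an illustration of the geometric test described above, the sketch below checks whether the connecting line intersects the circular cross-section of the smoke area, using a radius that first grows and then shrinks over time. The function names and the linear grow-and-dissipate model are assumptions introduced only for the example, not the patented implementation.

import math

def smoke_radius_at(elapsed, max_radius=8.0, grow_time=2.0, total_time=12.0):
    """Assumed grow-then-dissipate model for the smoke radius over time."""
    if elapsed <= 0 or elapsed >= total_time:
        return 0.0
    if elapsed < grow_time:
        return max_radius * elapsed / grow_time                         # expanding phase
    return max_radius * (total_time - elapsed) / (total_time - grow_time)  # dissipating phase

def point_segment_distance(p, a, b):
    """Distance from point p to the segment a-b (all 2D tuples on the area plane)."""
    ax, ay = a
    bx, by = b
    px, py = p
    abx, aby = bx - ax, by - ay
    ab_len_sq = abx * abx + aby * aby
    if ab_len_sq == 0.0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the line a-b, clamped to the segment.
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / ab_len_sq))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def line_crosses_smoke(obj_pos, target_pos, smoke_center, elapsed):
    """True when the connecting line intersects the smoke circle at this moment."""
    radius = smoke_radius_at(elapsed)
    dist = point_segment_distance(smoke_center, obj_pos, target_pos)
    return dist < radius     # smaller: intersecting; equal: tangent; greater: separate

# Example: the two fighters are on opposite sides of a smoke cloud 3 s after it formed.
print(line_crosses_smoke((0.0, 0.0), (20.0, 0.0), (10.0, 2.0), elapsed=3.0))  # True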
In some embodiments, after the process of the child virtual prop bursting to generate the virtual substance is presented, when the target object and the virtual object are both located in the area where the virtual substance is located, the target object and the virtual object are displayed in a distinguishing manner, which improves the visibility and recognizability of the target object and of friendly virtual objects to the virtual object.
Referring to fig. 5F, fig. 5F is a schematic view of a display interface provided in an embodiment of the present application. In fig. 5F, a virtual object F1 and a target object F2 located in the area where the virtual substance is located are displayed in different display styles; for example, within the smoke, the virtual object and teammates belonging to the same team are displayed in one color, while enemies (i.e., target objects) belonging to the other team are displayed in a different color.
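A minimal sketch of the distinguishing display shown in fig. 5F might look as follows; the concrete colors and the function name are assumptions, since the text only requires that teammates and enemies inside the smoke be rendered in different styles.

def outline_color(observer_team, other_team, in_smoke):
    """Pick a display style for another character seen inside the smoke area.

    Teammates share one highlight color, enemies (target objects) get another,
    and outside the smoke no special outline is drawn. Colors are placeholders.
    """
    if not in_smoke:
        return None                      # normal rendering, no outline
    if other_team == observer_team:
        return (0, 128, 255)             # teammates: blue outline (assumed)
    return (255, 64, 64)                 # enemies: red outline (assumed)

print(outline_color("alpha", "alpha", in_smoke=True))   # teammate color
print(outline_color("alpha", "bravo", in_smoke=True))   # enemy color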
In some embodiments, after the process of the child virtual prop bursting to generate the virtual substance is presented, the terminal further presents a process in which the virtual object moves at a first rate in the area where the virtual substance is located; when the virtual object is attacked by the target object, the terminal presents a process in which the virtual object moves at a second rate, where the second rate is greater than the first rate.
Here, when the virtual object using the target virtual prop is attacked by the target object in the area where the virtual substance is located, the moving speed of the virtual object is increased. The target virtual prop therefore not only blocks the enemy's view and conceals the virtual object that uses it, but also improves the combat capability of the virtual object when it is attacked by the target object.
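The rate change described here can be expressed in a couple of lines; the base rate and boost factor below are placeholder values, the only property the text requires being that the second rate is greater than the first.

def movement_rate(in_smoke_area, under_attack, base_rate=3.0, boost=1.6):
    """Return the current movement rate of the virtual object.

    Inside the smoke the object moves at a first rate; if it is attacked by the
    target object there, it moves at a greater second rate. The base_rate and
    boost factor are illustrative values, not taken from the patent.
    """
    if in_smoke_area and under_attack:
        return base_rate * boost          # second rate > first rate
    return base_rate                      # first rate

assert movement_rate(True, True) > movement_rate(True, False)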
Next, continuing with reference to fig. 6, a method for controlling a virtual prop that is cooperatively implemented by a terminal and a server and applied to a virtual scene of a game is described; fig. 6 is an optional flowchart of the method for controlling a virtual prop, and the method is described with reference to the steps shown in fig. 6.
Step 201: the terminal presents a game start key.
Step 202: and responding to the triggering operation aiming at the starting key, and sending an acquisition request of scene data of the virtual scene to the server.
Here, the acquisition request carries a virtual scene identifier, and is used to acquire scene data of a virtual scene.
Step 203: the server acquires scene data of the virtual scene based on the acquisition request.
Here, the server parses the acquisition request to obtain a virtual scene identifier, and acquires scene data of the virtual scene based on the virtual scene identifier.
Step 204: and the server sends the scene data to the terminal.
Step 205: and the terminal renders pictures based on the received scene data and presents the pictures of the virtual scene.
Step 206: and the terminal presents a selection interface of an operation control comprising at least one virtual prop in the picture of the virtual scene.
Step 207: and the terminal responds to the selection operation aiming at the operation control in the selection interface and sends a data acquisition request to the server.
Step 208: and the server acquires the cooling time of the control of the target virtual prop based on the data request.
Step 209: and the server returns the cooling time of the control of the target virtual item to the terminal.
Step 210: and the terminal gray level presents the operation control of the target virtual prop and presents the cooling time.
Step 211: and when the cooling time is over, the terminal highlights the operation control of the target virtual prop.
Step 212: and the terminal responds to the trigger operation aiming at the operation control and controls the virtual object in the virtual scene to project the target virtual prop.
Step 213: when the target virtual prop falls to the first target position, the terminal presents a process that the target virtual prop explodes into at least two sub virtual props and generates virtual substances.
Step 214: when the child virtual prop falls to the second target position, the terminal presents the process that the child virtual prop bursts to generate virtual substances.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
In game applications with virtual scenes, a smoke bomb explodes after being thrown to the ground and generates a cloud of smoke, and virtual objects inside and outside the smoke cannot see each other, so the smoke bomb can be used to confuse enemies. In the related art, one operation on the terminal by the user can only control one smoke bomb to explode once, the smoke area generated by a single explosion is limited, and the need to confuse enemies over a wide area cannot be met; if the generated smoke is to cover a large area, the user has to operate the terminal many times to launch multiple smoke bombs, so the human-computer interaction efficiency is low.
Therefore, an embodiment of the present application provides a method for controlling a virtual prop: a virtual object is controlled to project a target virtual prop, such as a smoke chip; when the target virtual prop lands at a first target position, it explodes into a plurality of sub virtual props (i.e., a plurality of fragment grenades) and generates a virtual substance (i.e., smoke); and when a sub virtual prop lands at a second target position, it also explodes to generate the virtual substance. In this way, multiple explosions of the virtual prop are realized through one trigger operation, multiple portions of the virtual substance are generated to form multiple clouds of smoke, and the interaction efficiency of interactive operations implemented based on the virtual prop is improved.
Referring to fig. 7, fig. 7 is an optional flowchart of a method for controlling a virtual item provided in the embodiment of the present application, and the steps shown in fig. 7 will be described.
Step 301: and the terminal controls the virtual object to equip the target virtual prop.
Here, the terminal is a terminal corresponding to a first virtual object, a picture presented by the terminal is obtained by observing a virtual scene from a first virtual object viewing angle, the first virtual object is a virtual object in the virtual scene corresponding to the current user account, the virtual scene corresponds to a shooting game scene, and the first virtual object and a second virtual object (i.e., a target object) are killed in the virtual scene.
When a user opens an application program of a virtual scene on the terminal, a selection interface comprising at least one virtual prop is presented in a picture of the virtual scene, the user can select a target virtual prop, namely a smoke chip, from a plurality of virtual props, when the user selects the smoke chip, the terminal responds to the selection operation to present an operation control of the smoke chip, and simultaneously the operation control is switched to a cooling time corresponding to an activation state, and when the operation control is not activated, the operation control is displayed in grey in the picture.
When a game is started, the terminal sends a data acquisition request to the server, the server acquires and returns the cooling time of the control to the terminal based on the data request, the cooling time is displayed at the terminal, and the cooling time is gradually reduced along with the lapse of time. In addition, the time cooling can be accelerated by attacking enemies, and the server updates the cooling time based on the data of the attacking enemies, wherein the cooling time is shorter as the obtained attack performance is better.
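As a minimal sketch of the server-side update just described, the cooldown could be shortened once the attack performance passes a threshold; the threshold, per-kill reduction and floor used below are assumed values, not taken from the patent.

def updated_cooldown(base_cooldown, kills, kill_threshold=3,
                     reduction_per_kill=0.5, min_cooldown=1.0):
    """Server-side cooldown update driven by attack performance.

    Once the kill count reaches the threshold, each kill shortens the cooldown;
    all numeric values are placeholders chosen for the example.
    """
    if kills < kill_threshold:
        return base_cooldown
    return max(min_cooldown, base_cooldown - kills * reduction_per_kill)

print(updated_cooldown(10.0, kills=2))   # 10.0, threshold not reached
print(updated_cooldown(10.0, kills=5))   # 7.5, cooldown accelerated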
Step 302: and judging whether the cooling time is finished or not.
When the cooling time is over, the operation control is activated, and step 303 is executed; otherwise, step 301 is performed.
Step 303: and the terminal highlights the operation control in the picture of the virtual scene.
Here, the display style of the operation control in the activated state is different from the display style of the operation control in the inactivated state.
Step 304: and the terminal responds to the trigger operation aiming at the operation control and controls the virtual object in the virtual scene to project the target virtual prop.
Step 305: and judging whether the target virtual prop falls to the ground or not.
Here, if the smoke chip is determined to be on the ground, if the smoke chip falls to the first target position, step 306 is executed; otherwise, step 304 is performed.
Step 306: and presenting the process that the target virtual prop explodes into at least two sub virtual props at the first target position and generates virtual substances.
Here, when the smoke chip falls to the landing point, a process that the smoke chip explodes to generate smoke and explodes into a plurality of smoke fragments is presented in the picture.
Step 307: and judging whether the child virtual prop falls to the ground or not.
Here, if the smoke fragments are determined to be fallen, for example, when the smoke fragments fall to the second target position, step 308 is executed; otherwise, step 306 is performed.
Step 308: and presenting the process that the sub virtual prop explodes at the second target position to generate the virtual substance.
Here, the second target position may be a position where the smoke fragments randomly land, or may be several action points selected in a surrounding area centered on a landing point (i.e., the first target position) of the target virtual prop.
Step 309: and judging whether the target object enters the area where the virtual substance is located.
Here, the target object is an object attacked by a virtual object using the target virtual item, and when the target object enters the smoke region, step 310 is executed; otherwise, step 308 is performed.
Step 310: the target object is displayed in a perspective style.
Step 311: and judging whether the virtual substance disappears.
When the smoke screen disappears, go to step 312; otherwise, step 310 is performed.
Step 312: and (5) recovering to be normal.
Through the mode, through the interaction between the user and the terminal, the virtual object is controlled to project the smoke chip, when the smoke chip explodes to generate a group of smoke after falling to the ground, a plurality of smoke fragment grenades are generated to randomly fly to all positions, each smoke fragment explodes to generate secondary smoke after falling to the ground, and a plurality of smoke fragment explosions can form a plurality of groups of smoke, so that the visual field of enemies is completely shielded, and the human-computer interaction efficiency is improved.
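The overall two-stage explosion of steps 304 to 308 can be summarized in a few lines. The fragment count, the scatter radius and the random choice of second target positions (one of the two options described above for the second target position) are illustrative assumptions, not values from the patent.

import random

def detonate_smoke_chip(landing_point, fragment_count=5, scatter=6.0):
    """One trigger, two explosion stages.

    The primary explosion at the first target position creates a smoke cloud and
    a set of fragment grenades; each fragment lands at a second target position
    (here chosen randomly around the landing point) and creates a secondary
    smoke cloud.
    """
    smoke_clouds = [landing_point]                        # primary smoke (step 306)
    for _ in range(fragment_count):
        dx = random.uniform(-scatter, scatter)
        dz = random.uniform(-scatter, scatter)
        second_target = (landing_point[0] + dx, landing_point[1] + dz)
        smoke_clouds.append(second_target)                # secondary smoke (step 308)
    return smoke_clouds

clouds = detonate_smoke_chip((12.0, -4.0))
print(len(clouds), "smoke clouds from a single trigger operation")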
Continuing with the exemplary structure in which the control device 555 of the virtual prop provided in the embodiment of the present application is implemented as software modules, in some embodiments, as shown in fig. 8 (a schematic structural diagram of the control device of the virtual prop provided in the embodiment of the present application), the software modules stored in the control device 555 of the virtual prop in the memory 550 may include:
a first presentation module 5551, configured to present an operation control of the target virtual item in a screen of the virtual scene;
a control module 5552, configured to control a virtual object in the virtual scene to project the target virtual prop in response to a trigger operation for the operation control;
the second presentation module 5553 is configured to present a process that the target virtual prop explodes into at least two sub virtual props and generates a virtual substance when the target virtual prop falls to the first target location;
the virtual substance is used for reducing the visibility of a virtual object in the virtual scene to the area where the virtual substance is located;
a third presenting module 5554, configured to present a process of the child virtual prop bursting to generate a virtual substance when the child virtual prop falls to the second target location.
In some embodiments, the apparatus further comprises a selection module configured to, prior to the presenting of the operation control of the target virtual prop in the picture of the virtual scene,
presenting a selection interface of an operation control comprising at least one virtual prop in a picture of a virtual scene;
responding to the selection operation of an operation control in the selection interface, and presenting indication information of a virtual item corresponding to the selected operation control, wherein the indication information is used for indicating the function of the virtual item;
and in response to the determination operation of the selected operation control, determining the selected operation control as the operation control of the target virtual prop.
In some embodiments, the first presentation module is further configured to present, in a screen of a virtual scene, a cooling time of an operation control corresponding to the target virtual item;
when the cooling time is over, displaying an operation control of the target virtual prop by adopting a target display style;
and the target display style is used for representing that the operation control of the target virtual prop is in an activated state.
In some embodiments, the first presentation module is further configured to present an attack achievement obtained by the virtual object attacking the target object;
shortening the cooling time when the attack performance reaches a performance threshold.
In some embodiments, the second presentation module is further configured to present a process in which the target virtual prop explodes into at least two sub virtual props at the first target location, and the at least two sub virtual props randomly move in different directions under the force generated by the explosion;
and when the target virtual prop explodes at the first target position, presenting a virtual substance generated by explosion and a process that the virtual substance spreads to the surrounding space by taking the first target position as the center.
In some embodiments, the second presentation module is further configured to present a process in which the target virtual prop explodes into at least two sub virtual props at the first target location, and the at least two sub virtual props move to corresponding second target locations along a target trajectory under a force generated by the explosion;
and when the target virtual prop explodes at the first target position, presenting a virtual substance generated by explosion and a process that the virtual substance spreads to the surrounding space by taking the first target position as the center.
In some embodiments, the apparatus further comprises a trajectory determination module for determining a selected blast location as a second target location in response to a selection operation for at least two blast locations in the target area centered on the first target location;
and determining at least two target tracks which take the first target position as a starting point and each second target position as a landing point.
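One way to realize the target tracks named in this module is to precompute an arc from the first target position to each selected second target position. The sketch below samples a simple symmetric parabola, which is an assumption made purely for illustration; the function name and the apex_height parameter are not from the patent.

def sample_trajectory(start, end, apex_height=3.0, steps=10):
    """Sample points along an assumed parabolic arc from the first target
    position (start) to a second target position (end).

    Positions are (x, y, z) tuples with y up; the arc shape (a symmetric
    parabola with a fixed apex_height) is an illustrative choice.
    """
    points = []
    for i in range(steps + 1):
        t = i / steps
        x = start[0] + (end[0] - start[0]) * t
        z = start[2] + (end[2] - start[2]) * t
        # Linear blend of the endpoint heights plus a parabolic lift that is
        # zero at both ends and equals apex_height at t = 0.5.
        y = start[1] + (end[1] - start[1]) * t + 4.0 * apex_height * t * (1.0 - t)
        points.append((x, y, z))
    return points

arc = sample_trajectory((0.0, 0.0, 0.0), (6.0, 0.0, 4.0))
print(arc[0], arc[5], arc[-1])   # start point, highest point, landing point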
In some embodiments, the third presenting module is further configured to present a process in which the child virtual prop explodes to generate a virtual substance, and the virtual substance spreads to the surrounding space around the second target position.
In some embodiments, the apparatus further comprises a fourth presentation module, configured to, after the process of presenting the burst of the child virtual prop to produce a virtual substance, determine a connecting line between the position of the virtual object and the position of a target object, where the target object is an attack object of the virtual object;
when the connecting line passes through the area of the virtual substance, displaying the target object in a target display mode;
the target display style is used for improving the visibility of the virtual object aiming at the target object.
In some embodiments, the apparatus further includes a position relation determining module, configured to, after determining a connection line between the position of the virtual object and the position of the target object, obtain a center position and a radius of an area plane where the virtual substance is located;
determining the position relation between the connecting line and the area where the virtual substance is located based on the central position and the radius;
when the position relation is an intersection relation, determining that the connecting line passes through the area where the virtual substance is located.
In some embodiments, the fourth presentation module is further configured to display the target object and the virtual object in a distinguishing manner when the target object and the virtual object are both located in the area where the virtual substance is located;
wherein the target object is an attack object of the virtual object.
In some embodiments, the apparatus further comprises a fifth presentation module, configured to present, after the process of presenting the burst of the child virtual item to create a virtual substance, a process of moving the virtual object at a first rate in an area where the virtual substance is located;
when the virtual object is attacked by the target object, presenting a process that the virtual object moves at a second rate;
wherein the target object is an attack object of the virtual object, and the second rate is greater than the first rate.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the control method of the virtual prop in the embodiment of the present application.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to execute the control method of the virtual prop provided by the embodiments of the present application.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, the following technical effects are at least achieved through the embodiments of the present application:
1) Double explosion of the virtual prop is realized through a single trigger operation on the operation control: when the virtual prop explodes and generates the virtual substance, the plurality of sub virtual props produced by the explosion also explode and generate the virtual substance. Compared with a mode in which only the virtual prop itself generates the virtual substance, this greatly increases the amount of virtual substance generated and the coverage of the virtual substance, improves the interaction efficiency of interactive operations implemented based on the virtual prop, and improves the user's interactive experience with the virtual prop in the virtual scene;
2) the virtual object using the target virtual prop can still see the target object in a perspective style, while the target object, which does not use the virtual prop, cannot see the virtual object located in the area where the virtual substance is located, creating a favorable tactical opportunity for the team;
3) when the virtual object using the target virtual prop is attacked by the target object in the area where the virtual substance is located, the moving speed of the virtual object is increased, so the target virtual prop not only blocks the enemy's view and conceals the virtual object that uses it, creating a favorable tactical opportunity, but also improves the combat capability of the virtual object when it is attacked by the target object.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (15)

1. A control method of a virtual prop is characterized by comprising the following steps:
presenting an operation control of the target virtual item in a picture of the virtual scene;
in response to the triggering operation aiming at the operation control, controlling a virtual object in the virtual scene to project the target virtual prop;
when the target virtual prop falls to a first target position, presenting a process that the target virtual prop explodes into at least two sub virtual props and generates virtual substances;
the virtual substance is used for reducing the visibility of a virtual object in the virtual scene to the area where the virtual substance is located;
and when the child virtual prop falls to a second target position, presenting the process that the child virtual prop explodes to generate a virtual substance.
2. The method of claim 1, wherein prior to presenting the operational control of the target virtual item in the screen of the virtual scene, the method further comprises:
presenting a selection interface of an operation control comprising at least one virtual prop in a picture of a virtual scene;
responding to the selection operation of an operation control in the selection interface, and presenting indication information of a virtual item corresponding to the selected operation control, wherein the indication information is used for indicating the function of the virtual item;
and in response to the determination operation of the selected operation control, determining the selected operation control as the operation control of the target virtual prop.
3. The method of claim 1, wherein the operation control for presenting the target virtual item in the screen of the virtual scene comprises:
presenting the cooling time of the operation control corresponding to the target virtual prop in the picture of the virtual scene;
when the cooling time is over, displaying an operation control of the target virtual prop by adopting a target display style;
and the target display style is used for representing that the operation control of the target virtual prop is in an activated state.
4. The method of claim 3, wherein after the presenting of the cooling time of the operational control corresponding to the target virtual prop, the method further comprises:
presenting the attack score obtained by the virtual object attacking the target object;
shortening the cooling time when the attack performance reaches a performance threshold.
5. The method of claim 1, wherein said process of presenting said target virtual prop to explode into at least two sub-virtual props and create a virtual substance comprises:
presenting a process that the target virtual prop explodes at the first target position into at least two sub virtual props which randomly move along different directions under the action of force generated by the explosion;
and when the target virtual prop explodes at the first target position, presenting a virtual substance generated by explosion and a process that the virtual substance spreads to the surrounding space by taking the first target position as the center.
6. The method of claim 1, wherein said process of presenting said target virtual prop to explode into at least two sub-virtual props and create a virtual substance comprises:
presenting a process that the target virtual prop explodes and cracks at the first target position into at least two sub virtual props, and the at least two sub virtual props move to corresponding second target positions along a target track under the action of force generated by the explosion and the crack;
and when the target virtual prop explodes at the first target position, presenting a virtual substance generated by explosion and a process that the virtual substance spreads to the surrounding space by taking the first target position as the center.
7. The method of claim 6, wherein the method further comprises:
determining the selected blast location as a second target location in response to a selection operation for at least two blast locations in a target area centered on the first target location;
and determining at least two target tracks which take the first target position as a starting point and each second target position as a landing point.
8. The method of claim 1, wherein said process of presenting said child virtual prop burst to create a virtual substance comprises:
and presenting a process that the sub virtual prop explodes to generate a virtual substance and the virtual substance spreads to the surrounding space by taking the second target position as the center.
9. The method of claim 1, wherein after the process of presenting the child virtual item to explode to create a virtual substance, the method further comprises:
determining a connection line between the position of the virtual object and the position of a target object, wherein the target object is an attack object of the virtual object;
when the connecting line passes through the area of the virtual substance, displaying the target object in a target display mode;
the target display style is used for improving the visibility of the virtual object aiming at the target object.
10. The method of claim 9, wherein after determining the line between the location of the virtual object and the location of the target object, the method further comprises:
acquiring the central position and the radius of the area plane where the virtual substance is located;
determining the position relation between the connecting line and the area where the virtual substance is located based on the central position and the radius;
when the position relation is an intersection relation, determining that the connecting line passes through the area where the virtual substance is located.
11. The method of claim 1, wherein after the process of presenting the child virtual item to explode to create a virtual substance, the method further comprises:
when the target object and the virtual object are both located in the area where the virtual substance is located, the target object and the virtual object are displayed in a distinguishing mode;
wherein the target object is an attack object of the virtual object.
12. The method of claim 1, wherein after the process of presenting the child virtual item to explode to create a virtual substance, the method further comprises:
presenting a process that the virtual object moves at a first speed in the area where the virtual substance is located;
when the virtual object is attacked by the target object, presenting a process that the virtual object moves at a second rate;
wherein the target object is an attack object of the virtual object, and the second rate is greater than the first rate.
13. An apparatus for controlling a virtual prop, the apparatus comprising:
the first presentation module is used for presenting an operation control of the target virtual item in a picture of a virtual scene;
the control module is used for responding to the triggering operation aiming at the operation control and controlling a virtual object in the virtual scene to project the target virtual prop;
the second presentation module is used for presenting the process that the target virtual prop explodes into at least two sub-virtual props and generates virtual substances when the target virtual prop falls to the first target position;
the virtual substance is used for reducing the visibility of a virtual object in the virtual scene to the area where the virtual substance is located;
and the third presentation module is used for presenting the process that the sub virtual prop explodes and cracks to generate a virtual substance when the sub virtual prop falls to the second target position.
14. An electronic device, comprising:
a memory for storing executable instructions;
a processor, configured to execute the executable instructions stored in the memory, and implement the control method for the virtual prop according to any one of claims 1 to 12.
15. A computer-readable storage medium, storing executable instructions for implementing the method of controlling a virtual item of any one of claims 1 to 12 when executed by a processor.
CN202010956339.4A 2020-09-11 2020-09-11 Control method, device and equipment of virtual prop and computer readable storage medium Pending CN112057864A (en)
