CN117654038A - Interactive processing method and device for virtual scene, electronic equipment and storage medium - Google Patents

Interactive processing method and device for virtual scene, electronic equipment and storage medium

Info

Publication number
CN117654038A
Authority
CN
China
Prior art keywords
virtual
prop
virtual object
target
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211011522.2A
Other languages
Chinese (zh)
Inventor
詹恒顺
黄冠林
潇如
陈璟瑄
黄佳玮
赵祺
陈嘉恺
陈冠宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211011522.2A priority Critical patent/CN117654038A/en
Publication of CN117654038A publication Critical patent/CN117654038A/en
Pending legal-status Critical Current


Abstract

The application provides an interactive processing method and apparatus for a virtual scene, an electronic device, and a storage medium. The method includes: displaying a first virtual object in a virtual scene; in response to a trigger operation for an attack prop, controlling the first virtual object to attack using the attack prop; in response to a second virtual object in the virtual scene being hit by the attack prop, displaying the target prop held by the second virtual object falling from the second virtual object, and controlling the second virtual object to switch from holding the target prop to holding a shooting prop; and in response to a virtual base in the virtual scene being hit by the attack prop, displaying a virtual barrier surrounding the virtual base, where the virtual barrier is used to block at least one of the following: a virtual object in the virtual scene entering the virtual base, and the target prop being stored into the virtual base. The method and apparatus of the application enrich the ways in which the attack prop can be applied, thereby improving the efficiency of human-computer interaction.

Description

Interactive processing method and device for virtual scene, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer man-machine interaction technologies, and in particular, to an interaction processing method and apparatus for a virtual scene, an electronic device, and a storage medium.
Background
The man-machine interaction technology of the virtual scene based on the graphic processing hardware can realize diversified interactions among virtual objects controlled by users or artificial intelligence according to actual application requirements, and has wide practical value. For example, in a virtual scene such as a game, a real fight process between virtual objects can be simulated.
Taking shooting games as an example, virtual electromagnetic pulse (EMP) grenades appear in many shooting games with modern settings and play an important role. For example, when a thrown virtual EMP grenade explodes, the energy particles it releases destroy electronic defense devices within a certain range of the enemy camp.
However, in practice the way the virtual EMP grenade is applied is relatively fixed: it mostly revolves around attacking and countering modern electronic defense equipment in the game, and is typically used by an attacker to counter the defender's electronic defense equipment. Its room for play is therefore limited by the defender's deployment, it can only be used as a late, reactive tool, and it is unsuitable for most in-game scenes, resulting in poor human-computer interaction efficiency.
Disclosure of Invention
The embodiment of the application provides an interactive processing method, an interactive processing device, electronic equipment, a computer readable storage medium and a computer program product for a virtual scene, which can enrich application modes of attack props, thereby improving the efficiency of man-machine interaction.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an interactive processing method of a virtual scene, which comprises the following steps:
displaying a first virtual object in a virtual scene, wherein the first virtual object holds an attack prop;
responding to triggering operation for the attack prop, and controlling the first virtual object to attack by using the attack prop;
in response to a second virtual object in the virtual scene being hit by the attack prop, displaying the target prop held by the second virtual object falling from the second virtual object, and controlling the second virtual object to switch from holding the target prop to holding a shooting prop, where the target prop is a target contended for by a first camp and a second camp in a match of the virtual scene, the target prop is used to determine the winning camp of the match, the first virtual object belongs to the first camp, and the second virtual object belongs to the second camp;
in response to a virtual base in the virtual scene being hit by the attack prop, displaying a virtual barrier surrounding the virtual base, where the virtual barrier is used to block at least one of the following: a virtual object in the virtual scene entering the virtual base, and the target prop being stored into the virtual base.
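The two-branch effect described in the claims above — hitting a virtual object drops its target prop and forces a switch to a shooting prop, while hitting a virtual base raises a barrier that blocks entry and prop storage — can be sketched as follows. This is a minimal illustrative Python sketch; all class and field names are assumptions, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    held_prop: str = "target_prop"  # prop currently held by this object

@dataclass
class VirtualBase:
    barrier_active: bool = False    # whether the surrounding barrier is shown

def on_hit(entity):
    """Apply the attack prop's effect depending on what was hit."""
    if isinstance(entity, VirtualObject):
        dropped = entity.held_prop          # the target prop falls from the object
        entity.held_prop = "shooting_prop"  # forced switch to a shooting prop
        return dropped
    if isinstance(entity, VirtualBase):
        entity.barrier_active = True        # barrier blocks entry and prop storage
        return None
```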
In the above scheme, the skill chip has a cooling time; before responding to the trigger operation for the skill chip, the method further includes: when the interval between a first time and a second time is less than the cooling time, masking the response to the trigger operation for the skill chip; when the interval between the first time and the second time is greater than or equal to the cooling time, determining that the trigger operation for the skill chip is to be responded to; where the first time is the time the skill chip was last applied to the shooting prop, and the second time is the time the trigger operation is received.
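The cooldown check in this scheme can be sketched as follows. This is a hedged Python sketch: the function name and the 5-second cooling time are illustrative assumptions; the patent specifies only the comparison between the interval and the cooling time.

```python
COOLING_TIME = 5.0  # seconds; illustrative value, not from the patent

def should_respond(last_applied_at, triggered_at, cooling_time=COOLING_TIME):
    """Respond to a skill-chip trigger only once the cooldown has elapsed.

    last_applied_at: time the chip was last applied to the shooting prop.
    triggered_at:    time the trigger operation was received.
    """
    return (triggered_at - last_applied_at) >= cooling_time
```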
In the above scheme, the number of target props dropped by the second virtual object is positively correlated with the degree of damage the second virtual object takes when hit by the attack prop; the display duration of the virtual barrier is positively correlated with the degree of damage the virtual base takes when hit by the attack prop.
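A minimal sketch of the two positive correlations described above, assuming simple linear relationships (the patent requires only positive correlation; the coefficients and function names are invented for illustration):

```python
def drops_for_damage(damage, base_drop=1, per_damage=0.1):
    """Number of target props dropped grows with the damage taken."""
    return base_drop + int(damage * per_damage)

def barrier_duration(damage, per_damage=0.5):
    """Barrier display duration grows with the damage dealt to the base."""
    return damage * per_damage
```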
The embodiment of the application provides an interactive processing device for a virtual scene, which comprises:
the display module is used for displaying a first virtual object in the virtual scene, wherein the first virtual object holds an attack prop;
the control module is used for responding to the triggering operation for the attack prop and controlling the first virtual object to attack by using the attack prop;
the display module is further configured to display that a target prop held by a second virtual object in the virtual scene falls from the second virtual object in response to the second virtual object being hit by the attack prop;
the control module is further configured to control the second virtual object to switch from holding the target prop to holding the shooting prop, where the target prop is a target contended for by a first camp and a second camp in a match of the virtual scene, the target prop is used to determine the winning camp of the match, the first virtual object belongs to the first camp, and the second virtual object belongs to the second camp;
the display module is further configured to display a virtual barrier surrounding the virtual base in response to the virtual base in the virtual scene being hit by the attack prop, where the virtual barrier is used to block at least one of the following: a virtual object in the virtual scene entering the virtual base, and the target prop being stored into the virtual base.
The embodiment of the application provides an interactive processing method of a virtual scene, which comprises the following steps:
displaying a first virtual object in a virtual scene, wherein the first virtual object holds an attack prop;
responding to triggering operation for the attack prop, and controlling the first virtual object to attack by using the attack prop;
and in response to a second virtual object in the virtual scene being hit by the attack prop, displaying the target prop held by the second virtual object falling from the second virtual object, and controlling the target prop to switch from a first display mode to a second display mode, where the second display mode indicates that the target prop is in a state in which it cannot be picked up.
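The display-mode switch described above can be sketched as a two-state enum. An illustrative Python sketch; the reading of the second display mode as a non-pickup state, and all names, are assumptions:

```python
from enum import Enum

class DisplayMode(Enum):
    NORMAL = 1      # first display mode: the prop can be picked up
    UNPICKABLE = 2  # second display mode: the prop cannot be picked up

class TargetProp:
    def __init__(self):
        self.mode = DisplayMode.NORMAL

    def on_holder_hit(self):
        # When the holder is hit by the attack prop, the dropped
        # target prop switches to the second display mode.
        self.mode = DisplayMode.UNPICKABLE
```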
In the above scheme, the method further includes: in response to a virtual base in the virtual scene being hit by the attack prop, displaying a virtual barrier surrounding the virtual base, where the virtual barrier is used to block at least one of the following: a virtual object in the virtual scene entering the virtual base, and the target prop being stored into the virtual base.
In the above scheme, the attack prop includes a tossable prop, and the tossable prop is dropped by a virtual flying prop in the virtual scene; before displaying the tossable prop held by the first virtual object, the method further includes: displaying the virtual flying prop in the virtual scene; in response to the virtual flying prop being attacked, displaying the tossable prop dropping from the virtual flying prop; and in response to a pick-up trigger operation for the tossable prop, controlling the first virtual object to pick up the tossable prop.
In the above scheme, the attack prop comprises a shooting prop with a skill chip, the original function of the shooting prop is to launch a virtual bullet, and the skill chip is used for replacing the virtual bullet with an energy particle; before responding to the triggering operation for the attack prop, the method further comprises: controlling the first virtual object to acquire the skill chip; in response to a triggering operation for the skill chip, the skill chip is applied in the shooting prop held by the first virtual object.
In the above scheme, the skill chip has a cooling time; before responding to the trigger operation for the skill chip, the method further includes: when the interval between a first time and a second time is less than the cooling time, masking the response to the trigger operation for the skill chip; when the interval between the first time and the second time is greater than or equal to the cooling time, determining that the trigger operation for the skill chip is to be responded to; where the first time is the time the skill chip was last applied to the shooting prop, and the second time is the time the trigger operation is received.
In the above scheme, the number of tossable props dropped when the virtual flying prop is attacked is related to the skills of the first virtual object: when the first virtual object has a skill that increases the number of dropped props, the number of props dropped when the virtual flying prop is attacked is increased compared with when the first virtual object does not have that skill.
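A sketch of the skill-dependent drop count, assuming a simple additive bonus (the patent states only that the count increases when the skill is present; the names and the bonus of 1 are illustrative):

```python
def dropped_prop_count(base_count, attacker_has_bonus_skill, bonus=1):
    """Props dropped when a virtual flying prop is attacked.

    With the drop-increasing skill, the count rises above the base count.
    """
    return base_count + (bonus if attacker_has_bonus_skill else 0)
```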
In the above scheme, the method further includes: in response to a third virtual object in the virtual scene being hit by the attack prop while the third virtual object holds no target prop, blocking the third virtual object from using attack props for a second set duration, where the second set duration is positively correlated with the degree of damage the third virtual object takes when hit by the energy particles.
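The lockout described above can be sketched as follows, assuming a linear relationship between the damage taken and the second set duration (the patent requires only positive correlation; names and the coefficient are illustrative):

```python
def lockout_duration(damage, per_damage=0.2):
    """Second set duration: grows with the damage the energy particles dealt."""
    return damage * per_damage

def can_use_attack_prop(hit_at, now, damage):
    """The third virtual object may use attack props again only after the lockout."""
    return (now - hit_at) >= lockout_duration(damage)
```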
In the above scheme, a plurality of camps including the first camp and the second camp exist in the virtual scene, each camp has at least one virtual base in the virtual scene, and the virtual base hit by the attack prop is a virtual base of any one of the plurality of camps; the method further includes: in response to the number of target props stored in any virtual base of any camp reaching a number threshold, stopping running the virtual scene and displaying prompt information that the camp has won.
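The win condition can be sketched as a threshold check over the per-camp stored counts. The threshold of 10 matches the example given in the terminology section of this description; the function and parameter names are assumptions:

```python
NUMBER_THRESHOLD = 10  # example threshold from the description

def check_winner(stored_counts):
    """Return the first camp whose base has stored enough target props, else None.

    stored_counts: mapping of camp name -> target props stored in its base.
    """
    for camp, count in stored_counts.items():
        if count >= NUMBER_THRESHOLD:
            return camp  # the match stops and this camp wins
    return None
```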
In the above scheme, target props are continuously generated in the virtual scene as the virtual scene runs, each virtual object in the virtual scene can hold only one target prop at a time, and the target props stored in each virtual base are carried to that base by the virtual objects of the corresponding camp.
In the above scheme, when the virtual scene starts running, the virtual bases corresponding to the plurality of camps are distributed at different positions of the virtual scene, and the virtual objects of each camp are born and revived in the virtual base of that camp; the method further includes: displaying at least one target location in a map control; and in response to a migration trigger operation for a first virtual base, moving the first virtual base to a selected one of the at least one target location, where the first virtual base is a virtual base of the first camp.
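The base-migration operation can be sketched as a selection among the target locations offered on the map control. A minimal Python sketch; the names and tuple-based positions are illustrative assumptions:

```python
def migrate_base(target_positions, selected_index):
    """Move the first camp's base to one of the target locations shown on the map.

    Returns the new base position; raises if the selection is not offered.
    """
    if not (0 <= selected_index < len(target_positions)):
        raise ValueError("selected target position is not offered on the map")
    return target_positions[selected_index]
```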
In the above scheme, the method further includes: in response to a movement trigger operation for the first virtual object, controlling the first virtual object to enter the virtual base of the second camp and acquire the target props stored in the virtual base of the second camp.
The embodiment of the application provides an interactive processing device for a virtual scene, which comprises:
the display module is used for displaying a first virtual object in the virtual scene, wherein the first virtual object holds an attack prop;
the control module is used for responding to the triggering operation for the attack prop and controlling the first virtual object to attack by using the attack prop;
the display module is further configured to display that a target prop held by a second virtual object in the virtual scene falls from the second virtual object in response to the second virtual object being hit by the attack prop;
the control module is further configured to control the target prop to switch from a first display mode to a second display mode, where the second display mode indicates that the target prop is in a state in which it cannot be picked up.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the interactive processing method of the virtual scene provided by the embodiment of the application when executing the executable instructions stored in the memory.
The embodiment of the application provides a computer readable storage medium, which stores computer executable instructions for realizing the interactive processing method of the virtual scene provided by the embodiment of the application when being executed by a processor.
The embodiment of the application provides a computer program product, which comprises a computer program or computer executable instructions and is used for realizing the interactive processing method of the virtual scene provided by the embodiment of the application when being executed by a processor.
The embodiment of the application has the following beneficial effects:
When a user controls the first virtual object to hit a second virtual object with the attack prop, the second virtual object is switched from holding the target prop to holding a shooting prop, and the target prop it held drops; when the user controls the first virtual object to hit a virtual base with the attack prop, a virtual barrier surrounding the virtual base is generated, so that the virtual base loses at least one of the ability to be entered and the ability to store target props. That is, the attack prop produces different effects in different usage scenarios, which enriches the ways in which the attack prop can be applied and thereby improves the efficiency of human-computer interaction.
Drawings
Fig. 1A is an application mode schematic diagram of an interactive processing method of a virtual scene provided in an embodiment of the present application;
fig. 1B is an application mode schematic diagram of an interactive processing method of a virtual scene provided in an embodiment of the present application;
Fig. 2 is a schematic structural diagram of an electronic device 500 according to an embodiment of the present application;
fig. 3 is a flow chart of an interactive processing method of a virtual scene provided in an embodiment of the present application;
fig. 4A to fig. 4C are application scenario diagrams of an interactive processing method of a virtual scenario provided in an embodiment of the present application;
fig. 5A and fig. 5B are schematic flow diagrams of an interactive processing method of a virtual scene according to an embodiment of the present application;
fig. 6 is a flowchart of an interactive processing method of a virtual scene according to an embodiment of the present application;
fig. 7A and fig. 7B are application scenario diagrams of an interactive processing method of a virtual scenario provided in an embodiment of the present application;
fig. 8 is a flowchart of an interactive processing method for a virtual scene according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail with reference to the accompanying drawings, and the described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without making any inventive effort are within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
It will be appreciated that the embodiments of the present application involve related data such as user information (e.g., data of user-controlled game characters); when the embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, and the collection, use and processing of the related data need to comply with the relevant laws, regulations and standards of the relevant countries and regions.
In the following description, the terms "first" and "second" are merely used to distinguish similar objects and do not denote a particular ordering of objects; it will be understood that, where permitted, "first" and "second" may be interchanged in a specific order or sequence so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
Before further describing embodiments of the present application in detail, the terms and expressions that are referred to in the embodiments of the present application are described, and are suitable for the following explanation.
1) In response to: used to indicate a condition or state on which a performed operation depends; when the condition or state is satisfied, the one or more operations performed may be in real time or may have a set delay. Unless otherwise specified, there is no restriction on the order in which multiple such operations are performed.
2) Virtual scene: is the scene that the application displays (or provides) when running on the terminal device. The virtual scene may be a simulation environment for the real world, a semi-simulation and semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application. For example, a virtual scene may include sky, land, sea, etc., the land may include environmental elements of a desert, city, etc., and a user may control a virtual object to move in the virtual scene.
3) Virtual object: the avatars of various people and objects in the virtual scene that can interact with, or movable objects in the virtual scene. The movable object may be a virtual character, a virtual animal, a cartoon character, etc., such as a character, an animal, etc., displayed in a virtual scene. The virtual object may be a virtual avatar in a virtual scene for representing a user. A virtual scene may include a plurality of virtual objects, each virtual object having its own shape and volume in the virtual scene, occupying a portion of space in the virtual scene.
4) Target prop: the prop contended for by the camps in the virtual scene. When the number of target props stored in the virtual base of any camp (e.g., camp A) reaches a number threshold (e.g., 10), the match ends and camp A wins the current match.
5) Virtual base: a location or virtual building in the virtual scene used for storing target props; each camp has at least one corresponding virtual base in the virtual scene, and in addition, the virtual objects of each camp are born and revived in the virtual base of the corresponding camp.
6) Attack prop: includes a tossable prop and a shooting prop to which a skill chip has been applied. The tossable prop, such as a virtual EMP grenade, explodes immediately on hitting the ground, releasing energy particles into the virtual scene; the shooting prop may be a virtual firearm whose original function is to fire virtual bullets, which are replaced with energy particles after the skill chip is applied.
7) Client side: applications running in the terminal device for providing various services, such as a video play client, a game client, and the like.
8) Scene data: the feature data representing the virtual scene may be, for example, an area of a building area in the virtual scene, a building style in which the virtual scene is currently located, and the like; and may also include the location of the virtual building in the virtual scene, the footprint of the virtual building, etc.
The embodiment of the application provides an interactive processing method, an interactive processing device, electronic equipment, a computer readable storage medium and a computer program product for a virtual scene, which can enrich application modes of attack props, thereby improving the efficiency of man-machine interaction. In order to facilitate easier understanding of the method for processing the interaction of the virtual scene provided by the embodiment of the present application, first, an exemplary implementation scenario of the method for processing the interaction of the virtual scene provided by the embodiment of the present application is described, where the virtual scene in the method for processing the interaction of the virtual scene provided by the embodiment of the present application may be output based on the terminal device completely, or may be output based on the cooperation of the terminal device and the server.
In some embodiments, the virtual scene may be an environment for virtual objects (e.g., game characters) to interact, for example, for game characters to fight in the virtual scene, and by controlling actions of the game characters, both parties may interact in the virtual scene, so that the user can relax life pressure in the course of the game.
In an implementation scenario, referring to fig. 1A, fig. 1A is a schematic application mode diagram of an interactive processing method of a virtual scenario provided in the embodiment of the present application, which is suitable for some application modes that can complete relevant data computation of the virtual scenario 100 completely depending on the graphics processing hardware computing capability of the terminal device 400, for example, a game in a stand-alone/offline mode, and output of the virtual scenario is completed through various different types of terminal devices 400 such as a smart phone, a tablet computer, and a virtual reality/augmented reality device.
By way of example, the types of graphics processing hardware include central processing units (CPUs) and graphics processing units (GPUs).
When forming the visual perception of the virtual scene 100, the terminal device 400 calculates the data required for display through the graphic computing hardware, and completes loading, analysis and rendering of the display data, and outputs a video frame capable of forming the visual perception for the virtual scene at the graphic output hardware, for example, a two-dimensional video frame is presented on the display screen of the smart phone, or a video frame realizing the three-dimensional display effect is projected on the lens of the augmented reality/virtual reality glasses; in addition, to enrich the perceived effect, the terminal device 400 may also form one or more of auditory perception, tactile perception, motion perception and gustatory perception by means of different hardware.
As an example, the terminal device 400 runs a client 410 (e.g., a stand-alone game application), and during the running of the client 410 outputs a virtual scene including role playing, where the virtual scene may be an environment for game characters to interact in, such as a plain, a street, or a valley for game characters to fight in. Taking display of the virtual scene 100 from a first-person perspective as an example, a first virtual object 101 is displayed in the virtual scene 100, where the first virtual object 101 may be a game character controlled by a user; that is, the first virtual object 101 is controlled by a real user and moves in the virtual scene 100 in response to the real user's operation of a controller (such as a touch screen, a voice-controlled switch, a keyboard, a mouse, a joystick, etc.). For example, when the real user moves the joystick to the right, the first virtual object 101 moves to the right in the virtual scene 100; the first virtual object 101 may also remain stationary in place, jump, or be controlled to perform a shooting operation, etc.
For example, taking the attack prop being a tossable prop, the virtual scene 100 displays a first virtual object 101 (e.g., a game character A controlled by user 1) and a tossable prop 102 (e.g., an EMP grenade) held by the first virtual object 101, where the tossable prop 102 releases energy particles into the virtual scene 100 when it explodes on landing. The client 410, in response to a second virtual object 103 (e.g., a game character B controlled by user 2) in the virtual scene 100 being hit by the energy particles, displays the target prop 104 (e.g., a ball) held by the second virtual object 103 falling from the second virtual object 103; the client 410, in response to the virtual base 105 in the virtual scene 100 being hit by the energy particles, displays a virtual barrier 106 surrounding the virtual base 105, where the virtual barrier 106 may be used to block at least one of the following: virtual objects in the virtual scene 100 entering the virtual base 105, and the target prop 104 being stored into the virtual base 105. In this way the attack prop has different effects in different usage scenarios, which enriches the ways in which the attack prop can be applied and improves the efficiency of human-computer interaction.
In another implementation scenario, referring to fig. 1B, fig. 1B is a schematic application mode diagram of an interaction processing method of a virtual scenario provided in an embodiment of the present application, applied to a terminal device 400 and a server 200, and adapted to an application mode that completes virtual scenario calculation depending on a computing capability of the server 200 and outputs the virtual scenario at the terminal device 400.
Taking the example of forming the visual perception of the virtual scene 100, the server 200 performs calculation of virtual scene related display data (such as scene data) and sends the calculated display data to the terminal device 400 through the network 300, the terminal device 400 finishes loading, analyzing and rendering the calculated display data depending on the graphic calculation hardware, and outputs the virtual scene depending on the graphic output hardware to form the visual perception, for example, a two-dimensional video frame can be presented on a display screen of a smart phone, or a video frame for realizing a three-dimensional display effect can be projected on a lens of an augmented reality/virtual reality glasses; as regards the perception of the form of the virtual scene, it is understood that the auditory perception may be formed by means of the corresponding hardware output of the terminal device 400, for example using a microphone, the tactile perception may be formed using a vibrator, etc.
As an example, the terminal device 400 runs a client 410 (e.g., a web-based game application) and interacts with other users' games by connecting to the server 200 (e.g., a game server); the terminal device 400 outputs the virtual scene 100 of the client 410. Taking display of the virtual scene 100 from a first-person perspective as an example, a first virtual object 101 is displayed in the virtual scene 100, where the first virtual object 101 may be a game character controlled by a user; that is, the first virtual object 101 is controlled by a real user and moves in the virtual scene 100 in response to the real user's operation of a controller (e.g., a touch screen, a voice-controlled switch, a keyboard, a mouse, a joystick, etc.). For example, when the real user moves the joystick to the right, the first virtual object 101 moves to the right in the virtual scene 100; the first virtual object 101 may also remain stationary in place, jump, or be controlled to perform a shooting operation, etc.
For example, taking the attack prop as a shooting prop to which a skill chip is applied, a first virtual object 101 (for example, a game character A controlled by user 1) and a shooting prop 107 (for example, a virtual firearm) held by the first virtual object 101 and applied with the skill chip are displayed in the virtual scene 100. When receiving a trigger operation of user 1 on the shooting control, the client 410 will control the first virtual object 101 to release energy particles into the virtual scene 100 using the shooting prop 107 (i.e., the virtual bullets emitted by the shooting prop 107 are converted into energy particles). The client 410, in response to a second virtual object 103 (e.g., a game character B controlled by user 2) in the virtual scene 100 being hit by the energy particles, displays that the target prop 104 (e.g., a ball) held by the second virtual object 103 falls from the second virtual object 103; the client 410 displays a virtual barrier 106 surrounding the virtual base 105 in response to the virtual base 105 in the virtual scene 100 being hit by energy particles, where the virtual barrier 106 may be used to shield at least one of the following: virtual objects in the virtual scene 100 entering the virtual base 105, and the target prop 104 being stored into the virtual base 105. In this way, the attack prop has different effects in different use scenes, which enriches the application modes of the attack prop and thereby improves the efficiency of man-machine interaction.
It should be noted that the skill chip may be effective for a period of time; for example, within 30 seconds after the skill chip is applied to the shooting prop 107, the virtual bullets emitted by the shooting prop 107 are converted into energy particles, and after 30 seconds the shooting prop 107 will resume its original function. Of course, the skill chip may instead be effective for a set number of shots; for example, after the skill chip is applied to the shooting prop 107, the virtual bullets of the 10 subsequent shots are converted into energy particles, after which the shooting prop 107 will resume its original function.
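The two validity modes above (time-limited and shot-limited) can be sketched as follows. This is only an illustrative sketch of the described behavior, not the patented implementation; the class name, parameters, and default values are assumptions:

```python
class SkillChip:
    """Illustrative sketch: a skill chip whose bullet-to-energy-particle
    conversion expires either after a duration (e.g. 30 seconds) or
    after a set number of shots (e.g. 10)."""

    def __init__(self, duration=None, shot_limit=None, now=0.0):
        self.applied_at = now          # moment the chip was applied to the prop
        self.duration = duration       # seconds of validity, or None (no time limit)
        self.shot_limit = shot_limit   # number of converted shots, or None (no limit)
        self.shots_fired = 0

    def converts_bullet(self, now):
        """True if the next virtual bullet is converted to an energy particle."""
        if self.duration is not None and now - self.applied_at > self.duration:
            return False  # time window elapsed: prop resumes its original function
        if self.shot_limit is not None and self.shots_fired >= self.shot_limit:
            return False  # shot budget spent: prop resumes its original function
        self.shots_fired += 1
        return True

# Time-limited chip: effective for 30 s after application.
chip = SkillChip(duration=30, now=0.0)
print(chip.converts_bullet(now=10.0))  # True: within the 30 s window
print(chip.converts_bullet(now=31.0))  # False: window elapsed
```

A shot-limited chip would be created as `SkillChip(shot_limit=10)`; both limits may also be combined, whichever expires first.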
In some embodiments, the terminal device 400 may further implement the interactive processing method of the virtual scene provided in the embodiments of the present application by running a computer program. For example, the computer program may be a native program or a software module in an operating system; it may be a native application (APP, Application), i.e., a program that needs to be installed in the operating system to run, such as a shooting game APP (i.e., the client 410 described above); it may also be an applet, i.e., a program that only needs to be downloaded into a browser environment to run; it may further be a game applet that can be embedded in any APP. In general, the computer program may be any form of application, module or plug-in.
Taking the computer program as an application program as an example, in actual implementation, the terminal device 400 installs and runs an application program supporting virtual scenes. The application program may be any one of a first-person shooter (FPS) game, a third-person shooter game, a virtual reality application, a three-dimensional map program, or a multiplayer gunfight survival game. The user uses the terminal device 400 to operate a virtual object located in the virtual scene to perform activities, which include, but are not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, throwing, and building a virtual building. Illustratively, the virtual object may be a virtual character, such as a simulated character or a cartoon character.
In other embodiments, the embodiments of the present application may also be implemented by means of cloud technology, which refers to a hosting technology that unifies a series of resources such as hardware, software, and networks in a wide area network or a local area network to implement the calculation, storage, processing, and sharing of data.
Cloud technology is a general term for network technology, information technology, integration technology, management platform technology, application technology, and the like based on the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support, since the background services of technical network systems require a large amount of computing and storage resources.
By way of example, the server 200 in fig. 1B may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (CDN, Content Delivery Network), and basic cloud computing services such as big data and artificial intelligence platforms. The terminal device 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a vehicle-mounted terminal, etc. The terminal device 400 and the server 200 may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present application.
The following continues to describe the structure of the electronic device provided in the embodiment of the present application. Taking an electronic device as an example of a terminal device, referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device 500 provided in an embodiment of the present application, and the electronic device 500 shown in fig. 2 includes: at least one processor 510, a memory 550, at least one network interface 520, and a user interface 530. The various components in electronic device 500 are coupled together by bus system 540. It is appreciated that the bus system 540 is used to enable connected communications between these components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to the data bus. The various buses are labeled as bus system 540 in fig. 2 for clarity of illustration.
The processor 510 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP, Digital Signal Processor), another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, where the general-purpose processor may be a microprocessor, any conventional processor, or the like.
The user interface 530 includes one or more output devices 531 that enable presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 530 also includes one or more input devices 532, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard drives, optical drives, and the like. Memory 550 may optionally include one or more storage devices physically located remote from processor 510.
The memory 550 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM, Read Only Memory), and the volatile memory may be a random access memory (RAM, Random Access Memory). The memory 550 described in the embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 is capable of storing data to support various operations, examples of which include programs, modules and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and handling hardware-based tasks;
A network communication module 552 for reaching other computing devices via one or more (wired or wireless) network interfaces 520; exemplary network interfaces 520 include: Bluetooth, Wireless Fidelity (Wi-Fi), Universal Serial Bus (USB), etc.;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating a peripheral device and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
the input processing module 554 is configured to detect one or more user inputs or interactions from one of the one or more input devices 532 and translate the detected inputs or interactions.
In some embodiments, the apparatus provided in the embodiments of the present application may be implemented in software. Fig. 2 shows an interaction processing apparatus 555 of a virtual scene stored in the memory 550, which may be software in the form of a program, a plug-in, or the like, and includes the following software modules: a display module 5551, a control module 5552, an application module 5553, a shielding module 5554, a determination module 5555, a stop module 5556, a movement module 5557, and an acquisition module 5558. These modules are logical, and thus may be arbitrarily combined or further split according to the functions implemented. It should be noted that, in fig. 2, all the modules are shown at once for convenience of presentation, but this does not exclude implementations in which the interaction processing apparatus 555 includes only the display module 5551 and the control module 5552. The functions of each module will be described below.
The following specifically describes an interactive processing method of a virtual scene provided in the embodiment of the present application in conjunction with an exemplary application and implementation of a terminal device provided in the embodiment of the present application.
Referring to fig. 3, fig. 3 is a flowchart of an interactive processing method of a virtual scene according to an embodiment of the present application, and will be described with reference to the steps shown in fig. 3.
It should be noted that the method shown in fig. 3 may be executed by various computer programs executed by the terminal device, and is not limited to the client, but may also be an operating system, a software module, a script, an applet, etc. described above, and therefore, the following example of the client should not be considered as limiting the embodiments of the present application. In addition, for convenience of description, specific distinction is not made between a terminal device and a client on which the terminal device operates hereinafter.
In step 301, a first virtual object is displayed in a virtual scene.
Here, the first virtual object holds an attack prop, where the attack prop may be a tossable prop or a shooting prop to which a skill chip is applied.
In some embodiments, a client supporting the virtual scene is installed on the terminal device (for example, when the virtual scene is a game, the corresponding client may be a shooting game APP). When the user opens the client installed on the terminal device (for example, the user clicks an icon corresponding to the shooting game APP presented on a user interface of the terminal device) and the terminal device runs the client, a first virtual object (for example, a virtual object A controlled by user 1) and an attack prop held by the first virtual object through a holding part (for example, a hand) may be displayed in the virtual scene presented on a man-machine interface of the terminal device, where the attack prop may be, for example, a shooting prop to which a skill chip is applied, such as a virtual firearm applied with a skill chip.
Taking the attack prop as a tossable prop as an example, the above-mentioned displaying of the first virtual object in the virtual scene and the tossable prop held by the first virtual object may be implemented in the following manner: in response to a tossable prop selection operation (e.g., receiving the user's click operation on a control corresponding to a tossable prop displayed in the virtual scene), the first virtual object and the tossable prop held by the first virtual object through the holding part are displayed in the virtual scene (e.g., when the first virtual object originally holds a shooting prop, an animation of switching from the shooting prop to the tossable prop is played).
In other embodiments, the virtual scene may be displayed in the man-machine interface of the client from a first-person perspective (e.g., the user plays a virtual object in the game from the user's own perspective); the virtual scene may be displayed from a third-person perspective (e.g., the user follows a virtual object in the game to play the game); the virtual scene may also be displayed from a bird's-eye view with a large viewing angle; the above-mentioned different perspectives can be switched arbitrarily.
As an example, the first virtual object may be an object controlled by the current user in the game. Of course, other virtual objects may also be included in the virtual scene, such as virtual objects controlled by other users or by a robot program. A virtual object may be divided into any one of a plurality of camps, the relationship between camps may be hostile or collaborative, and the camps in the virtual scene may include one or both of the above relationships.
Taking displaying the virtual scene from the first-person perspective as an example, displaying the virtual scene in the human-computer interaction interface may include: determining the field-of-view area of the first virtual object according to the viewing position and field angle of the first virtual object in the complete virtual scene, and presenting the part of the complete virtual scene located in the field-of-view area, i.e., the displayed virtual scene may be a partial virtual scene relative to the panoramic virtual scene. Because the first-person perspective is the viewing perspective that gives the user the greatest sense of impact, an immersive perception for the user during operation can be achieved.
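The field-of-view determination above can be sketched as a simple horizontal sector test; the function name, 2-D coordinate convention, and angle values are illustrative assumptions, not the patented implementation:

```python
import math

def in_field_of_view(viewer_pos, facing_deg, fov_deg, point):
    """Illustrative sketch: test whether `point` lies inside the viewer's
    horizontal field-of-view sector, given the viewing position, a facing
    direction, and a field angle. Only such points would belong to the
    presented partial virtual scene."""
    dx, dy = point[0] - viewer_pos[0], point[1] - viewer_pos[1]
    angle_to_point = math.degrees(math.atan2(dy, dx))
    # smallest signed difference between the facing and the direction to the point
    delta = (angle_to_point - facing_deg + 180.0) % 360.0 - 180.0
    return abs(delta) <= fov_deg / 2.0

# Viewer at the origin facing +x with a 90-degree field angle.
print(in_field_of_view((0, 0), 0.0, 90.0, (10, 3)))   # True: nearly straight ahead
print(in_field_of_view((0, 0), 0.0, 90.0, (-10, 0)))  # False: directly behind
```

A real engine would additionally apply a view frustum with near/far planes and occlusion; the sector test only illustrates the "viewing position and field angle" rule stated above.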
Taking an example of displaying the virtual scene with a bird's eye view and a large viewing angle, displaying the virtual scene in the human-computer interaction interface may include: in response to a zoom operation for the panoramic virtual scene, a portion of the virtual scene corresponding to the zoom operation is presented in the human-machine interaction interface, i.e., the displayed virtual scene may be a portion of the virtual scene relative to the panoramic virtual scene. Therefore, the operability of the user in the operation process can be improved, and the efficiency of man-machine interaction can be improved.
In some embodiments, taking the example of an attack prop being a tossable prop, where the tossable prop may be a virtual flying prop (e.g., a virtual drone) dropped in a virtual scene, the client may further perform the following processing before displaying the tossable prop held by the first virtual object: displaying virtual flying props in the virtual scene (the virtual flying props may be in a stationary state or in a moving state); in response to the virtual flying prop being attacked (e.g., knocked down or partially destroyed), displaying a tossable prop dropped from the virtual flying prop; in response to a pick-up trigger operation for the tossable item, the first virtual object is controlled to pick up the tossable item.
For example, taking a virtual flight prop as an example of a virtual unmanned aerial vehicle, referring to fig. 4A, fig. 4A is a schematic view of an application scenario of an interactive processing method of a virtual scenario provided in an embodiment of the present application, as shown in fig. 4A, a first virtual object 401 (for example, a virtual object a controlled by a user 1) and a shooting prop 402 (for example, a virtual firearm) held by the first virtual object 401 are displayed in the virtual scenario 400. In addition, a virtual unmanned plane 403 in a moving state is also displayed in the virtual scene 400. The client displays a tossable prop 404 (e.g., virtual EMP grenade) dropped from the virtual drone 403 in the virtual scene 400 in response to the virtual drone 403 being knocked down (e.g., user 1 controlling the first virtual object 401 to knock down the virtual drone 403 with the shooting prop 402); the client then controls first virtual object 401 to pick up tossable item 404 dropped from virtual drone 403 in response to a pick up trigger operation for tossable item 404 (e.g., receiving a user click operation on the "Q" key on the keyboard).
In some embodiments, taking an attack prop as an example of a shooting prop to which a skill chip is applied, wherein the original function of the shooting prop is to launch a virtual bullet, the skill chip is used to replace the virtual bullet launched by the shooting prop with an energy particle, the client may further perform the following processing before responding to a shooting trigger operation for the shooting prop: controlling a first virtual object to acquire a skill chip; in response to a triggering operation for the skill chip, the skill chip is applied in a shooting prop held by the first virtual object.
In other embodiments, the skill chip may have a cooling time (e.g., 60 seconds; that is, after the skill chip is used, it is necessary to wait 60 seconds before the skill chip can be used again). The client may also perform the following processing before responding to the triggering operation for the skill chip: when the interval duration between the first moment and the second moment is smaller than the cooling time, shielding the response to the triggering operation for the skill chip (i.e., when the interval duration is smaller than the cooling time, the skill chip is in a locked state, and the client refuses to respond to the triggering operation for the skill chip), and displaying prompt information, where the prompt information is used to prompt that the skill chip is still in a cooling state and cannot currently be used; when the interval duration between the first moment and the second moment is greater than or equal to the cooling time, determining that the triggering operation for the skill chip is to be responded to (i.e., when the interval duration is greater than or equal to the cooling time, the skill chip is in an unlocked state, and the client, in response to the triggering operation for the skill chip, applies the skill chip to the shooting prop held by the first virtual object, so as to convert the virtual bullets emitted by the shooting prop into energy particles); the first moment is the moment when the skill chip was last applied to the shooting prop, and the second moment is the moment when the triggering operation is received.
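The cooling-time check above can be sketched as follows; this is a minimal sketch, and the function name and the 60-second default are assumptions taken from the example in the text:

```python
def skill_chip_available(last_applied_at, now, cooldown=60.0):
    """Illustrative sketch of the cooling-time check: the trigger operation
    is only honored when the interval between the last application (first
    moment) and the current trigger (second moment) reaches the cooldown.
    Returns (available, prompt), where prompt is shown while locked."""
    interval = now - last_applied_at
    if interval < cooldown:
        # chip is in the locked state: shield the trigger operation and prompt
        return False, "skill chip is still cooling down and cannot be used"
    return True, None

ok, hint = skill_chip_available(last_applied_at=100.0, now=130.0)  # 30 s < 60 s
print(ok)   # False: response shielded, prompt displayed
ok, hint = skill_chip_available(last_applied_at=100.0, now=161.0)  # 61 s >= 60 s
print(ok)   # True: chip unlocked, can be applied to the shooting prop
```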
It should be noted that after the skill chip is applied, all the virtual bullets that are subsequently launched may be replaced with energy particles (i.e., the shooting prop has a function of permanently launching energy particles after the skill chip is applied), and of course, the shooting prop may restore the original function (i.e., re-launch the virtual bullets) after a set number of shots (e.g., 1 or more), where the skill chip may be used an unlimited number of times (e.g., the skill chip may be reused in a game), and of course, the skill chip may be used only a limited number of times (e.g., only 5 times at most in a game).
By way of example, the skill chip may be dropped by a virtual flying prop (e.g., a virtual drone) in the virtual scene, and controlling the first virtual object to acquire the skill chip may be accomplished in the following manner: displaying the virtual flying prop in the virtual scene; in response to the virtual flying prop being attacked (e.g., knocked down or partially destroyed), displaying the skill chip dropped from the virtual flying prop; and in response to a pick-up trigger operation for the skill chip, controlling the first virtual object to pick up the skill chip.
It should be noted that, the number of props (including throwing props, skill chips, etc.) that are dropped when the virtual flying props are attacked is related to the skills that the first virtual object has, wherein when the first virtual object has the skills to increase the number of props that are dropped, the number of props that are dropped when the virtual flying props are attacked is increased compared to when the first virtual object does not have the skills to increase the number of props that are dropped.
For example, taking the first virtual object as the virtual object a controlled by the user 1, when the virtual object a is upgraded, the user 1 may choose to increase the number of falling props, so when the virtual flying props are attacked by the virtual object a, the number of falling props is increased compared with other virtual objects (for example, the virtual object B controlled by the user 2) without the skill of increasing the number of falling props, for example, when the virtual flying props are knocked down by the virtual object a, 2 props are dropped, and when the virtual flying props are knocked down by the virtual object B, only 1 prop is dropped, so that the efficiency of man-machine interaction can be further improved.
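The skill-dependent drop count described above can be sketched as follows; the function name and the bonus amount are illustrative assumptions consistent with the 2-versus-1 example in the text:

```python
def dropped_prop_count(base_count, has_drop_bonus_skill, bonus=1):
    """Illustrative sketch: a virtual flying prop that is attacked drops more
    props when the attacking virtual object has the drop-quantity skill."""
    return base_count + (bonus if has_drop_bonus_skill else 0)

# Virtual object A has the skill, virtual object B does not.
print(dropped_prop_count(1, has_drop_bonus_skill=True))   # 2 props dropped
print(dropped_prop_count(1, has_drop_bonus_skill=False))  # 1 prop dropped
```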
In addition, it should be further noted that, besides being dropped by the virtual flying prop, the skill chip may also be configured by default in a weapon library of the virtual scene, and of course, may also be exchanged for virtual resources (for example, points obtained through kills).
In step 302, a first virtual object is controlled to attack using an attack prop in response to a triggering operation for the attack prop.
In some embodiments, when the attacking prop is a tossable prop, step 302 shown in fig. 3 may be implemented by steps 3021 and 3022 shown in fig. 5A, as will be described in connection with the steps shown in fig. 5A.
In step 3021, a throwing of a tossable item in a first direction by a first virtual object is controlled in response to a throwing trigger operation for the tossable item.
Here, the first direction is a direction selected for a throwing trigger operation.
In some embodiments, taking the tossable prop as a virtual EMP grenade as an example, a throwing control for the virtual EMP grenade is displayed in the virtual scene, and when the client receives the user's click operation on the throwing control, the first virtual object is controlled to throw the virtual EMP grenade in the first direction, so that the virtual EMP grenade flies in the first direction.
In step 3022, in response to the tossable prop colliding with an obstacle in the virtual scene, the tossable prop is controlled to center on the point of collision and release energetic particles in the virtual scene in an outward radiating manner.
In some embodiments, taking the tossable prop as a virtual EMP grenade as an example, a detection parabola extending along the first direction may be generated from the virtual EMP grenade, and the virtual EMP grenade is controlled to fly along the detection parabola; when it is detected that the parabola collides with an obstacle in the virtual scene (which may be a virtual object, a wall, the ground, etc. in the virtual scene), the virtual EMP grenade is controlled to explode centered on the collision point (e.g., the landing point) of the parabola with the obstacle, and to release energy particles in the virtual scene in an outward radiating manner.
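The detection parabola and the determination of the collision point can be sketched as a minimal ballistic step simulation; the function name, step size, and treatment of the ground as the only obstacle are illustrative assumptions:

```python
def landing_point(origin, v0x, v0y, ground_y=0.0, g=9.8, dt=0.05):
    """Illustrative sketch of the detection parabola: step the tossable prop
    along a ballistic arc until it collides with the ground, and return the
    collision point, around which the energy particles are then released in
    an outward radiating manner."""
    x, y = origin
    vx, vy = v0x, v0y
    while True:
        x += vx * dt
        vy -= g * dt          # gravity bends the path into a parabola
        y += vy * dt
        if y <= ground_y:     # collision with an obstacle (here: the ground)
            return (x, ground_y)   # explosion center for the radial release

cx, cy = landing_point(origin=(0.0, 1.5), v0x=12.0, v0y=6.0)
print(cx > 0.0 and cy == 0.0)  # lands ahead of the thrower, on the ground
```

A full implementation would test the sampled arc against every collider in the scene (walls, virtual objects), not only the ground plane.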
In other embodiments, when the attacking prop is a shooting prop to which the skill chip is applied, the client may implement step 302 described above by: in response to a firing trigger operation for the firing prop, the first virtual object is controlled to release energy particles using the firing prop in a second direction in the virtual scene, wherein the second direction is a direction selected by the firing trigger operation.
Taking a shooting prop as a virtual firearm for example, wherein the virtual firearm is provided with a skill chip, a shooting control for the virtual firearm is displayed in a virtual scene, and when a client receives click operation of a user for the shooting control, a first virtual object is controlled to shoot towards a second direction in the virtual scene by using the virtual firearm, so that the virtual firearm releases energy particles towards the second direction in the virtual scene.
In step 303, in response to the second virtual object in the virtual scene being hit by the attacking prop, the target prop held by the second virtual object is displayed to drop from the second virtual object, and the second virtual object is controlled to switch from holding the target prop to holding the shooting prop.
Here, being hit by the attack prop may refer to being hit by the energy particles released by the attack prop. The target prop is a target contested by a first camp and a second camp during a game play and is used to determine the winning camp of the play, where the first virtual object belongs to the first camp and the second virtual object belongs to the second camp.
It should be noted that, in the embodiment of the present application, the second virtual object is a generic term of a virtual object hit by an attack prop and having a target prop, that is, a virtual object hit by an energy particle released by the attack prop in a virtual scene and having the target prop is collectively referred to as a second virtual object, rather than referring to any virtual object in the virtual scene, for example, it is assumed that a virtual object B and a virtual object C in the virtual scene both have the target prop and are hit by an energy particle released by the attack prop, and then the virtual object B and the virtual object C are both referred to as a second virtual object.
In some embodiments, when the second virtual object is hit by the first virtual object using the attack prop, the target prop held by the second virtual object may fall from the second virtual object, and the second virtual object is controlled to automatically restore to a state of holding the shooting prop (i.e. after the target prop held by the second virtual object falls, the second virtual object may automatically restore to a state of holding the shooting prop, such as a gun holding state).
In some embodiments, taking the attack prop as a tossable prop as an example, since the tossable prop has a range effect (i.e., a virtual object is affected as long as it is within a certain range), a second virtual object can be considered to be hit by the energy particles released when the tossable prop explodes when the second virtual object is in an area centered on the landing point of the tossable prop (corresponding to the above-mentioned collision point, e.g., the collision point of the tossable prop with the ground), for example, a circular area centered on the landing point of the tossable prop and having a radius of 30 meters.
For example, taking the attack prop as a virtual EMP grenade, when the virtual EMP grenade lands, energy particles are released in a circular area with a radius of 30 meters, with the landing point as the center of the circle. At this time, if the distance between a second virtual object in the virtual scene (for example, a virtual object B controlled by user 2) and the landing point of the virtual EMP grenade is less than 30 meters, the virtual object B can be considered to be hit by the energy particles released when the virtual EMP grenade explodes.
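The range-effect rule above reduces to a distance check against the blast radius; a minimal sketch, with the function name and 2-D positions as illustrative assumptions:

```python
import math

def hit_by_blast(object_pos, blast_center, blast_radius=30.0):
    """Illustrative sketch: a virtual object counts as hit by the released
    energy particles when its distance from the landing point is within the
    blast radius (30 meters in the example above)."""
    return math.dist(object_pos, blast_center) <= blast_radius

print(hit_by_blast((20.0, 10.0), (0.0, 0.0)))  # True: about 22.4 m < 30 m
print(hit_by_blast((40.0, 0.0), (0.0, 0.0)))   # False: 40 m > 30 m
```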
In other embodiments, the client may further perform the following processing in response to a second virtual object in the virtual scene being hit by energy particles: shielding the second virtual object from picking up the target prop within a first set duration, where the first set duration is positively correlated with the degree of injury of the second virtual object when it is hit by the energy particles. In this way, the duration for which the second virtual object is shielded from picking up the target prop is refined according to the degree of injury of the second virtual object when it is hit by the energy particles, which can further improve the efficiency of man-machine interaction.
For example, when the attacking prop is a tossable prop, the extent of injury when the second virtual object is hit by the energy particles may be determined according to the distance between the second virtual object and the landing point of the tossable prop (corresponding to the above-mentioned collision point, e.g. the collision point of the tossable prop with the ground), wherein the extent of injury is inversely related to the distance (as the tossable prop explodes on the landing point, the density of the energy particles at the landing point is the largest and decreases with increasing distance from the landing point), i.e. the closer the distance is, the greater the extent of injury, the longer the first set time period, e.g. the first set time period may be 60 seconds when the distance between the second virtual object and the landing point of the tossable prop is less than the first distance threshold (e.g. 10 meters); when the distance between the second virtual object and the landing point of the tossable prop is greater than a first distance threshold (e.g., 10 meters) and less than a second distance threshold (e.g., 20 meters), the first set duration may be 40 seconds; the first set duration may be 20 seconds when the distance between the second virtual object and the landing point of the tossable prop is greater than a second distance threshold (e.g., 20 meters) and less than a third distance threshold (e.g., 30 meters).
For example, when the attack prop is a shooting prop to which the skill chip is applied, the injury degree of the second virtual object when the second virtual object is hit by the energy particle may be determined according to the distance between the second virtual object and the shooting prop, where the injury degree is inversely related to the distance, that is, the closer the distance is, the greater the injury degree is, the longer the first set duration is, for example, when the distance between the second virtual object and the shooting prop held by the first virtual object is less than a fourth distance threshold (for example, 5 meters), the first set duration may be 40 seconds; when the distance between the shooting props held by the second virtual object and the first virtual object is greater than a fourth distance threshold (e.g., 5 meters) and less than a fifth distance threshold (e.g., 15 meters), the first set duration may be 30 seconds; when the distance between the second virtual object and the shooting prop held by the first virtual object is greater than a fifth distance threshold (e.g., 15 meters) and less than a sixth distance threshold (e.g., 30 meters), the first set duration may be 20 seconds.
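The tiered mapping from hit distance to the first set duration (here using the tossable-prop thresholds of 10/20/30 meters and durations of 60/40/20 seconds from the example above) can be sketched as follows; the function name is an illustrative assumption:

```python
def pickup_shield_duration(distance):
    """Illustrative sketch of the first set duration for a tossable prop:
    the closer the hit object was to the landing point, the greater the
    injury and the longer the target-prop pick-up is shielded."""
    if distance < 10.0:   # below the first distance threshold
        return 60.0
    if distance < 20.0:   # between the first and second thresholds
        return 40.0
    if distance < 30.0:   # between the second and third thresholds
        return 20.0
    return 0.0            # outside the blast radius: no shielding

print(pickup_shield_duration(5.0))   # 60.0 seconds
print(pickup_shield_duration(25.0))  # 20.0 seconds
```

The shooting-prop variant works the same way with its own thresholds (5/15/30 meters) and durations (40/30/20 seconds).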
For example, referring to fig. 4B, fig. 4B is an application scenario schematic diagram of an interactive processing method of a virtual scenario provided in an embodiment of the present application, as shown in fig. 4B, a second virtual object 405 (for example, a virtual object B controlled by a user 2) is displayed in the virtual scenario 400, where the second virtual object 405 holds a target prop 406, and a client controls the second virtual object 405 to switch from holding the target prop 406 to holding a shooting prop 407 (i.e., from a state of holding the target prop to a state of holding a gun) in response to the second virtual object 405 being hit by an energy particle, and shields the second virtual object 405 from picking up the target prop 406 for a first set period (for example, 60 seconds) (i.e., the user 2 cannot control the second virtual object 405 to pick up the target prop 406 within 60 seconds).
It should be noted that the number of target props dropped by the second virtual object may be positively correlated with the injury degree of the second virtual object when it is hit by the energy particles, that is, the greater the injury degree, the greater the number of dropped target props. For example, taking the second virtual object as the virtual object B controlled by the user 2, and assuming that the virtual object B currently holds 7 target props, when the virtual object B is hit by the energy particles, the number of target props dropped by the virtual object B can be determined according to the injury degree. For example, taking the attack prop as the virtual EMP grenade, when the distance between the virtual object B and the landing point of the virtual EMP grenade is smaller than a first distance threshold (for example, 10 meters), 4 target props are dropped; when the distance is greater than the first distance threshold and smaller than a second distance threshold (for example, 20 meters), 2 target props are dropped. In this way, when the second virtual object is hit by the energy particles, a plurality of target props can be dropped at one time, thereby further improving the efficiency of man-machine interaction.
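The drop-count rule above can likewise be sketched as a tiered function. This is a hedged illustration, not the patent's implementation: the function name is assumed, the tiers follow the example values (under 10 meters drops 4 props, 10–20 meters drops 2), and dropping nothing beyond the second threshold is an assumption.

```python
def dropped_prop_count(distance_to_impact_m: float, held_count: int) -> int:
    """Number of target props dropped when hit by the grenade's energy
    particles: <10m -> 4 props, 10-20m -> 2 props, else 0 (assumed).
    The result is capped by the number of props the object holds."""
    if distance_to_impact_m < 10:
        dropped = 4
    elif distance_to_impact_m < 20:
        dropped = 2
    else:
        dropped = 0
    return min(dropped, held_count)  # cannot drop more than is held
```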
In other embodiments, the client may also perform the following: and responding to the third virtual object in the virtual scene being hit by the attack prop and the third virtual object not holding the target prop, and shielding the third virtual object from using the attack prop within a second set time period, wherein the second set time period is positively correlated with the injury degree of the third virtual object when being hit by the energy particles.
Taking the third virtual object as the virtual object D controlled by the user 4 as an example, assuming that the virtual object D does not currently hold the target prop (e.g., the virtual object D is in a gun-holding state), when the virtual object D is hit by the energy particle, the client may shield the virtual object D from using the attack prop within a second set period of time (i.e., the virtual object D loses the function of using the attack prop within the second set period of time), where the second set period of time is positively related to the injury degree of the virtual object D when it is hit by the energy particle. It should be noted that, regarding the determination of the injury degree of the virtual object D, reference may be made to the above description, and the embodiments of the present application are not repeated here.
In step 304, in response to the virtual base in the virtual scene being hit by the attack prop, a virtual barrier surrounding the virtual base is displayed.
Here, being hit by the attack prop may refer to being hit by an energy particle released by the attack prop, and the virtual barrier may be used to shield at least one of the following: a virtual object in the virtual scene entering the virtual base, and a target prop being stored into the virtual base.
In some embodiments, taking the attack prop as a tossable prop as an example, since the tossable prop has an area of effect (i.e., the virtual base is affected as long as it is within a certain range), the virtual base is considered to be hit by the energy particles released when the tossable prop explodes when the virtual base is in an area centered on the landing point of the tossable prop (corresponding to the above-mentioned impact point, e.g., the impact point of the tossable prop with the ground), for example, a circular area centered on the landing point of the tossable prop and having a radius of 30 meters.
In other embodiments, the display duration of the virtual barrier may be positively correlated with the injury degree of the virtual base when it is hit by the energy particles. For example, when the attack prop is a tossable prop, the injury degree of the virtual base when hit by the energy particles can be determined according to the distance between the virtual base and the landing point of the tossable prop, where the injury degree is inversely related to the distance: the closer the distance, the greater the injury degree, and the longer the display duration of the virtual barrier. For example, when the distance between the virtual base and the landing point of the tossable prop is less than a first distance threshold (e.g., 10 meters), the virtual barrier may be displayed for 60 seconds (i.e., within the 60 seconds after the virtual base is hit, at least one of entering the virtual base and storing a target prop into the virtual base is prohibited); when the distance is greater than the first distance threshold and less than a second distance threshold (e.g., 20 meters), the virtual barrier may be displayed for 40 seconds; when the distance is greater than the second distance threshold and less than a third distance threshold (e.g., 30 meters), the virtual barrier may be displayed for 20 seconds. In this way, the display duration of the virtual barrier is further refined according to the injury degree of the virtual base when it is hit by the energy particles, the application modes of the attack prop are richer, and the efficiency of man-machine interaction is further improved.
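The barrier-duration tiers above can be expressed the same way as the earlier distance tiers. A minimal sketch, not the patent's code: the function name is assumed, the tier values follow the example (under 10 meters yields 60 seconds, 10–20 meters yields 40, 20–30 meters yields 20), and returning 0 outside the 30-meter radius is an assumption consistent with the base only being hit within that area.

```python
def barrier_display_seconds(distance_to_landing_m: float) -> int:
    """Display duration of the virtual barrier around a hit virtual base.
    Closer landing points imply greater injury and a longer barrier:
    <10m -> 60s, 10-20m -> 40s, 20-30m -> 20s."""
    if distance_to_landing_m < 10:
        return 60
    if distance_to_landing_m < 20:
        return 40
    if distance_to_landing_m < 30:
        return 20
    return 0  # assumed: outside the 30m effect radius the base is not hit
```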
In some embodiments, taking the attack prop as the tossable prop as an example, the user may control the virtual object to toss the tossable prop toward the virtual base of the user's own camp or toward the virtual base of a hostile camp, that is, the virtual base hit by the energy particles in step 304 may be the virtual base of the first camp or the virtual base of the second camp, which will be described below.
For example, when the virtual base hit by the attack prop is the virtual base of the first camp, the client may implement step 304 described above by: in response to the virtual base of the first camp in the virtual scene being hit by the attack prop, displaying a first virtual barrier surrounding the virtual base of the first camp, wherein the first virtual barrier is used to shield virtual objects other than those of the first camp in the virtual scene from entering the virtual base of the first camp.
For example, taking the tossable prop as a virtual EMP grenade, the first camp as camp A, and the first virtual object as the virtual object A controlled by the user 1 as an example, where the virtual object A belongs to camp A: the client receives a click operation of the user 1 on a tossing control displayed in the man-machine interaction interface and controls the virtual object A to toss the virtual EMP grenade toward the virtual base of camp A; then, in response to the virtual base of camp A being hit by the energy particles released when the virtual EMP grenade explodes, the client displays a virtual barrier surrounding the virtual base of camp A, where the virtual barrier is used to shield virtual objects other than those of camp A in the virtual scene from entering the virtual base of camp A. In this way, virtual objects of other camps can be prevented from obtaining target props from the virtual base of camp A, the application modes of the attack prop are enriched, and the efficiency of man-machine interaction is further improved.
For example, when the virtual base hit by the attack prop is the virtual base of the second camp, the client may further implement step 304 described above by: in response to the virtual base of the second camp in the virtual scene being hit by the attack prop, displaying a second virtual barrier surrounding the virtual base of the second camp, wherein the second virtual barrier is used to shield at least one of the following: a virtual object included in the second camp in the virtual scene entering the virtual base of the second camp, and a target prop being stored into the virtual base of the second camp.
For example, taking the tossable prop as a virtual EMP grenade, the first camp as camp A, the second camp as camp B, and the first virtual object as the virtual object A controlled by the user 1 as an example, where the virtual object A belongs to camp A: the client receives a click operation of the user 1 on a tossing control displayed in the man-machine interaction interface and controls the virtual object A to toss the virtual EMP grenade toward the virtual base of camp B; then, in response to the virtual base of camp B being hit by the energy particles released when the virtual EMP grenade explodes, the client displays a virtual barrier surrounding the virtual base of camp B, where the virtual barrier may be used to shield at least one of the following: a virtual object included in camp B in the virtual scene entering the virtual base of camp B, and a target prop being stored into the virtual base of camp B. In this way, the task progress of camp B can be delayed by tossing the virtual EMP grenade toward the virtual base of camp B, and the application modes of the virtual EMP grenade are enriched; compared with delaying the task progress of camp B by killing the virtual objects included in camp B one by one, the efficiency of man-machine interaction is improved.
In other embodiments, when the virtual base hit by the attack prop is the virtual base of the second camp and at least one target prop has been stored in the virtual base of the second camp, the client may further perform one of the following processes when the virtual base of the second camp in the virtual scene is hit by the attack prop: continuing to store the at least one target prop in the virtual base of the second camp; randomly assigning the at least one target prop to at least one virtual object in the second camp; and re-scattering the at least one target prop in the virtual scene.
Taking the second camp as camp B, and taking the case in which 3 target props have been stored in the virtual base of camp B as an example, the client may execute one of the following processes in response to the virtual base of camp B in the virtual scene being hit by the energy particles: continuing to store the 3 target props in the virtual base of camp B (that is, when the virtual base of camp B is hit by the energy particles released by the attack prop used by a virtual object of a hostile camp, the target props stored in the virtual base of camp B are not affected); randomly allocating the 3 target props to 3 virtual objects of camp B (that is, when the virtual base of camp B is hit, it may temporarily lose the function of storing target props, and the target props stored in it may be randomly allocated to the virtual objects of camp B); or re-scattering the 3 target props in the virtual scene (that is, when the virtual base of camp B is hit, it may temporarily lose the function of storing target props, and at the same time the target props stored in it may be scattered in the virtual scene again). Compared with delaying the task progress of camp B by killing the virtual objects of camp B one by one, this effectively improves the efficiency of man-machine interaction.
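The three treatments above can be sketched as a single dispatch over a policy name. This is an assumed illustration, not the patent's implementation: the function and policy names are invented for clarity, and the return value is a (base, per-member assignment, scattered) triple.

```python
import random

def on_base_hit(stored_props, camp_members, policy, rng=random):
    """Apply one of the three treatments when a base holding props is hit.
    Returns (props kept in base, per-member assignment, props scattered)."""
    if policy == "keep":          # stored props are unaffected
        return list(stored_props), {}, []
    if policy == "assign":        # randomly allocate to camp members
        assignment = {m: [] for m in camp_members}
        for prop in stored_props:
            assignment[rng.choice(camp_members)].append(prop)
        return [], assignment, []
    if policy == "scatter":       # re-scatter into the virtual scene
        return [], {}, list(stored_props)
    raise ValueError(f"unknown policy: {policy}")
```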
The following description takes as an example the case in which the at least one target prop is scattered again in the virtual scene.
For example, referring to fig. 4C, fig. 4C is an application scenario diagram of an interactive processing method of a virtual scenario provided in the embodiment of the present application, as shown in fig. 4C, a first virtual object 401 (for example, a virtual object a controlled by a user 1) and a tossable prop 404 (for example, a virtual EMP grenade) held by the first virtual object 401 are displayed in the virtual scenario 400. In addition, a second camp virtual base 408 (e.g., camp B virtual base) is displayed in the virtual scene 400, and the client displays a virtual barrier 409 surrounding the second camp virtual base 408 in response to the second camp virtual base 408 being hit by energy particles released when the tossable prop 404 explodes, and may also display that the target prop 406 already stored in the second camp virtual base 408 flies out of the second camp virtual base 408 and is scattered again in the virtual scene 400.
In other embodiments, there may be multiple camps in the virtual scene, including the first camp and the second camp, where each camp has at least one virtual base in the virtual scene, and the virtual base hit by the attack prop (e.g., by the energy particles released by the attack prop) may be the virtual base of any one of the multiple camps. In this case, after step 304 shown in fig. 3 is performed, step 305 shown in fig. 5B may also be performed, as will be described in connection with the step shown in fig. 5B.
In step 305, in response to the number of target props stored in the virtual base of any camp reaching a number threshold, the running of the virtual scene is stopped and prompt information indicating that the camp wins is displayed.
In some embodiments, the number of target props stored in a virtual base may be taken as a factor for obtaining a win. For example, assume that there are 4 camps in the virtual scene, namely camp A, camp B, camp C and camp D, each of which has at least one virtual base in the virtual scene: camp A has a virtual base 1 located at the top of the virtual scene, camp B has a virtual base 2 located at the bottom, camp C has a virtual base 3 located at the left, and camp D has a virtual base 4 located at the right. In response to the number of target props stored in the virtual base of any one of the 4 camps reaching a number threshold (e.g., 10), for example, when the server detects that the number of target props stored in the virtual base 2 reaches 10, the server may send a notification of stopping running the virtual scene to the client, and the notification may further carry prompt information indicating that camp B wins, so that after receiving the notification, the client can stop running the virtual scene and display the prompt information indicating that camp B wins.
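The win condition above reduces to a threshold check over the per-camp stored counts. A minimal sketch under stated assumptions: the patent describes this detection as server-side, and the function name and dictionary shape here are invented for illustration.

```python
WIN_THRESHOLD = 10  # number threshold from the example

def check_win(stored_counts):
    """Given {camp: number of target props stored in its virtual base},
    return the first camp whose count reaches the threshold, else None."""
    for camp, count in stored_counts.items():
        if count >= WIN_THRESHOLD:
            return camp  # the server would notify clients to stop the scene
    return None
```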
In some embodiments, target props may be continuously generated in the virtual scene as the virtual scene runs, each virtual object in the virtual scene can hold only one target prop at a time, and the target props stored in each virtual base are carried to the virtual base by the virtual objects of the corresponding camp.
Taking a target prop as an example of a virtual ball in a game, the virtual ball can be continuously generated in a map of a virtual scene along with the progress of the game, for example, the virtual ball can be randomly generated in the map, each virtual object in the virtual scene can only hold one virtual ball at a time, and after the virtual object carries the virtual ball to a virtual base of a host camp for storage, other virtual balls can be continuously acquired in the virtual scene.
In some embodiments, the user may control the virtual object to acquire the target prop from the virtual base of other camps in addition to controlling the virtual object to collect the target prop scattered in the virtual scene, and the client may further perform the following processing: and responding to the movement triggering operation aiming at the first virtual object, controlling the first virtual object to enter the virtual base of the second camp, and acquiring the target prop stored in the virtual base of the second camp.
By way of example, taking the first camp as camp A, the second camp as camp B, and the first virtual object as the virtual object A controlled by the user 1 as an example, the client, in response to a movement operation triggered by the user 1 for the virtual object A, controls the virtual object A to enter the virtual base of camp B and obtain a target prop stored in the virtual base of camp B; for example, when the duration for which the virtual object A waits in the virtual base of camp B reaches a duration threshold (for example, 20 seconds), the target prop stored in the virtual base of camp B can be obtained automatically.
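The dwell-time rule above can be sketched as a small timer on the enemy base. This is an assumed illustration, not the patent's code: the class and method names are invented, and restarting the timer after each prop is obtained is an assumption the text does not specify.

```python
class EnemyBase:
    """An enemy base that yields a stored target prop once a visitor's
    waiting time reaches the duration threshold (20s in the example)."""

    def __init__(self, stored_props, dwell_threshold_s=20):
        self.stored_props = list(stored_props)
        self.dwell_threshold_s = dwell_threshold_s
        self.dwell_s = 0.0

    def tick(self, dt_s):
        """Advance the visitor's waiting time by dt_s seconds; return a
        prop when the threshold is reached and one is stored, else None."""
        self.dwell_s += dt_s
        if self.dwell_s >= self.dwell_threshold_s and self.stored_props:
            self.dwell_s = 0.0  # assumed: the timer restarts per prop
            return self.stored_props.pop()
        return None
```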
In other embodiments, when the virtual scene starts to run, the multiple virtual bases corresponding to the multiple camps respectively may be distributed at different positions of the virtual scene (for example, assuming that there are 4 camps in the virtual scene, the 4 camps may be distributed in 4 different directions above, below, left and right of the virtual scene), and each virtual object included in each camp is born and revived in the virtual base corresponding to the camp, the client may further perform the following processing: displaying at least one target location in the map control; and in response to a migration trigger operation for the first virtual base, moving the first virtual base to a selected target position in the at least one target position, wherein the first virtual base is a virtual base of the first camp.
For example, taking the first camp as camp A as an example, when the virtual scene starts running, the virtual base of camp A (i.e., the first virtual base) is located at the lower-left corner of the virtual scene. As the virtual scene runs, at least one target position may be displayed in the map control, for example, a position 1 located in the middle of the virtual scene, a position 2 located at the upper-left corner, and a position 3 located at the upper-right corner. In response to a migration trigger operation of the user for the virtual base of camp A, and assuming that the migration trigger operation selects position 2, the virtual base of camp A can be moved from the lower-left corner to position 2 (i.e., the upper-left corner of the virtual scene), so that the virtual objects of camp A can conveniently store target props into the virtual base of camp A, further improving the efficiency of man-machine interaction.
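The migration step above can be sketched as a lookup of the selected target position in the map control. This is an illustrative sketch only: the coordinates, names, and the fallback of leaving the base in place for an unknown selection are all assumptions.

```python
TARGET_POSITIONS = {  # hypothetical coordinates for the example positions
    "position_1": (50, 50),    # middle of the virtual scene
    "position_2": (0, 100),    # upper-left corner
    "position_3": (100, 100),  # upper-right corner
}

def migrate_base(current_pos, selected):
    """Move the first virtual base to the target position selected via the
    map control; an unknown selection leaves the base where it is."""
    return TARGET_POSITIONS.get(selected, current_pos)
```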
The following will further describe an interactive processing method of the virtual scene provided in the embodiment of the present application with reference to fig. 6.
For example, referring to fig. 6, fig. 6 is a flowchart of an interactive processing method of a virtual scene provided in an embodiment of the present application, and will be described with reference to the steps shown in fig. 6.
In step 401, a first virtual object is displayed in a virtual scene.
In step 402, a first virtual object is controlled to attack using an attack prop in response to a triggering operation for the attack prop.
It should be noted that, steps 401 to 402 are the same as steps 301 to 302 in fig. 3, and the embodiments of the present application are not described herein again.
In step 403, in response to the second virtual object in the virtual scene being hit by the attacking prop, the target prop held by the second virtual object is displayed to drop from the second virtual object, and the target prop is controlled to switch from the first display mode to the second display mode.
Here, the second display mode characterizes the target prop as being in an unclassifiable state.
In some embodiments, the target prop held by the second virtual object may be displayed in the first display mode (e.g., displayed in a highlighted or luminous display mode). When the target prop falls from the second virtual object, the display mode of the target prop may be switched from the first display mode to the second display mode (e.g., the dropped target prop may be displayed in a non-luminous display mode). For example, when the second virtual object holds the virtual ball, the virtual ball is luminous; when the virtual ball falls from the second virtual object, the virtual ball is switched to the non-luminous display mode (i.e., the virtual ball changes from luminous to non-luminous) to indicate that the virtual ball is currently in a non-pickable state, and at this time neither the first virtual object nor the second virtual object can pick up the virtual ball lying on the ground. For example, when a user controls a virtual object to pick up the target prop, prompt information is displayed to prompt the user that the target prop is currently in the non-pickable state and cannot be picked up. In this way, the interaction modes based on the target prop in the virtual scene are enriched, and the game experience of the user is improved.
In other embodiments, after controlling the target prop to switch from the first display mode to the second display mode, the client may further perform the following: displaying a countdown control on the target prop, where the countdown control is configured to count down the remaining time of the state switch of the target prop, and to control the target prop to switch from the second display mode back to the first display mode (e.g., from the non-luminous display mode back to the luminous display mode) at the end of the countdown, where the first display mode characterizes the target prop as being in a pickable state.
For example, taking the target prop as the virtual ball, when the virtual ball falls from the second virtual object, the virtual ball is switched from the luminous display mode to the non-luminous display mode, and at the same time a countdown control may be displayed on the virtual ball, where the countdown duration displayed in the countdown control may be any set duration (e.g., 30 seconds, i.e., the countdown control displays 30 seconds remaining, 29 seconds remaining, …, 0 seconds remaining). When the countdown ends (i.e., the countdown control displays 0 seconds remaining), the virtual ball changes from non-luminous back to luminous to indicate that the virtual ball is in the pickable state (i.e., after 30 seconds, the virtual ball changes from the non-pickable state back to the pickable state), at which time any virtual object in the virtual scene can pick up the virtual ball.
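The countdown and display-mode switch above can be sketched as a small state object driven by a per-frame tick. This is an assumed illustration, not the patent's implementation; the class name, tick interface, and mode strings are invented.

```python
class DroppedProp:
    """A dropped target prop: starts in the non-luminous (non-pickable)
    display mode and switches back to the luminous (pickable) mode when
    the countdown (30s in the example) ends."""

    def __init__(self, countdown_s=30):
        self.remaining_s = countdown_s
        self.display_mode = "non-luminous"  # second display mode

    def tick(self, dt_s):
        """Advance the countdown by dt_s seconds; at zero, switch back
        to the first (luminous, pickable) display mode."""
        self.remaining_s = max(0, self.remaining_s - dt_s)
        if self.remaining_s == 0:
            self.display_mode = "luminous"

    @property
    def pickable(self):
        return self.display_mode == "luminous"
```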
In addition, it should be noted that when the second virtual object holds the target prop, the user can only control the second virtual object to perform a melee attack with the target prop and cannot switch to the shooting prop for remote shooting; that is, only after the target prop held by the second virtual object falls can the user control the second virtual object to perform remote shooting with the shooting prop. In other words, when the second virtual object holds the target prop, it is in a melee state; after the target prop falls, the second virtual object may revert to a remote attack state (e.g., a gun-holding state, such as controlling the second virtual object to remotely fire using a virtual firearm).
According to the interactive processing method for the virtual scene, when the energy particles released by the attack prop hit the second virtual object, the target prop held by the second virtual object can be dropped; when energy particles released by the attack prop hit the virtual base, a virtual barrier surrounding the virtual base can be generated, so that the virtual base cannot enter and lose at least one of the capability of storing the target prop, namely, different effects can be generated by the attack prop aiming at different use scenes, the application modes of the attack prop are enriched, and the efficiency of man-machine interaction is improved.
In the following, an example application of the embodiment of the present application in an actual application scenario is described by taking an attack prop as a virtual EMP grenade (hereinafter abbreviated as EMP).
The EMP schemes provided in the related art mostly revolve around the attack and defense of modern information equipment, in which the EMP often serves only as a means for an attacker to counter the electronic defense equipment of a defender, with no other application scenes. That is, in the related art, the play space of the EMP is limited by the defender's setup: it can only be used reactively and cannot be applied in most scenes, resulting in poor man-machine interaction efficiency.
In view of this, the embodiment of the present application provides an interactive processing method for a virtual scene in which, in addition to the above application scene, the EMP is further given functions of interacting with virtual objects and with virtual bases, which multiplies its applicable scenes; the EMP is also associated with the interaction mechanism of the core task target of the game, so that the user has more options in tactical choice, thereby improving the efficiency of man-machine interaction.
The mode flow of the virtual scene will be first described below.
In some embodiments, there may be four camps (also known as teams) in the virtual scene. The virtual base of each camp is fixed in one of four directions in the virtual scene at the start of the game (i.e., when the virtual scene has just started to run), and the player-controlled virtual objects are born and revived in their camp's virtual base. Cores (e.g., virtual balls, corresponding to the target props described above, which are the objects the four camps compete for in the virtual scene) may be refreshed in the map as the game progresses (e.g., virtual balls may be refreshed at random positions in the map). Each virtual object can carry only one virtual ball at a time, and a virtual object drops its held virtual ball in place when it dies. A player can control a virtual object to carry the virtual balls scattered in the virtual scene back to the virtual base of its own camp, and the camp that stores a set number (e.g., 10) of virtual balls wins.
The basic rules of the virtual scene are explained below.
In some embodiments, the maximum time that the virtual scene can run may be a set duration (e.g., 20 minutes or 35 minutes), the number of clients accessing the virtual scene may be 4*4 =16 (i.e., there are 16 players in a game, the 16 players are assigned to 4 different teams), and the goal of the lineup to win may be the number of virtual balls stored in the virtual base reaching a number threshold (e.g., 10).
The following continues with the description of rules for virtual bases.
In some embodiments, the virtual base of each camp is fixed in position at the start of the game; for example, the virtual bases of the camps can be respectively distributed in four different directions of the map of the virtual scene. The virtual objects of each camp are born and revived in the virtual base of the corresponding camp, and the virtual objects of each camp need to carry the virtual balls back to the virtual base of their camp for storage. In addition, as the virtual scene runs, at least one fixed point location (corresponding to the target position described above) may be displayed in the map control, to which the virtual base may be migrated. Furthermore, virtual balls can be obtained from a hostile virtual base by interacting with it; for example, a player can control a virtual object to enter the hostile virtual base and press a designated key to make the virtual object interact with the hostile virtual base, and when the duration for which the virtual object stays in the hostile virtual base reaches a duration threshold (for example, 5 seconds), a virtual ball stored in the hostile virtual base can be obtained. When a virtual base is attacked by an EMP, the virtual base may temporarily lose its functions: for example, the virtual base attacked by the EMP cannot be entered, virtual balls cannot be stored in it, and virtual balls stored in it cannot be stolen.
For example, referring to fig. 7A, fig. 7A is an application scenario schematic diagram of an interaction processing method of a virtual scenario provided in the embodiment of the present application, as shown in fig. 7A, a virtual base 701 of a first camp and a first virtual object 702 (for example, a virtual object a controlled by a user 1) born in the virtual base 701 of the first camp are displayed in the virtual scenario 700.
The following continues with the description of prop rules in the virtual scene.
In some embodiments, as shown in fig. 4A, a moving virtual drone may be generated in a map of a virtual scene, and when the virtual drone is knocked down, props, such as EMPs, may be dropped randomly. Furthermore, the number of props that each virtual object in a virtual scene can be equipped with is limited, e.g., each virtual object can be equipped with at most two different props at the same time.
The following continues with the description of the EMP rules, and tactical possibilities derived from the mechanism of EMP.
In some embodiments, the EMP may be dropped from the virtual drone, and the EMP may interact with enemy characters, placed props, and virtual bases, causing them to lose some functionality. For example, the EMP may be used on an enemy character carrying a core, making the character drop its core and be temporarily unable to pick it up; the EMP can be used to destroy an enemy's functional or defensive props; the EMP can be used on an adversary base, so that the base temporarily cannot be entered and cannot store cores, disturbing the task progress of the adversary camp; and the EMP may be used on the player's own base while it is being stolen from, interrupting the adversary character's act of stealing cores from the player's base.
For example, referring to fig. 7B, fig. 7B is an application scenario schematic diagram of an interaction processing method of a virtual scenario provided in the embodiment of the present application, as shown in fig. 7B, a virtual base 701 of a first camp and a first virtual object 702 (for example, a virtual object a controlled by a user 1) are displayed in a virtual scenario 700, where the first virtual object 702 belongs to the first camp, and then a client side, in response to the virtual base 701 of the first camp being hit by an EMP thrown by the first virtual object 702, displays a virtual shield 703 surrounding the virtual base 701 of the first camp, so that virtual objects of other camps can be prevented from entering the virtual base 701 of the first camp to steal a core.
The following describes an interactive processing method of a virtual scene according to an embodiment of the present application with reference to fig. 8.
For example, referring to fig. 8, fig. 8 is a flowchart of an interactive processing method of a virtual scene provided in an embodiment of the present application, and will be described with reference to the steps shown in fig. 8.
In step 801, a client randomly generates a virtual drone in a map of a virtual scene.
In some embodiments, after the game begins, the client may randomly generate a moving virtual drone in the map.
In step 802, the client displays the EMP dropped from the virtual drone in response to the virtual drone being knocked down.
In some embodiments, when the virtual drone is knocked down, it may randomly drop 1-2 props, among which an EMP may appear.
It should be noted that the number of props dropped when the virtual drone is knocked down may be related to the skills of the player-controlled virtual object. For example, when the player-controlled virtual object has a skill that increases the number of dropped props, the virtual drone may drop 2 props; when the player-controlled virtual object does not have that skill, the virtual drone drops only 1 prop.
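The drop-count rule above can be sketched as follows. This is a minimal illustration; the function names and the two-item loot table are assumptions, not part of the embodiment:

```python
import random

def dropped_prop_count(has_drop_bonus_skill: bool) -> int:
    """Number of props a knocked-down drone drops.

    Without the drop-bonus skill the drone drops exactly 1 prop;
    with the skill it drops 2, as described above.
    """
    return 2 if has_drop_bonus_skill else 1

def roll_drops(has_drop_bonus_skill: bool, loot_table=("EMP", "skill_chip")):
    # Each dropped prop is drawn at random from the loot table, so an
    # EMP *may* appear among the drops but is not guaranteed.
    count = dropped_prop_count(has_drop_bonus_skill)
    return [random.choice(loot_table) for _ in range(count)]
```

A drone knocked down by a player whose character has the bonus skill would thus yield two random entries from the loot table.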
In step 803, the client controls the first virtual object to pick up the EMP in response to the pick-up trigger operation for the EMP.
In some embodiments, when the client receives the player's click of the "Q" key or the "E" key on the keyboard, it may control the first virtual object (i.e., the virtual object controlled by the current player) to pick up the EMP and equip the EMP in the prop slot corresponding to that key.
In step 804, the client controls the first virtual object to cast the EMP in response to the casting trigger operation for the EMP.
In some embodiments, when the client receives a press of the key corresponding to the EMP that was just picked up, it controls the first virtual object to throw the EMP along a parabolic trajectory.
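The parabolic throw in step 804 amounts to standard projectile kinematics. A minimal sketch follows; the axis convention (y up) and the gravity constant are assumptions:

```python
def throw_position(origin, velocity, t, gravity=9.8):
    """Position of a thrown EMP at time t along a parabolic arc.

    origin and velocity are (x, y, z) tuples with y as the up axis;
    gravity only affects the vertical component.
    """
    x0, y0, z0 = origin
    vx, vy, vz = velocity
    return (x0 + vx * t,
            y0 + vy * t - 0.5 * gravity * t * t,
            z0 + vz * t)
```

The client would sample this curve each frame and run collision detection at the sampled position, as described in the next step.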
In step 805, the client detects in real time whether the EMP collides with an obstacle in the virtual scene, and when the collision occurs, step 806 is performed.
In some embodiments, as the EMP travels along the parabola, the client detects in real time whether the collision box of the EMP comes into contact with other models in the virtual scene, i.e., performs collision detection.
In step 806, the client controls the EMP to explode, and detects whether a virtual base and a second virtual object exist in the explosion range, and when the virtual base exists, step 807 is performed; when there is a second virtual object, step 808 is performed.
In some embodiments, when a client detects a collision, the EMP is controlled to explode centered at the collision point, while detecting the presence of a virtual base and a second virtual object (i.e., an enemy character, such as virtual object B controlled by user 2) within the explosion area.
It should be noted that the two detection processes may be performed synchronously and not interfere with each other.
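The two independent range checks in step 806 can be sketched as follows; the dictionary-based scene representation is an illustrative assumption:

```python
import math

def in_blast(center, pos, radius):
    # Simple Euclidean-distance test against the blast radius.
    return math.dist(center, pos) <= radius

def resolve_explosion(center, radius, bases, enemies):
    """Return the bases and enemy characters caught in the blast.

    The two scans are independent, mirroring the note above that the
    base check and the enemy-character check do not interfere.
    """
    hit_bases = [b for b in bases if in_blast(center, b["pos"], radius)]
    hit_enemies = [e for e in enemies if in_blast(center, e["pos"], radius)]
    return hit_bases, hit_enemies
```

In an actual engine these scans would typically be sphere-overlap queries against the physics world rather than a list comprehension, but the logic is the same.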
In step 807, the client displays a virtual shield surrounding the virtual base.
In some embodiments, when the client detects the presence of a virtual base in the explosion area, it may display a virtual shield surrounding the virtual base, thereby temporarily rendering the virtual base inaccessible and unable to store cores; if a virtual object of another camp is in the process of stealing a core, the theft is interrupted.
In step 808, the client displays that the core held by the second virtual object falls off and masks the second virtual object's ability to use props.
In some embodiments, if the client detects the presence of an enemy character in the explosion area, it may temporarily prevent the enemy character from using props. Meanwhile, the client may detect whether the enemy character is carrying a core; if so, the core drops in place, the enemy character reverts to a gun-holding state, and the core cannot be picked up again within a certain time.
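The hit effects of step 808 can be sketched as follows; the field names and lockout durations are illustrative assumptions:

```python
def apply_emp_hit(character, now, prop_lock=5.0, core_lock=3.0):
    """Apply EMP effects to an enemy character caught in the blast.

    The character temporarily cannot use props; if it carries a core,
    the core drops in place, the character reverts to holding its gun,
    and the dropped core is locked against pickup for a short time.
    Returns the dropped core record, or None if no core was carried.
    """
    character["props_disabled_until"] = now + prop_lock
    dropped_core = None
    if character.get("core") is not None:
        dropped_core = {
            "pos": character["pos"],               # core falls in place
            "pickup_locked_until": now + core_lock,
        }
        character["core"] = None
        character["holding"] = "gun"               # back to gun-holding state
    return dropped_core
```

Both effects use absolute expiry timestamps, which matches the note below that affected entities resume normal function once the duration expires.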
In other embodiments, the affected virtual base and enemy characters may resume normal function after the effect duration expires.
The interactive processing method of the virtual scene described above expands the design and application scenarios of the EMP in shooting games provided by the related art, and combines the EMP with interactions around the core objective of the game, so that the completion of enemy objectives can be blocked, or the player's own base can be protected through an anti-theft mechanism. This enriches the player's tactical choices, creates more gameplay possibilities, and improves the efficiency of human-computer interaction.
Continuing with the description below of an exemplary structure of the virtual scene interaction processing apparatus 555 implemented as software modules provided in the embodiments of the present application, in some embodiments, as shown in fig. 2, the software modules of the virtual scene interaction processing apparatus 555 stored in the memory 550 may include: a display module 5551 and a control module 5552.
A display module 5551, configured to display a first virtual object in a virtual scene, where the first virtual object holds an attack prop; a control module 5552, configured to control, in response to a trigger operation for the attack prop, the first virtual object to attack using the attack prop; the display module 5551 is further configured to display that a target prop held by a second virtual object in the virtual scene falls from the second virtual object in response to the second virtual object being hit by the attack prop; the control module 5552 is further configured to control the second virtual object to switch from holding the target prop to holding a shooting prop, where the target prop is a target contested by a first camp and a second camp in a match of the virtual scene, the target prop is used to determine the winning camp in the match, the first virtual object belongs to the first camp, and the second virtual object belongs to the second camp; the display module 5551 is further configured to display a virtual barrier surrounding a virtual base in the virtual scene in response to the virtual base being hit by the attack prop, where the virtual barrier is configured to mask at least one of the following: a virtual object in the virtual scene entering the virtual base, and the target prop being stored into the virtual base.
In some embodiments, the attack prop comprises a tossable prop, the tossable prop being dropped by a virtual flying prop in the virtual scene; the display module 5551 is further configured to display the virtual flying prop in the virtual scene before displaying the tossable prop held by the first virtual object, and to display the tossable prop dropped from the virtual flying prop in response to the virtual flying prop being attacked; the control module 5552 is further configured to control the first virtual object to pick up the tossable prop in response to a pick-up trigger operation for the tossable prop.
In some embodiments, control module 5552 is further to control the first virtual object to throw the tossable item in a first direction in response to a throwing trigger operation for the tossable item, wherein the first direction is a direction selected by the throwing trigger operation; and controlling the tossable item to be centered at the point of impact and to release energetic particles in the virtual scene in an outward radiating manner in response to the tossable item colliding with an obstacle in the virtual scene.
In some embodiments, the attack prop comprises a shooting prop to which a skill chip is applied, the original function of the shooting prop being to launch a virtual bullet, the skill chip being to replace the virtual bullet with an energetic particle; the control module 5552 is further configured to control the first virtual object obtaining skill chip; the interaction processing device 555 of the virtual scene further comprises an application module 5553, configured to apply the skill chip in the shooting prop held by the first virtual object in response to the triggering operation for the skill chip.
In some embodiments, the skill chip has a cooldown time; the interaction processing device 555 of the virtual scene further comprises a shielding module 5554 and a determining module 5555, where the shielding module 5554 is configured to suppress the response to the trigger operation for the skill chip when the interval between the first moment and the second moment is less than the cooldown time; the determining module 5555 is configured to determine that the trigger operation for the skill chip is to be responded to when the interval between the first moment and the second moment is greater than or equal to the cooldown time; the first moment is the moment when the skill chip was last applied to the shooting prop, and the second moment is the moment when the trigger operation is received.
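The cooldown gate described here is a simple interval comparison between the two moments. A sketch, with function and parameter names as assumptions:

```python
def should_respond(last_applied, trigger_time, cooldown):
    """Gate a skill-chip trigger by its cooldown.

    last_applied is the first moment (when the chip was last applied to
    the shooting prop) and trigger_time is the second moment (when the
    trigger operation is received); the trigger is honored only once the
    cooldown has fully elapsed.
    """
    return (trigger_time - last_applied) >= cooldown
```

The shielding module corresponds to the False branch (suppress the response) and the determining module to the True branch (respond to the trigger).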
In some embodiments, the skill chip is dropped by a virtual flying prop in the virtual scene; the display module 5551 is further configured to display the virtual flying prop in the virtual scene, and to display the skill chip dropped from the virtual flying prop in response to the virtual flying prop being attacked; the control module 5552 is further configured to control the first virtual object to pick up the skill chip in response to a pick-up trigger operation for the skill chip.
In some embodiments, the control module 5552 is further configured to control, in response to a firing trigger operation for the firing prop, the first virtual object to release energy particles using the firing prop in a second direction in the virtual scene, wherein the second direction is a direction selected by the firing trigger operation.
In some embodiments, the number of items dropped when the virtual flying item is attacked is related to the skill of the first virtual object, wherein the number of items dropped when the virtual flying item is attacked is increased when the first virtual object has the skill of increasing the number of items dropped compared to when the first virtual object does not have the skill of increasing the number of items dropped.
In some embodiments, the shielding module 5554 is further configured to shield the second virtual object from picking up the target prop within a first set period of time, where the first set period of time is positively related to the injury level of the second virtual object when hit by the energy particle.
In some embodiments, the shielding module 5554 is further configured to, in response to the third virtual object in the virtual scene being hit by the energy particle and the third virtual object not holding the target prop, shield the third virtual object from using the attack prop for a second set period of time, where the second set period of time is positively correlated with the injury extent when the third virtual object is hit by the energy particle.
In some embodiments, there are a plurality of camps including the first camp and the second camp in the virtual scene, and each camp has at least one virtual base in the virtual scene, the virtual base hit by the energy particle being the virtual base of any one of the plurality of camps; the interaction processing device 555 of the virtual scene further comprises a stopping module 5556, configured to stop running the virtual scene in response to the number of target props stored in the virtual base of any camp reaching a number threshold; the display module 5551 is further configured to display prompt information indicating that said camp has won.
In some embodiments, the target props are generated continuously in the virtual scene as the virtual scene runs, each virtual object in the virtual scene can hold only one target prop at a time, and the target props stored in each virtual base are carried to the virtual base by the virtual objects included in the corresponding camp.
In some embodiments, when the virtual scene starts to run, a plurality of virtual bases corresponding to the plurality of camps are distributed at different positions of the virtual scene, and the virtual objects included in each camp are born and revive in the virtual base corresponding to that camp; the display module 5551 is further configured to display at least one target location in the map control; the interaction processing device 555 of the virtual scene further includes a moving module 5557, configured to move the first virtual base to a selected one of the at least one target location in response to a migration trigger operation for the first virtual base, where the first virtual base is the virtual base of the first camp.
In some embodiments, the control module 5552 is further configured to control the first virtual object to enter the virtual base of the second camp in response to a movement trigger operation for the first virtual object; the interaction processing device 555 of the virtual scene further includes an obtaining module 5558, configured to obtain a target prop stored in the virtual base of the second camp.
In some embodiments, when the virtual base hit by the attack prop is the virtual base of the first camp, the display module 5551 is further configured to display a first virtual barrier surrounding the virtual base of the first camp in response to the virtual base of the first camp in the virtual scene being hit by the attack prop, where the first virtual barrier is configured to shield virtual objects in the virtual scene other than the first camp from entering the virtual base of the first camp.
In some embodiments, when the virtual base hit by the attack prop is the virtual base of the second camp, the display module 5551 is further configured to display a second virtual barrier surrounding the virtual base of the second camp in response to the virtual base of the second camp in the virtual scene being hit by the attack prop, wherein the second virtual barrier is configured to mask at least one of the following: a virtual object included in the second camp entering the virtual base of the second camp, and the target prop being stored into the virtual base of the second camp.
In some embodiments, when at least one target prop has been stored in the virtual base of the second camp, the determining module 5555 is further configured to perform one of the following processes in response to the virtual base of the second camp in the virtual scene being hit by the energy particle: continuing to store at least one target prop in the virtual base of the second camp; randomly assigning at least one target prop to at least one virtual object in a second camp; and re-dispersing the at least one target prop in the virtual scene.
In some embodiments, the number of target props dropped by the second virtual object is positively correlated with the degree of injury the second virtual object suffers when hit by the energy particles; the display duration of the virtual barrier is positively correlated with the degree of injury when the virtual base is hit by the energy particles.
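A positive correlation between injury and effect duration can be realized, for example, by a clamped linear mapping; the base value, slope, and cap below are illustrative assumptions:

```python
def effect_duration(damage, base=2.0, per_damage=0.1, cap=10.0):
    """Map damage taken to an effect duration in seconds.

    A monotonically increasing (clamped linear) relation: more damage
    from the energy particles yields a longer lockout or barrier
    duration, up to a cap.
    """
    return min(base + per_damage * max(damage, 0.0), cap)
```

The same shape of mapping could drive the pickup-lockout durations and the barrier display duration; only the constants would differ per effect.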
In some embodiments, the display module 5551 is further configured to display a first virtual object in the virtual scene, where the first virtual object holds an attack prop; the control module 5552 is further configured to control, in response to a triggering operation for the attack prop, the first virtual object to attack using the attack prop; the display module 5551 is further configured to display that a target prop held by a second virtual object in the virtual scene falls from the second virtual object in response to the second virtual object being hit by the attack prop; the control module 5552 is further configured to control the target prop to switch from a first display mode to a second display mode, wherein the second display mode characterizes the target prop as being in a non-pickable state.
In other embodiments, the display module 5551 is further configured to display a countdown control on the target prop, where the countdown control is configured to count down the remaining time of the state switch of the target prop, and to control the target prop to switch from the second display mode to the first display mode when the countdown is completed, where the first display mode characterizes that the target prop is in a pickable state.
In some embodiments, the control module 5552 is further configured to control the second virtual object to switch from holding the target prop to holding the shooting prop; when the second virtual object holds the target prop, it can only perform melee attacks with the target prop and cannot use the shooting prop for long-range shooting.
It should be noted that, the description of the apparatus in the embodiment of the present application is similar to the description of the embodiment of the method described above, and has similar beneficial effects as the embodiment of the method, so that a detailed description is omitted. The technical details of the interaction processing device for virtual scenes provided in the embodiments of the present application may be understood according to the description of any one of fig. 3, fig. 5A, fig. 5B, or fig. 6.
Embodiments of the present application provide a computer program product comprising a computer program or computer-executable instructions stored in a computer-readable storage medium. The processor of the electronic device reads the computer executable instructions from the computer readable storage medium, and the processor executes the computer executable instructions, so that the electronic device executes the interactive processing method of the virtual scene in the embodiment of the application.
Embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, cause the processor to perform the interactive processing method of a virtual scene provided by the embodiments of the present application, for example, the interactive processing method of a virtual scene shown in fig. 3, 5A, 5B, or 6.
In some embodiments, the computer readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; but may be a variety of devices including one or any combination of the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, executable instructions may be deployed to be executed on one electronic device or on multiple electronic devices located at one site or, alternatively, on multiple electronic devices distributed across multiple sites and interconnected by a communication network.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application. Any modifications, equivalent substitutions, improvements, etc. that are within the spirit and scope of the present application are intended to be included within the scope of the present application.

Claims (25)

1. An interactive processing method of a virtual scene is characterized by comprising the following steps:
displaying a first virtual object in a virtual scene, wherein the first virtual object holds an attack prop;
responding to triggering operation for the attack prop, and controlling the first virtual object to attack by using the attack prop;
in response to a second virtual object in the virtual scene being hit by the attack prop, displaying that a target prop held by the second virtual object falls from the second virtual object, and controlling the second virtual object to switch from holding the target prop to holding a shooting prop, wherein the target prop is a target contested by a first camp and a second camp in a match of the virtual scene, the target prop is used for determining a winning camp in the match, the first virtual object belongs to the first camp, and the second virtual object belongs to the second camp;
In response to a virtual base in the virtual scene being hit by the attack prop, displaying a virtual barrier surrounding the virtual base, wherein the virtual barrier is used for masking at least one of the following: a virtual object in the virtual scene entering the virtual base, and the target prop being stored into the virtual base.
2. The method of claim 1, wherein:
the attack prop comprises a tossable prop, which is dropped by a virtual flying prop in the virtual scene;
before displaying the tossable prop held by the first virtual object, the method further comprises:
displaying the virtual flying prop in the virtual scene;
responsive to the virtual flying prop being attacked, displaying the tossable prop dropped from the virtual flying prop;
the method further includes controlling the first virtual object to pick up the tossable item in response to a pick up trigger operation for the tossable item.
3. The method of claim 2, wherein the controlling the first virtual object to attack using the attack prop in response to the triggering operation for the attack prop comprises:
Controlling the first virtual object to throw the tossable prop in a first direction in response to a throwing trigger operation for the tossable prop, wherein the first direction is a direction selected by the throwing trigger operation;
in response to the tossable prop colliding with an obstacle in the virtual scene, the tossable prop is controlled to be centered at the point of collision and to release energetic particles in the virtual scene in an outward radiating manner.
4. The method of claim 1, wherein:
the attack prop comprises a shooting prop with a skill chip, wherein the shooting prop has the original function of shooting a virtual bullet, and the skill chip is used for replacing the virtual bullet with an energy particle;
before responding to the triggering operation for the attack prop, the method further comprises:
controlling the first virtual object to acquire the skill chip;
in response to a triggering operation for the skill chip, the skill chip is applied in the shooting prop held by the first virtual object.
5. The method of claim 4, wherein:
the skill chip is dropped by a virtual flying prop in the virtual scene;
The controlling the first virtual object to acquire the skill chip includes:
displaying the virtual flying prop in the virtual scene;
responsive to the virtual flying prop being attacked, displaying the skill chip dropped from the virtual flying prop;
and controlling the first virtual object to pick up the skill chip in response to a pick-up trigger operation for the skill chip.
6. The method of claim 4, wherein the controlling the first virtual object to attack using the attack prop in response to the triggering operation for the attack prop comprises:
and in response to a shooting trigger operation for the shooting prop, controlling the first virtual object to release energy particles in a second direction in the virtual scene by using the shooting prop, wherein the second direction is a direction selected by the shooting trigger operation.
7. The method of claim 2, 5, or 6, wherein:
the number of items that fall when the virtual flying item is attacked is related to the skill that the first virtual object has, wherein when the first virtual object has the skill to increase the number of items that fall, the number of items that fall when the virtual flying item is attacked increases compared to when the first virtual object does not have the skill to increase the number of items that fall.
8. The method of claim 1, wherein in response to a second virtual object in the virtual scene being hit by the attack prop, the method further comprises:
and shielding the second virtual object from picking up the target prop within a first set time length, wherein the first set time length is positively correlated with the injury degree of the second virtual object when the second virtual object is hit by the attack prop.
9. The method according to claim 1, wherein the method further comprises:
and responding to the condition that a third virtual object in the virtual scene is hit by the attack prop and the third virtual object does not hold the target prop, and shielding the third virtual object from using the attack prop within a second set time length, wherein the second set time length is positively correlated with the injury degree of the third virtual object when being hit by the attack prop.
10. The method of claim 1, wherein:
a plurality of camps including the first camp and the second camp exist in the virtual scene, each camp has at least one virtual base in the virtual scene, and the virtual base hit by the attack prop is a virtual base of any one of the plurality of camps;
The method further comprises the steps of:
and in response to the number of the target props stored in the virtual base of any camp reaching a number threshold, stopping running the virtual scene and displaying prompt information indicating that said camp has won.
11. The method of claim 10, wherein:
the target props are continuously generated in the virtual scene along with the running progress of the virtual scene, each virtual object in the virtual scene can only hold one target prop at the same time, and the target props stored in each virtual base are carried to the virtual base by the virtual objects included in the corresponding camps.
12. The method of claim 10, wherein:
when the virtual scene starts to run, a plurality of virtual bases corresponding to the camps are distributed at different positions of the virtual scene, and virtual objects included by each camp are born and revived in the virtual base corresponding to the camps;
the method further comprises the steps of:
displaying at least one target location in the map control;
and in response to a migration trigger operation for a first virtual base, moving the first virtual base to a selected target position in the at least one target position, wherein the first virtual base is a virtual base of the first camp.
13. The method according to claim 1, wherein the method further comprises:
and responding to the movement triggering operation aiming at the first virtual object, controlling the first virtual object to enter the virtual base of the second camp, and acquiring the target prop stored in the virtual base of the second camp.
14. The method of claim 1, wherein:
when the virtual base hit by the attack prop is the virtual base of the first camp, the responding to the virtual base in the virtual scene being hit by the attack prop displays a virtual barrier surrounding the virtual base, including:
in response to the virtual base of the first camp in the virtual scene being hit by the attack prop, a first virtual barrier surrounding the virtual base of the first camp is displayed, wherein the first virtual barrier is used for shielding virtual objects except the first camp in the virtual scene from entering the virtual base of the first camp.
15. The method of claim 1, wherein:
when the virtual base hit by the attack prop is the virtual base of the second camp, the responding to the virtual base in the virtual scene being hit by the attack prop displays a virtual barrier surrounding the virtual base, including:
In response to the virtual base of the second camp in the virtual scene being hit by the attack prop, displaying a second virtual barrier surrounding the virtual base of the second camp, wherein the second virtual barrier is used for masking at least one of the following: a virtual object included in the second camp entering the virtual base of the second camp, and the target prop being stored into the virtual base of the second camp.
16. The method of claim 15, wherein:
when at least one target prop has been stored in the virtual base of the second camp, in response to the virtual base of the second camp in the virtual scene being hit by the attack prop, the method further comprises:
one of the following processes is performed:
continuing to store at least one of the target props in the virtual base of the second camp;
randomly assigning at least one of the target props to at least one virtual object in the second camp;
and re-dispersing at least one target prop in the virtual scene.
17. An interactive processing method of a virtual scene is characterized by comprising the following steps:
Displaying a first virtual object in a virtual scene, wherein the first virtual object holds an attack prop;
responding to triggering operation for the attack prop, and controlling the first virtual object to attack by using the attack prop;
and in response to a second virtual object in the virtual scene being hit by the attack prop, displaying that a target prop held by the second virtual object falls from the second virtual object, and controlling the target prop to switch from a first display mode to a second display mode, wherein the second display mode represents that the target prop is in a non-pickable state.
18. The method of claim 17, wherein after controlling the target prop to switch from the first display mode to the second display mode, the method further comprises:
and displaying a countdown control on the target prop, wherein the countdown control is used for counting down the residual time of state switching of the target prop, and controlling the target prop to switch from the second display mode to the first display mode when the countdown is finished, and the first display mode represents that the target prop is in a pickable state.
19. The method of claim 17, wherein:
the target prop is a target contested by a first camp and a second camp in a match of the virtual scene, the target prop is used for determining a winning camp in the match, the first virtual object belongs to the first camp, and the second virtual object belongs to the second camp.
20. The method of claim 17, wherein in response to a second virtual object in the virtual scene being hit by the attack prop, the method further comprises:
controlling the second virtual object to switch from holding the target prop to holding a shooting prop;
when the second virtual object holds the target prop, the second virtual object can only use the target prop for melee attacks and cannot use the shooting prop for long-range shooting.
21. An interactive processing apparatus for a virtual scene, the apparatus comprising:
the display module is used for displaying a first virtual object in the virtual scene, wherein the first virtual object holds an attack prop;
the control module is used for responding to the triggering operation for the attack prop and controlling the first virtual object to attack by using the attack prop;
The display module is further configured to display that a target prop held by a second virtual object in the virtual scene falls from the second virtual object in response to the second virtual object being hit by the attack prop;
the control module is further configured to control the second virtual object to switch from holding the target prop to holding a shooting prop, wherein the target prop is a target contested by a first camp and a second camp in a match of the virtual scene, the target prop is used for determining a winning camp in the match, the first virtual object belongs to the first camp, and the second virtual object belongs to the second camp;
the display module is further configured to display a virtual barrier surrounding a virtual base in response to the virtual base in the virtual scene being hit by the attack prop, wherein the virtual barrier is used for masking at least one of: a virtual object in the virtual scene entering the virtual base, and the target prop being stored into the virtual base.
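The virtual-barrier behavior recited above — a hit on the base raises a barrier that blocks entry and blocks storing the target prop — can be sketched as a small state object. The class and method names below are illustrative assumptions, not from the patent.

```python
class VirtualBase:
    """Minimal model of a virtual base that raises a barrier when hit."""
    def __init__(self):
        self.barrier_active = False
        self.stored_props: list = []

    def on_hit_by_attack_prop(self) -> None:
        # Claimed behavior: display a barrier surrounding the base once hit.
        self.barrier_active = True

    def try_enter(self, virtual_object: str) -> bool:
        # The barrier masks (blocks) virtual objects entering the base.
        return not self.barrier_active

    def try_store(self, prop: str) -> bool:
        # The barrier also masks storing the target prop into the base.
        if self.barrier_active:
            return False
        self.stored_props.append(prop)
        return True

base = VirtualBase()
stored_before_hit = base.try_store("target_prop")  # succeeds: no barrier yet
base.on_hit_by_attack_prop()                       # barrier goes up
```

While the barrier is active, both `try_enter` and `try_store` fail, so the attacking camp can temporarily lock the opposing camp out of scoring.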
22. An interactive processing apparatus for a virtual scene, the apparatus comprising:
the display module is used for displaying a first virtual object in the virtual scene, wherein the first virtual object holds an attack prop;
The control module is configured to control, in response to a triggering operation for the attack prop, the first virtual object to attack using the attack prop;
the display module is further configured to display that a target prop held by a second virtual object in the virtual scene falls from the second virtual object in response to the second virtual object being hit by the attack prop;
the control module is further configured to control the target prop to switch from a first display mode to a second display mode, wherein the second display mode characterizes that the target prop is in a non-pickable state.
23. An electronic device, the electronic device comprising:
a memory for storing executable instructions;
a processor, configured to implement the interactive processing method for a virtual scene according to any one of claims 1 to 16 or any one of claims 17 to 20 when executing the executable instructions stored in the memory.
24. A computer readable storage medium storing computer executable instructions which, when executed by a processor, implement the method of interactive processing of a virtual scene according to any one of claims 1 to 16 or any one of claims 17 to 20.
25. A computer program product comprising a computer program or computer executable instructions which, when executed by a processor, implement the method of interactive processing of a virtual scene as claimed in any one of claims 1 to 16 or any one of claims 17 to 20.
CN202211011522.2A 2022-08-23 2022-08-23 Interactive processing method and device for virtual scene, electronic equipment and storage medium Pending CN117654038A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211011522.2A CN117654038A (en) 2022-08-23 2022-08-23 Interactive processing method and device for virtual scene, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117654038A true CN117654038A (en) 2024-03-08

Family

ID=90064631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211011522.2A Pending CN117654038A (en) 2022-08-23 2022-08-23 Interactive processing method and device for virtual scene, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117654038A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination