CN112121433A - Method, device and equipment for processing virtual prop and computer readable storage medium - Google Patents
Method, device and equipment for processing virtual prop and computer readable storage medium
- Publication number
- CN112121433A (application number CN202011063621.6A)
- Authority
- CN
- China
- Prior art keywords
- virtual
- item
- package
- prop
- virtual object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 66
- 238000012545 processing Methods 0.000 title claims abstract description 33
- 230000008859 change Effects 0.000 claims abstract description 45
- 230000004044 response Effects 0.000 claims abstract description 25
- 230000033001 locomotion Effects 0.000 claims abstract description 23
- 230000015654 memory Effects 0.000 claims description 24
- 230000006870 function Effects 0.000 claims description 18
- 230000000694 effects Effects 0.000 claims description 16
- 238000003672 processing method Methods 0.000 claims description 11
- 231100000225 lethality Toxicity 0.000 claims description 9
- 230000001960 triggered effect Effects 0.000 claims description 8
- 230000009471 action Effects 0.000 claims description 7
- 230000004888 barrier function Effects 0.000 claims description 4
- 230000003993 interaction Effects 0.000 abstract description 25
- 238000010586 diagram Methods 0.000 description 19
- 230000008569 process Effects 0.000 description 16
- 230000007123 defense Effects 0.000 description 14
- 238000001816 cooling Methods 0.000 description 9
- 230000008447 perception Effects 0.000 description 7
- 238000001514 detection method Methods 0.000 description 6
- 230000006698 induction Effects 0.000 description 6
- 239000008280 blood Substances 0.000 description 5
- 210000004369 blood Anatomy 0.000 description 5
- 230000006378 damage Effects 0.000 description 5
- 238000011068 loading method Methods 0.000 description 5
- 230000016776 visual perception Effects 0.000 description 5
- 230000003190 augmentative effect Effects 0.000 description 4
- 238000004364 calculation method Methods 0.000 description 4
- 238000009877 rendering Methods 0.000 description 4
- 230000009183 running Effects 0.000 description 4
- 208000027418 Wounds and injury Diseases 0.000 description 3
- 238000004891 communication Methods 0.000 description 3
- 238000010304 firing Methods 0.000 description 3
- 208000014674 injury Diseases 0.000 description 3
- 230000009191 jumping Effects 0.000 description 3
- 230000036961 partial effect Effects 0.000 description 3
- 230000002829 reductive effect Effects 0.000 description 3
- 238000004088 simulation Methods 0.000 description 3
- 238000004458 analytical method Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 2
- 230000009194 climbing Effects 0.000 description 2
- 238000004590 computer program Methods 0.000 description 2
- 230000007613 environmental effect Effects 0.000 description 2
- 230000014509 gene expression Effects 0.000 description 2
- 239000011521 glass Substances 0.000 description 2
- 230000002452 interceptive effect Effects 0.000 description 2
- 230000002147 killing effect Effects 0.000 description 2
- 230000000670 limiting effect Effects 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 230000002093 peripheral effect Effects 0.000 description 2
- 230000008439 repair process Effects 0.000 description 2
- 238000013515 script Methods 0.000 description 2
- 230000009192 sprinting Effects 0.000 description 2
- 230000009182 swimming Effects 0.000 description 2
- 230000003042 antagonistic Effects 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 230000009193 crawling Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000002708 enhancing effect Effects 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000002045 lasting effect Effects 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 238000003062 neural network model Methods 0.000 description 1
- 230000036544 posture Effects 0.000 description 1
- 230000035945 sensitivity Effects 0.000 description 1
- 230000014860 sensory perception of taste Effects 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 238000003786 synthesis reaction Methods 0.000 description 1
- 238000012549 training Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
- 230000009184 walking Effects 0.000 description 1
Images
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application provides a method, an apparatus, a device, and a computer-readable storage medium for processing a virtual item. The method includes: presenting, in a picture of a virtual scene, a virtual item package containing at least two first virtual items; in response to a motion instruction for a virtual object in the picture, controlling the virtual object to move towards the virtual item package; and, when the virtual object moves into a target area and the virtual item package is in a pickable state, controlling the virtual object to pick up a first virtual item in the virtual item package and presenting attribute change indication information of a second virtual item based on the picked-up first virtual item, the attribute change indication information indicating that an attribute of the second virtual item equipped by the virtual object has changed. Through the application, human-computer interaction efficiency can be improved.
Description
Technical Field
The present application relates to the field of computers, and in particular, to a method, an apparatus, a device, and a computer-readable storage medium for processing a virtual item.
Background
In most virtual scene applications, because virtual objects need to interact with one another, they are usually equipped with virtual props; a user interacts by controlling a virtual object and using its virtual props, so that the attributes of the virtual props equipped by the two interacting parties change. However, changing the attribute of a virtual prop equipped by the user or the other party often requires repeated or prolonged interaction, so human-computer interaction efficiency is low.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, a device, and a computer-readable storage medium for processing a virtual item, which can improve human-computer interaction efficiency.
The embodiment of the application provides a method for processing a virtual item, which comprises the following steps:
presenting a virtual item packet containing at least two first virtual items in a picture of a virtual scene;
controlling the virtual object to move towards the virtual item package in response to a motion instruction for the virtual object in the picture;
when the virtual object moves to a target area and the virtual item package is in a pickable state, controlling the virtual object to pick up a first virtual item in the virtual item package, and
and presenting attribute change indication information of the second virtual item based on the picked first virtual item, wherein the attribute change indication information is used for indicating that the attribute of the second virtual item equipped by the virtual object changes.
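The claimed flow above — present the package, move the object into the target area, pick up a first prop, and surface the equipped second prop's attribute change — can be sketched roughly as follows. This is a minimal Python sketch; all class names, fields, and the fixed boost value are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Prop:
    kind: str
    value: int          # e.g. durability of a protective prop

@dataclass
class PropPackage:
    contents: List[str]  # types of the "first virtual props" inside
    pickable: bool = True

def pick_up(package: PropPackage, equipped: Prop, in_target_area: bool,
            boost: int = 10) -> Optional[str]:
    """If the object stands in the target area and the package is pickable,
    consume one first prop and return the attribute-change indication text."""
    if not (in_target_area and package.pickable and package.contents):
        return None
    picked = package.contents.pop(0)
    before = equipped.value
    equipped.value += boost  # the picked first prop changes the equipped second prop
    return f"picked {picked}: {equipped.kind} {before} -> {equipped.value}"
```

Under these assumptions, a successful pickup both mutates the equipped prop and yields the indication text that a client would render.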
An embodiment of the present application provides a processing apparatus of a virtual item, including:
The first presentation module is used for presenting a virtual item packet containing at least two first virtual items in a picture of a virtual scene;
the motion control module is used for responding to a motion instruction aiming at a virtual object in the picture, and controlling the virtual object to move towards the virtual item packet;
the picking control module is used for controlling the virtual object to pick up a first virtual prop in the virtual prop packet when the virtual object moves to a target area and the virtual prop packet is in a pickable state;
and the second presentation module is used for presenting attribute change indication information of the second virtual item based on the picked first virtual item, wherein the attribute change indication information is used for indicating that the attribute of the second virtual item equipped by the virtual object changes.
In the foregoing solution, before the virtual item package including at least two first virtual items is presented in the screen of the virtual scene, the apparatus further includes:
the throwing control module is also used for presenting the operation control of the virtual prop packet;
when the operation control is in an activated state, the virtual object is controlled to throw the virtual prop packet in response to a trigger operation for the operation control.
In the above scheme, the throwing control module is further configured to acquire a first position of the virtual object in the virtual scene, and a second position that is a target distance from the first position along the orientation of the virtual object;
when no barrier exists between the first position and the second position, controlling the virtual object to throw the virtual prop packet to the second position.
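The throw-position logic above can be modelled in 2-D: compute a second position a target distance along the object's facing direction, then test whether anything blocks the segment between the two positions. This sketch stands in for an engine's raycast; the circle-obstacle model and the radius are illustrative assumptions.

```python
import math
from typing import List, Optional, Tuple

Point = Tuple[float, float]

def throw_target(first_pos: Point, yaw_deg: float, distance: float) -> Point:
    """Second position: `distance` along the object's orientation."""
    x, y = first_pos
    rad = math.radians(yaw_deg)
    return (x + distance * math.cos(rad), y + distance * math.sin(rad))

def path_blocked(a: Point, b: Point, obstacles: List[Point],
                 radius: float = 0.5) -> bool:
    """True if any circular obstacle intersects segment a-b."""
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    length2 = dx * dx + dy * dy
    for ox, oy in obstacles:
        # parameter of the closest point on the segment to the obstacle centre
        t = 0.0 if length2 == 0 else max(
            0.0, min(1.0, ((ox - ax) * dx + (oy - ay) * dy) / length2))
        cx, cy = ax + t * dx, ay + t * dy
        if math.hypot(ox - cx, oy - cy) <= radius:
            return True
    return False

def try_throw(first_pos: Point, yaw_deg: float, distance: float,
              obstacles: List[Point]) -> Optional[Point]:
    """Throw to the second position only when no barrier lies in between."""
    second_pos = throw_target(first_pos, yaw_deg, distance)
    return None if path_blocked(first_pos, second_pos, obstacles) else second_pos
```

A blocked path yields `None`, matching the claim that the package is thrown to the second position only when no barrier exists between the two positions.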
In the above scheme, before the controlling the virtual object to throw the virtual item package, the apparatus further includes:
the trigger control module is used for acquiring the state of the virtual object;
when the state of the virtual object represents that the virtual object is in a state capable of throwing the virtual item package, triggering and controlling the virtual object to throw the virtual item package;
when the state of the virtual object represents that the virtual object is in a state that the virtual item packet cannot be thrown, presenting indication information indicating that the virtual item packet is thrown unsuccessfully.
In the above scheme, after the controlling the virtual object to throw the virtual item package, the apparatus further includes:
the third presentation module is used for acquiring a throwing position of the virtual item package when the virtual object releases the virtual item package;
and when the throwing position of the virtual prop packet does not meet the position throwing condition, presenting indication information indicating that the virtual prop packet fails to throw.
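Together, the two gates described in the preceding schemes — the thrower's state and the validity of the release position — amount to a simple two-step validation. The state flag, the position predicate, and the message strings below are illustrative assumptions.

```python
from typing import Callable, Tuple

def attempt_throw(can_throw: bool, release_pos: Tuple[float, float],
                  position_ok: Callable[[Tuple[float, float]], bool]) -> str:
    """Check throwability first, then validate the release position."""
    if not can_throw:
        # object is in a state in which the package cannot be thrown
        return "throw failed: object cannot throw the package now"
    if not position_ok(release_pos):
        # throwing position does not meet the position condition
        return "throw failed: invalid throwing position"
    return "package thrown"
```

The returned strings correspond to the indication information that the package failed to be thrown, or to a successful throw.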
In the above scheme, the apparatus further comprises:
a fourth presentation module for presenting a map thumbnail of the virtual scene;
and presenting the position information of the virtual object and the virtual item package in the map thumbnail, wherein the position information is used for controlling the virtual object to move towards the virtual item package based on the position information.
In the above scheme, after controlling the virtual object to move towards the virtual item package, the first presentation module is further configured to display the virtual item package in a first display style when the virtual item package is in a pickable state;
and when the virtual prop package is in a non-picking state, displaying the virtual prop package in a second display style different from the first display style.
In the above scheme, when the virtual object moves to a target area and the virtual item package is in a pickable state, the pickup control module is further configured to present the sensing area when the target area is a sensing area in which the virtual item package senses the virtual object;
and when the virtual object moves into the sensing area and the virtual item package is in a pickable state, trigger the virtual object, through the sensing action of the virtual item package, to pick up a first virtual item in the virtual item package.
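Modelling the sensing area as a circle around the dropped package, the automatic pickup trigger can be sketched as below. The radius value and the dictionary layout are illustrative assumptions.

```python
import math
from typing import Dict, Optional, Tuple

Point = Tuple[float, float]

def in_sensing_area(obj_pos: Point, package_pos: Point, radius: float) -> bool:
    """The package 'senses' the object once it enters this circle."""
    return math.dist(obj_pos, package_pos) <= radius

def sense_and_pick(obj_pos: Point, package: Dict) -> Optional[str]:
    """Auto-pick one first prop when the object enters the sensing area."""
    if (package["pickable"] and package["contents"]
            and in_sensing_area(obj_pos, package["pos"], package["radius"])):
        return package["contents"].pop(0)
    return None
```

No explicit pickup command is needed in this variant: entering the sensing area while the package is pickable is itself the trigger.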
In the above scheme, the pickup control module is further configured to present a pickup function icon of the virtual item package;
and in response to the triggering operation aiming at the picking-up function icon, controlling the virtual object to pick up a first virtual item in the virtual item package.
In the above scheme, the pickup control module is further configured to display the at least two types of first virtual items in the virtual item package in different display manners when the types of the first virtual items include at least two types;
and controlling the virtual object to pick up the first virtual prop with the target type matched with the type of the second virtual prop.
In the foregoing scheme, the pickup control module is further configured to, when the types of the second virtual items include at least two types, obtain a type of a second virtual item with a highest consumption degree among the at least two types of second virtual items;
and controlling the virtual object to pick up the first virtual prop with the target type matched with the type of the second virtual prop with the highest consumption degree.
In the foregoing scheme, the pickup control module is further configured to, when the types of the second virtual items include at least two types, obtain, based on usage preferences of the virtual object for each of the second virtual items, a type of a second virtual item with a highest preference degree among the at least two types of second virtual items;
and controlling the virtual object to pick up the first virtual prop of the target type matched with the type of the second virtual prop with the highest preference degree.
In the above scheme, the pickup control module is further configured to, when the types of the first virtual items in the virtual item package include at least two types, present a lethality index of each of the first virtual items in the virtual item package for the target virtual object;
and controlling the virtual object to pick up the first virtual prop with the highest lethality index.
In the above scheme, the pickup control module is further configured to present the number of times each first virtual item is picked up when the types of the first virtual item in the virtual item package include at least two types;
and controlling the virtual object to pick up the first virtual prop with the maximum picking-up times.
In the above scheme, the pickup control module is further configured to display the at least two types of first virtual items in the virtual item package in different display manners when the types of the first virtual items include at least two types;
acquiring the assigned role of the virtual object in the team;
and controlling the virtual object to pick up a first virtual prop of a target type matched with the role.
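The selection strategies described in the preceding schemes — matching the equipped second prop, the most-consumed second prop, usage preference, lethality index, pick-up popularity, and team role — can be gathered into one illustrative dispatcher. All policy names and context keys are assumptions for this sketch, not terminology from the patent.

```python
from typing import Dict, List, Optional

def choose_first_prop(package: List[str], policy: str, ctx: Dict) -> Optional[str]:
    """Decide which first prop in the package the virtual object picks up."""
    if policy == "match_equipped":      # same type as the equipped second prop
        return next((p for p in package if p == ctx["equipped_type"]), None)
    if policy == "most_consumed":       # lowest remaining attribute value
        neediest = min(ctx["equipped"], key=ctx["equipped"].get)
        return next((p for p in package if p == neediest), None)
    if policy == "preference":          # most-used second-prop type
        favourite = max(ctx["usage"], key=ctx["usage"].get)
        return next((p for p in package if p == favourite), None)
    if policy == "lethality":           # highest lethality index vs the target
        return max(package, key=lambda p: ctx["lethality"].get(p, 0))
    if policy == "popularity":          # picked up the most times so far
        return max(package, key=lambda p: ctx["pick_counts"].get(p, 0))
    if policy == "team_role":           # matches the object's role in the team
        return next((p for p in package if p in ctx["role_kit"]), None)
    raise ValueError(f"unknown policy: {policy}")
```

In practice a client would apply exactly one of these policies per pickup; the dispatcher form just makes the alternatives easy to compare.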
In the foregoing scheme, the second presenting module is further configured to present an attribute change special effect of the second virtual item, where the attribute change special effect is used to indicate that an attribute of the second virtual item equipped by the virtual object changes.
In the above scheme, before the control of the virtual object to pick up the first virtual item in the virtual item package, the second presentation module is further configured to present an attribute value of the second virtual item, where the attribute value is a first attribute value;
correspondingly, after the virtual object is controlled to pick up the first virtual item in the virtual item package, the second presentation module is further configured to present that the attribute value of the second virtual item is changed from the first attribute value to the second attribute value.
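The before/after presentation of the equipped prop's attribute value can be sketched as follows; the field names and the cap at a maximum value are illustrative assumptions.

```python
from typing import Dict

def apply_and_present(second_prop: Dict, boost: int) -> str:
    """Change the equipped prop's attribute value and return indication text
    showing the first attribute value and the second attribute value."""
    first_value = second_prop["value"]
    second_value = min(first_value + boost, second_prop["max"])
    second_prop["value"] = second_value
    return f'{second_prop["name"]}: {first_value} -> {second_value}'
```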
In the foregoing scheme, the second presenting module is further configured to present, when the second virtual prop is a virtual protection prop carried or worn by the virtual object, indication information that a protection attribute of the virtual protection prop changes.
In the foregoing solution, after the virtual item package including at least two first virtual items is presented in the screen of the virtual scene, the apparatus further includes:
a presentation canceling module, configured to present a countdown corresponding to the virtual item packet, and cancel presentation of the virtual item packet in a screen of a virtual scene when the countdown is zero; or,
when the virtual object throwing the virtual item package throws the virtual item package again, canceling the presentation of the virtual item package in the picture of the virtual scene; or,
when a virtual object which throws the virtual prop packet is attacked and died in a virtual scene, canceling to present the virtual prop packet in a picture of the virtual scene; or,
and when the first virtual item in the virtual item packet is picked up completely, canceling the presentation of the virtual item packet in the picture of the virtual scene.
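The four conditions listed above for cancelling presentation of the package can be combined into a single predicate; the field names are illustrative assumptions.

```python
from typing import Dict

def should_despawn(package: Dict, now: float) -> bool:
    """A thrown package is removed from the scene when any condition holds:
    its countdown reached zero, its thrower threw a new package, its thrower
    was killed, or all first props in it have been picked up."""
    return (
        now >= package["expires_at"]    # countdown corresponding to the package hit zero
        or package["owner_rethrew"]     # the throwing object threw the package again
        or package["owner_dead"]        # the throwing object was attacked and died
        or not package["contents"]      # every first prop has been picked up
    )
```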
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the processing method of the virtual prop provided by the embodiment of the application when the executable instruction stored in the memory is executed.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, implement the method for processing a virtual prop provided by the embodiments of the present application.
The embodiment of the application has the following beneficial effects:
By controlling the virtual object to pick up a first virtual prop in the virtual prop packet, the attribute of a second virtual prop equipped by the virtual object can be changed. Compared with changing the attribute of an equipped virtual prop through repeated or prolonged interaction, this reduces the number of interactions required to achieve the interaction goal, improves human-computer interaction efficiency, reduces the occupation of hardware processing resources, and improves the user's interaction experience in the virtual scene.
Drawings
Fig. 1A-1B are schematic application mode diagrams of a processing method for a virtual item according to an embodiment of the present application;
fig. 2 is an alternative structural schematic diagram of an electronic device provided in an embodiment of the present application;
fig. 3 is an optional schematic flow chart of a method for processing a virtual item according to an embodiment of the present application;
FIGS. 4A-4B are schematic diagrams of display interfaces provided by embodiments of the present application;
FIGS. 5A-5B are schematic diagrams of virtual reality interfaces provided by embodiments of the present application;
FIG. 6 is a schematic diagram of a display interface provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of a display interface provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of a display interface provided in an embodiment of the present application;
FIG. 9 is a schematic diagram of a display interface provided in an embodiment of the present application;
fig. 10 is an optional flowchart of a method for processing a virtual item according to an embodiment of the present application;
fig. 11 is an optional flowchart schematic diagram of a processing method of a virtual item according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a processing device of a virtual item according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the attached drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first" and "second" are used merely to distinguish similar objects and do not denote a particular order of objects. It should be understood that "first" and "second" may be interchanged, where permitted, in a particular order or sequence, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further describing the embodiments of the present application in detail, the terms and expressions used in the embodiments of the present application are explained as follows.
1) Client: an application program running in a terminal to provide various services, such as a video playing client, an instant messaging client, or a live-streaming client.
2) "In response to": indicates the condition or state on which a performed operation depends. When the condition or state it depends on is satisfied, the one or more operations performed may be executed in real time or with a set delay; unless otherwise specified, there is no restriction on the order in which the operations are executed.
3) The virtual scene is a virtual scene displayed (or provided) when an application program runs on a terminal, and the virtual scene may be a simulation environment of a real world, a semi-simulation semi-fictional virtual environment, or a pure fictional virtual environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application.
For example, when the virtual scene is a three-dimensional virtual space, the three-dimensional virtual space may be an open space, and the virtual scene may be used to simulate a real environment in reality, for example, the virtual scene may include sky, land, sea, and the like, and the land may include environmental elements such as a desert, a city, and the like. Of course, the virtual scene may also include virtual objects, for example, buildings, vehicles, or props such as weapons required for arming themselves or fighting with other virtual objects in the virtual scene, and the virtual scene may also be used to simulate real environments in different weathers, for example, weather such as sunny days, rainy days, foggy days, or dark nights. The user may control the movement of the virtual object in the virtual scene.
4) Virtual object: the image of any person or thing in the virtual scene that can interact, or any movable object in the virtual scene. The movable object can be a virtual character, a virtual animal, an animation character, etc., such as characters, animals, plants, oil drums, walls, and stones displayed in the virtual scene. The virtual object may be a virtual avatar representing the user in the virtual scene. A virtual scene may include multiple virtual objects, each having its own shape and volume in the virtual scene and occupying a part of the space in the virtual scene.
Alternatively, the virtual object may be a user character controlled through operations on the client, an Artificial Intelligence (AI) configured in the virtual-scene battle through training, or a Non-Player Character (NPC) configured for interaction in the virtual scene. Alternatively, the virtual object may be a virtual character engaging in adversarial interaction in the virtual scene. Optionally, the number of virtual objects participating in the interaction in the virtual scene may be preset, or may be dynamically determined according to the number of clients participating in the interaction.
Taking a shooting game as an example, the user may control a virtual object to free-fall, glide, open a parachute to descend, run, jump, climb, bend over, and move on land, or control a virtual object to swim, float, or dive in the sea. The user may also control a virtual object to move in the virtual scene by riding a virtual vehicle, for example a virtual car, a virtual aircraft, or a virtual yacht; the above scenes are merely examples, and the present application is not limited thereto. The user can also control the virtual object to engage in adversarial interaction with other virtual objects through virtual props. For example, a virtual prop may be a throwing-type virtual prop such as a grenade, cluster grenade, or sticky grenade, or a shooting-type virtual prop such as a machine gun, pistol, or rifle; the type of the virtual prop is not specifically limited in the application.
5) Scene data, representing various features that objects in the virtual scene are exposed to during the interaction, may include, for example, the location of the objects in the virtual scene. Of course, different types of features may be included depending on the type of virtual scene; for example, in a virtual scene of a game, scene data may include a time required to wait for various functions provided in the virtual scene (depending on the number of times the same function can be used within a certain time), and attribute values indicating various states of a game character, for example, a life value (also referred to as a red amount) and a magic value (also referred to as a blue amount), and the like.
The embodiments of the present application provide a method, an apparatus, an electronic device, and a computer-readable storage medium for processing a virtual item, which can diversify the functions triggered by picking up a virtual item and improve playability. Exemplary applications of the electronic device provided in the embodiments of the present application are described below, taking as an example the case where the device is implemented as a terminal.
To make the method for processing a virtual item provided in the embodiments of the present application easier to understand, an exemplary implementation scenario of the method is first described. The virtual scene may be output entirely by a terminal, or output through cooperation between a terminal and a server.
In some embodiments, the virtual scene may be a picture presented in a military exercise simulation, in which a user may rehearse tactics, strategy, or weaponry through virtual objects belonging to different groups; such a scene has great guiding value for the command of military operations. In some embodiments, the virtual scene may be an environment in which game characters interact, for example by battling each other; by controlling the actions of virtual objects, both sides can interact in the virtual scene, allowing the user to relieve everyday stress while playing.
In one implementation scenario, referring to fig. 1A, fig. 1A is an application mode schematic diagram of the processing method for a virtual item provided in the embodiment of the present application. This mode is applicable to applications in which the computation of data related to the virtual scene 100 can be completed entirely by the computing capability of the terminal 400, for example a standalone/offline game, where the output of the virtual scene is completed through a terminal 400 such as a smartphone, a tablet computer, or a virtual reality/augmented reality device.
When forming the visual perception of the virtual scene 100, the terminal 400 calculates the data required for display through graphics computing hardware, completes the loading, parsing, and rendering of the display data, and outputs, at the graphics output hardware, video frames capable of forming visual perception of the virtual scene; for example, two-dimensional video frames are displayed on the screen of a smartphone, or video frames realizing a three-dimensional display effect are projected onto the lenses of augmented reality/virtual reality glasses. In addition, to enrich the perception effect, the device may also form one or more of auditory perception, tactile perception, motion perception, and taste perception by means of different hardware.
As an example, the terminal 400 installs and runs a client 410 (e.g., a standalone game application) supporting a virtual scene, which may be any one of a First-Person Shooter (FPS) game, a third-person shooter game, a Multiplayer Online Battle Arena (MOBA) game, a Virtual Reality (VR) application, a Three-Dimensional (3D) map program, an Augmented Reality (AR) application, a military simulation program, or a multiplayer gun-battle survival game. The user uses the terminal 400 to control the virtual object located in the virtual scene to perform activities, including but not limited to: adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the virtual object is a virtual character, such as a simulated character or an animated character.
The virtual scene includes a virtual object 110 and a virtual item package 120. The virtual object 110 may be a game character controlled by a user (or player); that is, the virtual object 110 is controlled by a real user and moves in the virtual scene in response to the real user's operation of a controller (including a touch screen, a voice-activated switch, a keyboard, a mouse, a joystick, and the like). For example, when the real user moves the joystick to the left, the virtual object 110 moves to the left in the virtual scene; it may also stay still in place, jump, and use various functions (such as skills and props). Virtual items in the virtual item package 120 may be picked up by the virtual object 110; for example, the virtual object 110 picks up a first virtual item in the virtual item package 120 and, based on the picked-up first virtual item, changes an attribute of a second virtual item equipped by the virtual object 110. The virtual item package 120 may be thrown by the virtual object 110, or may be thrown by another object.
For example, in a shooting game application, a picture of the virtual scene 100 observed from the perspective of the virtual object 110 is presented on the client 410, and a virtual item package 120 containing at least two first virtual items is presented in the picture. The user controls the virtual object 110 to move towards the virtual item package 120 through the client 410; when the virtual object 110 moves to a target area centered on a target position and the virtual item package is in a pickable state, the virtual object 110 is controlled to pick up a first virtual item in the virtual item package 120, and attribute change indication information of a second virtual item equipped by the virtual object 110, caused by the first virtual item, is presented.
By way of example, in a military virtual simulation application, virtual scene technology is adopted to enable trainees to visually and aurally experience a battlefield environment, become familiar with the environmental characteristics of the area to be fought in, and interact with objects in the virtual environment through the necessary equipment. The virtual battlefield environment can be realized by a corresponding three-dimensional battlefield environment graphic image library, including a battle background, battlefield scenes, various weaponry, fighters, and the like; a dangerous, immersive, and near-real three-dimensional battlefield environment can be created through background generation and image synthesis. In actual implementation, when a terminal controls a virtual object 110 (such as a simulated fighter) to fight another object (such as a simulated enemy), a picture of the virtual scene 100 observed from the perspective of the virtual object 110 is presented on a client 410, and a virtual item package 120 containing at least two first virtual items is presented in the picture. The user controls the virtual object 110 to move towards the virtual item package 120 through the client 410; when the virtual object 110 moves to a target area centered on a target position and the virtual item package is in a pickable state, the virtual object 110 is controlled to pick up a first virtual item in the virtual item package 120, and attribute change indication information of a second virtual item equipped by the virtual object 110, caused by the first virtual item, is presented.
In another implementation scenario, referring to fig. 1B, fig. 1B is an application mode schematic diagram of the processing method for a virtual item provided in this embodiment, which is applied to a terminal 400 and a server 200, and is generally applicable to an application mode that depends on the computing power of the server 200 to complete virtual scenario computation and output a virtual scenario at the terminal 400.
Taking the formation of visual perception of the virtual scene 100 as an example, the server 200 calculates display data related to the virtual scene and sends the calculated display data to the terminal 400; the terminal 400 relies on graphic computing hardware to complete the loading, parsing, and rendering of the calculated display data, and relies on graphic output hardware to output the virtual scene to form visual perception. For example, a two-dimensional video frame can be presented on the display screen of a smartphone, or a video frame realizing a three-dimensional display effect can be projected on the lenses of augmented reality/virtual reality glasses. For other forms of perception of the virtual scene, it is understood that they can be formed by means of corresponding hardware outputs of the terminal, for example, auditory perception using a speaker output, tactile perception using a vibrator output, and so on.
As an example, a client 410 (e.g., a network-based game application) supporting a virtual scene is installed and run on the terminal 400, and the user performs game interaction with other users by connecting to a game server (i.e., the server 200). The terminal 400 outputs the virtual scene 100, which includes a virtual object 110 and a virtual prop package 120. The virtual object 110 may be a game character controlled by a user (or player); that is, the virtual object 110 is controlled by a real user and moves in the virtual scene in response to operations of the real user on a controller (including a touch screen, a voice-controlled switch, a keyboard, a mouse, a joystick, and the like). For example, when the real user moves the joystick to the left, the virtual object 110 moves to the left in the virtual scene; the virtual object can also remain stationary, jump, and use various functions (such as skills and props). Virtual items in the virtual prop package 120 may be picked up by the virtual object 110; for example, the virtual object 110 picks up a first virtual item in the virtual prop package 120 and changes an attribute of a second virtual item equipped by the virtual object 110 based on the picked-up first virtual item. The virtual prop package 120 may be thrown by the virtual object 110, or may be thrown by another object.
For example, the client 410 presents a picture of the virtual scene 100 obtained by observing the virtual scene from the perspective of the virtual object 110, and presents the virtual item package 120 containing at least two first virtual items in the picture. When the user controls the virtual object 110 to move towards the virtual item package 120 through the client 410, the client 410 sends the real-time position information of the virtual object 110 to the server 200. The server 200 detects, according to the picking logic, whether the virtual object 110 has moved to the target area where the virtual item package 120 is located and whether the virtual item package 120 is in the pickable state. When the virtual object 110 moves to the target area and the virtual item package 120 is in the pickable state, the detection result indicating that the virtual item package 120 is pickable is sent to the client 410; the client 410 receives the detection result and displays the virtual item package 120 in the pickable state through the target display style. Based on this display style, the user controls the virtual object 110 to pick up a first virtual item in the virtual item package 120, and attribute change indication information of a second virtual item equipped by the virtual object 110, caused by the first virtual item, is presented.
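The client/server interaction above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the names (`server_check`, `client_display_style`, `PICKUP_RADIUS`) and the circular target area are assumptions for the sake of the example.

```python
import math

PICKUP_RADIUS = 3.0   # assumed radius of the target area around the package

def server_check(package_pos, package_pickable, object_pos):
    """Server-side picking logic: the object must be inside the target
    area AND the package must currently be pickable."""
    in_area = math.dist(object_pos, package_pos) <= PICKUP_RADIUS
    return in_area and package_pickable

def client_display_style(pickable):
    """The client renders the package in the target display style once
    the server's detection result says it is pickable."""
    return "highlighted" if pickable else "normal"

result = server_check((10.0, 0.0), True, (9.0, 1.0))
print(client_display_style(result))   # the package is shown highlighted
```

The split mirrors the text: the client only reports positions and renders styles, while the pickable decision stays on the server.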
Referring to fig. 2, fig. 2 is an optional structural schematic diagram of an electronic device 500 provided in the embodiment of the present application, and in practical application, the electronic device 500 may be the terminal 400 in fig. 1A, or may also be the terminal 400 or the server 200 in fig. 1B, and a computer device for implementing the processing method of the virtual item in the embodiment of the present application is described with the electronic device being the terminal 400 shown in fig. 1A as an example. The electronic device 500 shown in fig. 2 includes: at least one processor 510, memory 550, at least one network interface 520, and a user interface 530. The various components in the electronic device 500 are coupled together by a bus system 540. It is understood that the bus system 540 is used to enable communications among the components. The bus system 540 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 540 in fig. 2.
The Processor 510 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 530 includes one or more output devices 531 enabling presentation of media content, including one or more speakers and/or one or more visual display screens. The user interface 530 also includes one or more input devices 532, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 550 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 550 optionally includes one or more storage devices physically located remote from processor 510.
The memory 550 may comprise volatile memory or nonvolatile memory, and may also comprise both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 550 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, memory 550 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 551 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 552 for communicating to other computing devices via one or more (wired or wireless) network interfaces 520, exemplary network interfaces 520 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
a presentation module 553 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 531 (e.g., a display screen, speakers, etc.) associated with the user interface 530;
an input processing module 554 to detect one or more user inputs or interactions from one of the one or more input devices 532 and to translate the detected inputs or interactions.
In some embodiments, the processing device of the virtual item provided in this embodiment may be implemented by software. Fig. 2 illustrates a processing device 555 of the virtual item stored in the memory 550, which may be software in the form of programs and plug-ins, and which includes the following software modules: a first rendering module 5551, a motion control module 5552, a pickup control module 5553, and a second rendering module 5554. These modules are logical divisions and thus may be arbitrarily combined or further divided according to the functions implemented.
The functions of the respective modules will be explained below.
In other embodiments, the processing Device of the virtual prop provided in this embodiment may be implemented in hardware, and as an example, the processing Device of the virtual prop provided in this embodiment may be a processor in the form of a hardware decoding processor, which is programmed to execute the processing method of the virtual prop provided in this embodiment, for example, the processor in the form of the hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic elements.
Next, a description is given of a method for processing a virtual item provided in this embodiment, where in actual implementation, the method for processing a virtual item provided in this embodiment may be implemented by a server or a terminal alone, or may be implemented by a server and a terminal in a cooperation manner.
Referring to fig. 3, fig. 3 is an optional flowchart of a method for processing a virtual item provided in the embodiment of the present application, and the steps shown in fig. 3 will be described.
Step 101: the terminal presents a virtual item packet containing at least two first virtual items in a picture of a virtual scene.
In practical application, an application program supporting a virtual scene is installed on a terminal, when a user opens the application program on the terminal and the terminal runs the application program, the user can perform touch operation on the terminal, after the terminal detects the touch operation of the user, scene data of the virtual scene is acquired in response to the touch operation, a picture of the virtual scene is rendered based on the scene data of the virtual scene, and the rendered picture of the virtual scene is presented on the terminal.
Here, the picture of the virtual scene may be obtained by observing the virtual scene from a first-person object perspective or from a third-person object perspective. In addition to presenting the operation control of the target virtual item, the picture of the virtual scene also presents an interactive object and an object interaction environment; for example, a virtual object and a target object in an adversarial relationship interact with each other in the virtual scene.
In some embodiments, prior to presenting the virtual item package in the screen of the virtual scene, the presented virtual item package may also be thrown by:
presenting an operation control of the virtual item package; and when the operation control is in an activated state, controlling the virtual object to throw the virtual prop packet in response to the triggering operation aiming at the operation control.
Here, before the terminal presents the picture of the virtual scene, or during its presentation, the terminal may present a selection interface for selecting virtual items. The selection interface includes at least one operation control of a virtual item or virtual item package, where the operation control is an icon corresponding to a virtual item or virtual item package that can be used in the virtual scene. The selection interface may occupy the entire display interface of the terminal or only part of it; for example, the selection interface may float over the picture of the virtual scene.
In practical application, the virtual object can be controlled to trigger the operation control only when the operation control is in an activated state. The operation control of the virtual prop package can be activated through a cooldown mechanism, and the cooldown time of the operation control differs between virtual prop packages; generally speaking, the more powerful the virtual prop package, the longer the cooldown time of the corresponding operation control. When the operation control of the virtual prop package is presented, a ring-shaped progress bar of the corresponding cooldown time or required energy can be presented; the operation control is activated when the cooldown countdown gradually returns to 0, or when the energy progress ring gradually fills up as time passes. The display style of the operation control in the activated state is different from that in the inactivated state.
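The cooldown-based activation just described can be sketched as below. This is a hedged illustration: the class, field names, and the 60-second value are assumptions, not the patent's implementation.

```python
class OperationControl:
    def __init__(self, cooldown_s):
        self.cooldown_s = cooldown_s   # stronger packages -> longer cooldown
        self.last_used_at = None       # None means never used yet

    def progress(self, now):
        """Fraction of the cooldown ring that is filled (0.0 .. 1.0)."""
        if self.last_used_at is None:
            return 1.0
        return min(1.0, (now - self.last_used_at) / self.cooldown_s)

    def is_activated(self, now):
        # activated once the countdown reaches 0 / the ring is full
        return self.progress(now) >= 1.0

    def trigger(self, now):
        """Throw attempt: succeeds only in the activated state."""
        if not self.is_activated(now):
            return False               # still cooling down, ignore the tap
        self.last_used_at = now
        return True

ctrl = OperationControl(cooldown_s=60.0)
print(ctrl.trigger(now=0.0))       # first use succeeds
print(ctrl.trigger(now=30.0))      # still cooling down
print(ctrl.is_activated(now=60.0)) # ring full again
```

The `progress` value is exactly what a client would map onto the ring-shaped progress bar.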
Referring to fig. 4A-4B, fig. 4A-4B are schematic diagrams of a display interface provided in an embodiment of the present application, in fig. 4A, an operation control 401 in an inactive state is displayed in a grayscale display mode, and in fig. 4B, an operation control 402 in an active state is displayed in a highlight display mode.
In some embodiments, the virtual object can throw the virtual item package only when in a legal state, and cannot throw it when in an illegal state. Correspondingly, when receiving the trigger operation for the operation control, the terminal needs to acquire the state of the virtual object. When the state of the virtual object indicates that the virtual object is in a state in which the virtual item package can be thrown, throwing of the virtual item package by the virtual object is triggered and controlled; when the state of the virtual object indicates that the virtual object is in a state in which the virtual item package cannot be thrown, indication information indicating that throwing of the virtual item package failed is presented.
In actual implementation, a corresponding legal state in which throwing is allowed, or an illegal state in which throwing is not allowed, is preset for the virtual item package. When a trigger operation for the operation control is received, the state of the virtual object can be matched against the preset legal state of the virtual item package: when the matching succeeds, it is determined that the virtual object is in a state in which the virtual item package can be thrown, and throwing of the virtual item package by the virtual object is triggered and controlled; when the matching fails, it is determined that the virtual object is in a state in which the virtual item package cannot be thrown, the virtual object is not allowed to throw the virtual item package, and indication information such as "the virtual resource package cannot be thrown in this state" is presented. Alternatively, when a trigger operation for the operation control is received, the state of the virtual object is matched against the preset illegal state of the virtual item package: when the matching succeeds, it is determined that the virtual object is in a state in which the virtual item package cannot be thrown, the virtual object is not allowed to throw the virtual item package, and indication information such as "the virtual resource package cannot be thrown in this state" is presented; when the matching fails, it is determined that the virtual object is in a state in which the virtual item package can be thrown, and throwing of the virtual item package by the virtual object is triggered and controlled.
For example, the preset illegal states in which the virtual prop package cannot be thrown include: climbing, swimming, falling, wingsuit flying, jumping, sprinting, sliding tackle, and reloading. The terminal matches the state of the virtual object, obtained when the trigger operation for the operation control is received, against these illegal states. When the matching succeeds, that is, when the virtual object is in any of the illegal states, the virtual object is indicated to be in a state in which the virtual item package cannot be thrown, the virtual object is not allowed to throw the virtual item package, and indication information indicating that throwing of the virtual item package failed, such as "the virtual resource package cannot be thrown in this state", is presented.
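The illegal-state matching above can be sketched in a few lines. The state names come from the example in the text; the function name and return shape are illustrative assumptions.

```python
# Preset illegal states in which the package cannot be thrown
# (names taken from the example above).
ILLEGAL_THROW_STATES = {
    "climbing", "swimming", "falling", "wingsuit_flying",
    "jumping", "sprinting", "sliding_tackle", "reloading",
}

def try_throw(object_state):
    """Match the object's state against the illegal states and
    return (allowed, indication_message)."""
    if object_state in ILLEGAL_THROW_STATES:
        return False, "The virtual item package cannot be thrown in this state"
    return True, "throw triggered"

print(try_throw("running"))    # legal state: throw goes ahead
print(try_throw("swimming"))   # illegal state: failure indication shown
```

The symmetric variant described in the text (matching against a preset *legal* state set instead) only inverts the membership test.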
In some embodiments, the virtual object may be controlled to throw the virtual item package by: acquiring a first position of a virtual object in a virtual scene and a second position which is a target distance away from the first position along the orientation of the virtual object; and when no barrier exists between the first position and the second position, controlling the virtual object to throw the virtual prop packet to the second position.
Here, the first position where the virtual object is located and a second position at the target distance in front of the virtual object are obtained, and it is judged whether an obstacle (for example, an object such as a wall or an oil drum that blocks the virtual object's movement) or a non-navigable layer exists between the first position and the second position. When it is determined that no obstacle exists between the first position and the second position, the virtual object is controlled to throw the virtual prop package to the second position; when an obstacle exists between the first position and the second position, the virtual object is controlled to throw the virtual prop package to the first position, that is, to the position at the virtual object's feet.
In some embodiments, the presence or absence of an obstacle between the first position and the second position may be detected as follows: a camera component bound to the second virtual prop equipped by the virtual object emits a detection ray, consistent with the orientation of the second virtual prop, from the first position towards the second position, and whether an obstacle exists between the first position and the second position is determined based on the detection ray. When the detection ray intersects a collider component (such as a collision box or a collision ball) bound to an obstacle, it is determined that an obstacle exists between the first position and the second position; when the detection ray does not intersect any collider component bound to an obstacle, it is determined that no obstacle exists between the first position and the second position.
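The ray-versus-collider check and the resulting throw-target choice can be sketched as below. This is an illustrative assumption, not a real engine raycast API: each collider is modelled as a sphere, and directions are unit vectors in 2-D.

```python
import math

def ray_hits_sphere(origin, direction, center, radius, max_t):
    """True if the detection ray (unit direction, length max_t) from
    origin intersects the spherical collider."""
    oc = [c - o for o, c in zip(origin, center)]
    t = sum(d * v for d, v in zip(direction, oc))   # closest-approach parameter
    if t < 0 or t > max_t:
        return False                                # collider behind or beyond
    closest = [o + d * t for o, d in zip(origin, direction)]
    return math.dist(closest, center) <= radius

def choose_throw_target(first_pos, facing, target_distance, colliders):
    """Throw to the second position if the path is clear, otherwise
    drop the package at the virtual object's feet (the first position)."""
    second_pos = tuple(p + f * target_distance for p, f in zip(first_pos, facing))
    for center, radius in colliders:
        if ray_hits_sphere(first_pos, facing, center, radius, target_distance):
            return first_pos            # obstacle in the way: drop in place
    return second_pos

wall = ((5.0, 0.0), 1.0)    # a spherical collider standing in for a wall
print(choose_throw_target((0.0, 0.0), (1.0, 0.0), 8.0, [wall]))  # blocked
print(choose_throw_target((0.0, 0.0), (0.0, 1.0), 8.0, [wall]))  # clear
```

In a real engine the two functions would collapse into a single raycast call against the scene's physics world; the decision logic stays the same.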
In some embodiments, after the virtual object is controlled to throw the virtual item package, the throwing position of the virtual item package at the moment the virtual object releases it can further be obtained; when the throwing position of the virtual item package does not satisfy the position throwing condition, indication information indicating that throwing of the virtual item package failed is presented. For example, if the throwing position is an illegal position such as on a high wall or a tall building, the throwing position does not satisfy the position throwing condition, and indication information indicating the throwing failure, such as "cannot be thrown here", is presented.
Step 102: and controlling the virtual object to move towards the virtual item package in response to the motion instruction for the virtual object in the picture.
In some embodiments, the user may control the virtual object to move, turn, jump, and the like in the virtual scene through motion instructions for the virtual object. A motion instruction for the virtual object is received through the picture of the virtual scene on the terminal so as to control the virtual object to move in the virtual scene; during the motion, the content presented in the picture of the virtual scene changes along with the motion of the virtual object, that is, the process of the virtual object moving towards the virtual item package is presented.
In some embodiments, when the motion process of the virtual object in the virtual scene is displayed in the picture of the virtual scene, the field-of-view area of the viewing object is determined according to the viewing position and the field angle of the viewing object in the complete virtual scene, and the part of the virtual scene within the field-of-view area is presented; that is, the displayed virtual scene may be a partial virtual scene relative to the panoramic virtual scene.
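The field-of-view culling just described can be sketched in 2-D as follows. This is a simplified assumption (top-down angles instead of a full 3-D frustum); the function and object names are illustrative only.

```python
import math

def in_field_of_view(viewer_pos, facing_deg, fov_deg, point):
    """True if point lies within the viewing object's angular field
    of view, determined by its position and field angle."""
    dx = point[0] - viewer_pos[0]
    dy = point[1] - viewer_pos[1]
    angle = math.degrees(math.atan2(dy, dx))
    delta = (angle - facing_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return abs(delta) <= fov_deg / 2.0

# only the part of the scene inside the field-of-view area is presented
scene_objects = {"item_package": (5.0, 1.0), "object_behind": (-4.0, 0.0)}
visible = [name for name, pos in scene_objects.items()
           if in_field_of_view((0.0, 0.0), 0.0, 90.0, pos)]
print(visible)   # the object behind the viewer is culled
```

A real renderer would also clip by distance and occlusion, but the position-plus-field-angle test is the core of the partial-scene display.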
For example, taking a virtual reality device worn by a user as an example, referring to fig. 5A (an interface schematic diagram of virtual reality provided in an embodiment of the present application), a viewing user (i.e., a real user) can perceive, through the lenses, the part of the virtual scene 502 that lies in the field-of-view area of the virtual scene 501 in the virtual reality device. A sensor for detecting posture (e.g., a nine-axis sensor) is disposed in the virtual reality device and is configured to detect the posture change of the virtual reality device in real time. When the user wears the virtual reality device and the user's head posture changes, the real-time head posture is transmitted to the processor, the gaze point of the user's line of sight in the virtual scene is calculated, and the image within the user's gaze range (i.e., the field-of-view area) in the three-dimensional model of the virtual scene is computed from the gaze point and displayed on the display screen, providing an experience as if the user were watching in a real environment. For other types of virtual reality devices, such as PC virtual reality (PCVR) devices and mobile virtual reality devices, the principles for implementing visual perception are similar to those described above, except that such devices do not integrate their own processors for the relevant calculations and do not have the functionality of independent virtual reality input and output.
Taking the case where the user manipulates the virtual object 503 in the virtual scene, that is, where the viewing object is the virtual object 503, referring to fig. 5B (an interface schematic diagram of virtual reality provided in an embodiment of the present application), the user controls the viewing position and viewing angle of the virtual object 503 in the complete virtual scene 501, thereby controlling the virtual object 503 to perform movement operations such as running and squatting, and the movement process of the virtual object 503 in the virtual scene is presented.
In some embodiments, a map thumbnail of the virtual scene may also be presented in the screen of the virtual scene; and in the map thumbnail, presenting the position information of the virtual object and the virtual item package, wherein the position information is used for controlling the virtual object to move towards the virtual item package based on the position information.
Referring to fig. 6, fig. 6 is a schematic diagram of a display interface provided in an embodiment of the present application. In fig. 6, a virtual item package 601 is presented in the picture of the virtual scene. The virtual item package 601 is intended either for pickup by some of the virtual objects in the virtual scene (including the virtual object 603 and virtual objects belonging to the same team as the virtual object 603), or for pickup by all virtual objects in the virtual scene (including the virtual object 603, virtual objects belonging to the same team as the virtual object 603, and virtual objects not belonging to that team). The difference is that, for a first virtual item of the same type in the virtual item package, if it is picked up by a virtual object belonging to the same team as the virtual object 603, the attributes of the second virtual items equipped by the virtual object 603 and its teammates may be enhanced, whereas if it is picked up by a virtual object not belonging to the same team as the virtual object 603, the attributes of the second virtual item equipped by that virtual object can be reduced. The terminal also presents a map thumbnail 602 (a guide map formed by shrinking the virtual scene of the whole game); the relative position information of the virtual object and the virtual item package is presented in the map thumbnail 602, and the position of the target virtual item package can be checked through the map thumbnail 602, so that the virtual object can be controlled to move quickly to the area where the target virtual item package is located.
Step 103: when the virtual object moves to the target area and the virtual item package is in a pickable state, the virtual object is controlled to pick up a first virtual item in the virtual item package.
Typically, a virtual item package is valid only for a period of time after being thrown, e.g., for 120 seconds after the virtual item package is thrown; during this time the virtual item package is in a pickable state, and otherwise it is in a non-pickable state.
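The time-limited pickable window can be expressed as a one-line check; the 120-second duration is the example value from the text, and the function name is an illustrative assumption.

```python
PACKAGE_LIFETIME_S = 120.0   # the 120-second example from the text

def is_pickable(thrown_at, now):
    """A thrown package is pickable only within its validity window."""
    return 0.0 <= (now - thrown_at) <= PACKAGE_LIFETIME_S

print(is_pickable(thrown_at=0.0, now=60.0))    # within the window
print(is_pickable(thrown_at=0.0, now=121.0))   # expired: non-pickable
```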
In some embodiments, after controlling the virtual object to throw the virtual item packet, or after controlling the virtual object to move towards the virtual item packet, in the process of controlling the virtual object to move towards the virtual item packet, the terminal may further display the virtual item packet by:
when the virtual item package is in a pickup state, displaying the virtual item package by adopting a first display style; and when the virtual item package is in the non-picking state, displaying the virtual item package by adopting a second display style different from the first display style.
The first display style is used to indicate that the virtual item package is in a pickable state; the virtual item package in the pickable state is displayed through display styles such as highlighting, flashing, showing the virtual item package in an open state, or a see-through blue-white outline glow. The second display style is used to indicate that the virtual item package is in a non-pickable state; the virtual item package in the non-pickable state is displayed through display styles such as showing the virtual item package in a closed state, or without the see-through blue-white outline glow.
Referring to fig. 8, fig. 8 is a schematic view of a display interface provided in the embodiment of the present application, in fig. 8, when a virtual item package is in a pickable state, a first virtual item 801 in the virtual item package is displayed in a perspective display manner, or when a virtual object is controlled to pick up the first virtual item 801 in the virtual item package, the first virtual item 801 is displayed in a perspective manner.
In some embodiments, the target area is the range within which the virtual item package can be picked up; that is, the virtual object can pick up the first virtual item in the virtual item package only after moving to the target area. When the virtual object is controlled to throw the virtual item package to the target position, that is, when the target virtual item package is located at the target position, the target area may be at least one of the following: a circular area centered on the target position with a preset distance as the radius; a rectangular, square, or other-shaped area centered on the target position; or a sector area selected at a preset angle from the circular area with the target position as the vertex and the preset distance as the radius (such as a sector within 3 meters and 120 degrees in front of the virtual prop package). In practical implementation, a collider component (such as a collision box or a collision ball) can be bound at the periphery of the target area; whether the virtual object has moved to the target area can be determined by detecting whether the virtual object collides with the collider component, and when the virtual object passes through the collider component, it can be determined that the virtual object has moved to the target area.
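The sector-shaped target area from the example above ("within 3 meters and 120 degrees in front of the virtual prop package") can be checked geometrically. This sketch assumes a 2-D top-down layout; the function name is illustrative.

```python
import math

def in_sector(package_pos, facing_deg, radius, sector_deg, point):
    """True if point lies in the sector with the package position as
    vertex, the preset distance as radius, and the preset angle
    centered on the package's facing direction."""
    dx = point[0] - package_pos[0]
    dy = point[1] - package_pos[1]
    if math.hypot(dx, dy) > radius:
        return False                      # outside the preset distance
    angle = math.degrees(math.atan2(dy, dx))
    delta = (angle - facing_deg + 180.0) % 360.0 - 180.0
    return abs(delta) <= sector_deg / 2.0

# "within 3 meters, 120 degrees in front" from the text
print(in_sector((0.0, 0.0), 0.0, 3.0, 120.0, (2.0, 0.5)))    # inside
print(in_sector((0.0, 0.0), 0.0, 3.0, 120.0, (0.0, -2.5)))   # behind
print(in_sector((0.0, 0.0), 0.0, 3.0, 120.0, (5.0, 0.0)))    # too far
```

The circular target area is the special case `sector_deg = 360`; in an engine, binding a collider at the area's periphery replaces this explicit math with a trigger-enter event.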
In some embodiments, controlling the virtual object to pick up the first virtual item in the virtual item package may be implemented as follows: presenting a pick-up function icon of the virtual item package; and in response to the triggering operation for the picking function icon, controlling the virtual object to pick up the first virtual item in the virtual item package.
Here, when the virtual object moves to the target area and the virtual item package is in a pickable state, the user may trigger a pickup instruction for the virtual item package by triggering the pickup function icon, and the terminal controls the virtual object to pick up the first virtual item of the virtual item package in response to the pickup instruction triggered by the trigger operation.
In some embodiments, when the virtual object moves to the target area and the virtual item package is in the pickable state, the virtual object is controlled to pick up the first virtual item in the virtual item package through the following steps: the target area may also be a sensing area in which the virtual item package senses the virtual object, in which case the terminal also presents the sensing area; when the virtual object moves to the sensing area and the virtual prop package is in a pickable state, the sensing action of the virtual prop package triggers the virtual object to pick up a first virtual prop in the virtual prop package.
In actual implementation, a sensor can be bound to the virtual item package, and the area it senses used as the target area. A target display style (such as highlighting) can be adopted for the sensing area to distinguish it from non-sensing areas. When the virtual object moves into the sensing area while the virtual item package is pickable, the sensing action of the bound sensor triggers the virtual object to pick up the first virtual item in the package; that is, the pickup happens automatically once the virtual object reaches the target area while the package is pickable. Because the sensing action is highly sensitive, the first virtual item is picked up quickly as soon as the virtual object enters the target area, without any triggering operation by the user, which improves the accuracy and efficiency of picking up the first virtual item and improves the user experience.
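The automatic, sensor-driven pickup can be sketched as an enter-event handler. The class and method names and the dictionary-based object are invented for illustration; in an engine this would be a trigger callback on the bound sensor:

```python
class ItemPackageSensor:
    """Sensor bound to a thrown item package: when a virtual object enters
    the sensing area while the package is pickable, the first item is handed
    over automatically, with no extra user input."""

    def __init__(self, items, pickable=True):
        self.items = list(items)
        self.pickable = pickable

    def on_enter(self, virtual_object):
        # Sensing action: fires as soon as the object crosses into the area.
        if self.pickable and self.items:
            item = self.items.pop(0)
            virtual_object.setdefault("inventory", []).append(item)
            return item
        return None
```

The key point the text makes is that `on_enter` runs on the movement event itself, so no pickup icon or user trigger is involved.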
It should be noted that, in the process of controlling the virtual object to move to the target area, the terminal may present the whole process of moving the virtual object to the target area, may also present a partial process of moving the virtual object to the target area, and may also present only the result of moving the virtual object to the target area.
When the virtual object moves into the target area and the virtual item package is in a pickable state, if the package contains only one type of first virtual item but multiple items of that type, the virtual object can be controlled to pick up at most a preset number (for example, 2) of the first virtual items; likewise, each other virtual object in the same team as the virtual object picks up at most the preset number of first virtual items.
In some embodiments, the virtual object may be controlled to pick up the first virtual item in the package of virtual items by:
when the types of the first virtual props in the virtual prop package comprise at least two types, displaying the at least two types of the first virtual props in different display modes; and controlling the virtual object to pick up the first virtual prop with the target type matched with the type of the second virtual prop.
The first virtual items in the virtual item package may be of various types, for example throwing, defense, attack, and vehicle types, each type corresponding to its own display mode. The type of the second virtual item equipped by the virtual object is obtained and matched against the types of the several first virtual items, and the virtual object is controlled to pick up the first virtual item of the target type that matches the type of the second virtual item.
Referring to fig. 7, fig. 7 is a schematic view of a display interface provided in the embodiment of the present application, in fig. 7, a virtual item package 701 includes three types of first virtual items, such as a first virtual item 702 in a vehicle class, a first virtual item 703 in an attack class, and a first virtual item 704 in a defense class, and the three types of first virtual items are displayed in three different display manners to be distinguished, so as to control a virtual object 705 to pick up the first virtual item adapted to the type of the equipped second virtual item.
For example, if the second virtual item equipped by the virtual object is of the vehicle type, the virtual object is controlled to pick up the vehicle-type first virtual item 702, and the attribute of the second virtual item is changed based on the picked-up first virtual item 702 so that the virtual object drives faster. Similarly, if the equipped second virtual item is of the attack type, the virtual object is controlled to pick up the attack-type first virtual item 703, so that the lethality of the second virtual item used by the virtual object is stronger; and if the equipped second virtual item is of the defense type, the virtual object 705 is controlled to pick up the defense-type first virtual item 704, so that the defense capability of the second virtual item is stronger, and so on.
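The type-matching rule from the preceding paragraphs can be sketched as a simple lookup. The item representation and field names are assumptions; the patent only specifies matching the first item's type to the equipped second item's type:

```python
def pick_matching_item(package_items, equipped_type):
    """Return the first item in the package whose type matches the type of
    the equipped second virtual item (e.g. 'vehicle', 'attack', 'defense'),
    or None if the package contains no item of that type."""
    for item in package_items:
        if item["type"] == equipped_type:
            return item
    return None
```

For instance, a virtual object equipped with an attack-type weapon would be steered toward the attack-type part in the package.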
In some embodiments, the virtual object may be controlled to pick up a first virtual item of a target type that matches the type of a second virtual item by:
when the types of the second virtual props comprise at least two types, obtaining the type of the second virtual prop with the highest consumption degree in the at least two types of the second virtual props; and controlling the virtual object to pick up the first virtual prop of the target type matched with the type of the second virtual prop with the highest consumption degree.
The consumption degree represents how consumed or damaged the second virtual item is: the higher the value, the more the item has been worn down. In practical applications, the virtual object may be equipped with second virtual items of several types, such as defense, attack, and vehicle types. Since the number of first virtual items each virtual object can pick up is limited, the consumption degree of each type of second virtual item can be determined first, and the virtual object controlled to pick up the first virtual item matching the type of the second virtual item with the highest consumption degree, so as to restore the performance of the most-consumed second virtual item and raise the overall level of the second virtual items equipped by the virtual object.
For example, in fig. 7, among the second virtual items equipped by the virtual object 705, the consumption degree of the defense-type item is 95%, that of the attack-type item is 50%, and that of the vehicle-type item is 0. The type with the highest consumption degree is therefore the defense type, so the virtual object is controlled to pick up the defense-type first virtual item 704; when the virtual object is entitled to pick up further first virtual items, it is controlled to pick up the attack-type first virtual item 703 next.
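The highest-consumption selection rule can be sketched as an argmax over the equipped items. Representation and names are illustrative assumptions:

```python
def most_consumed_type(equipped):
    """Given {type: consumption degree in [0, 1]}, return the type with the
    highest consumption degree (ties broken arbitrarily)."""
    return max(equipped, key=equipped.get)

def pick_for_most_consumed(package_items, equipped):
    """Pick the package item whose type matches the most-consumed
    equipped second virtual item, or None if the package lacks one."""
    target = most_consumed_type(equipped)
    return next((i for i in package_items if i["type"] == target), None)
```

With the figure's numbers (defense 95%, attack 50%, vehicle 0), the defense-type part would be selected first.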
In some embodiments, the virtual object may also be controlled to pick up a first virtual item of a target type that matches the type of a second virtual item by:
when the types of the second virtual props comprise at least two types, obtaining the type of the second virtual prop with the highest preference degree in the at least two types of second virtual props based on the use preference of the virtual object for each second virtual prop; and controlling the virtual object to pick up the first virtual prop of the target type matched with the type of the second virtual prop with the highest preference degree.
For example, referring to fig. 7, the virtual item package 701 includes a vehicle-type first virtual item 702, an attack-type first virtual item 703, and a defense-type first virtual item 704. A neural network model, combined with the types of second virtual items historically used by the virtual object 705, predicts the usage preference of the virtual object 705, that is, its degree of preference for each type of second virtual item. If the type with the highest preference degree is determined to be the vehicle type, the virtual object 705 is controlled to pick up the vehicle-type first virtual item 702. Since the most-preferred type indicates the second virtual item the virtual object is most likely to use, changing the attribute of that second virtual item, for example enhancing the item the virtual object 705 is most likely to use, contributes most to improving the fighting capability of the virtual object 705.
In some embodiments, the virtual object may also be controlled to pick up the first virtual item in the virtual item package by:
when the types of the first virtual props in the virtual prop package comprise at least two types, presenting the lethality index of each first virtual prop in the virtual prop package aiming at the target virtual object; and controlling the virtual object to pick up the first virtual prop with the highest lethality index.
For example, referring to fig. 7, the virtual item package 701 includes a vehicle-type first virtual item 702, an attack-type first virtual item 703, and a defense-type first virtual item 704. The corresponding lethality indexes (such as attack values and defense values) may be presented for each item, or only the first virtual item with the highest lethality index may be displayed in the target display style. For example, if the vehicle-type first virtual item 702 has the highest lethality, only its pickable control is presented in the screen of the virtual scene, and in response to a trigger operation on that control, the virtual object 705 is controlled to pick up the first virtual item 702. By using the lethality index of each first virtual item, the virtual object is controlled to select and pick up the most lethal of the several first virtual items, so that the virtual object picks up the most valuable first virtual item and its fighting capability is improved.
In some embodiments, the virtual object may also be controlled to pick up the first virtual item in the virtual item package by:
when the types of the first virtual props in the virtual prop package comprise at least two types, presenting the times of picking up each first virtual prop; and controlling the virtual object to pick up the first virtual prop with the maximum picking-up times.
For example, referring to fig. 7, the virtual item package 701 includes a vehicle-type first virtual item 702, an attack-type first virtual item 703, and a defense-type first virtual item 704. The number of times each first virtual item has been picked up by other virtual objects (virtual objects other than the virtual object 705) may be presented for each item, or only the first virtual item with the largest pickup count may be displayed in the target display style. For example, if the vehicle-type first virtual item 702 has the largest pickup count, only its pickable control is presented in the screen of the virtual scene, and in response to a trigger operation on that control, the virtual object 705 is controlled to pick up the first virtual item 702. By using the pickup count of each first virtual item, the virtual object is controlled to select the most-picked of the several first virtual items, so that the virtual object picks up a valuable first virtual item and its fighting capability is improved.
In some embodiments, the virtual object may also be controlled to pick up the first virtual item in the virtual item package by:
when the types of the first virtual props in the virtual prop package comprise at least two types, displaying the at least two types of the first virtual props in different display modes; acquiring the assigned roles of the virtual objects in the team; and controlling the virtual object to pick up the first virtual prop of the target type matched with the role.
For example, referring to fig. 7, the virtual item package 701 includes first virtual items 702, 703, and 704 of various types (for example, shooting, throwing, defense, and attack types): item 702 is a vehicle-type item, item 703 a shooting-type item, and item 704 a defense-type item. Each type can be displayed in its own display mode, or only the pickable control of the first virtual item whose type matches the role of the virtual object 705 can be displayed. If the role assigned to the virtual object 705 in the team is shooter, the type of the first virtual item 703 matches that role, so only the pickable control of item 703 is presented, and in response to a trigger operation on that control, the virtual object 705 is controlled to pick up the first virtual item 703. By using the role assigned to the virtual object in the team to filter the first virtual items, the virtual object picks up the first virtual item that matches its role, and its fighting capability is improved.
In some embodiments, the virtual object picking up the virtual item package and the virtual object that threw it are not on the same team; for example, the first virtual object and the second virtual object belong to different teams and are in a fighting relationship. When the second virtual object picks up a virtual item package thrown by the first virtual object, and the package contains several types of first virtual items while the second virtual object is also equipped with several types of second virtual items, the pickable controls of the first virtual items that the second virtual object does not need can be displayed in the target display mode. For example, the consumption degree of each type of second virtual item equipped by the second virtual object is determined first, and the pickable control of the first virtual item matching the type with the lowest consumption degree is highlighted in the target display mode. In this way, the first virtual items presented are the ones the second virtual object needs least, so the enemy cannot easily pick up the first virtual item it needs most, which reduces the enemy's fighting capability and deals it a powerful blow.
Step 104: based on the picked-up first virtual item, present attribute change indication information of the second virtual item, the attribute change indication information indicating that an attribute of the second virtual item equipped by the virtual object has changed.
In some embodiments, attribute change indicating information for the second virtual prop may be presented by: and presenting the attribute change special effect of the second virtual item, wherein the attribute change special effect is used for indicating that the attribute of the second virtual item equipped by the virtual object changes.
For example, if the second virtual item is a defense-type virtual item such as body armor, then after the virtual object is controlled to pick up the first virtual item, the protection state of the body armor equipped by the virtual object is enhanced, for example by gaining a durability-repair capability of 5% or 10% per second that lasts for a period of time such as 5 seconds; at the same time, the attribute change special effect of the body armor is presented, such as the armor flashing while outlined in gold.
In some embodiments, before the control virtual object picks up the first virtual item in the virtual item package, the terminal presents an attribute value of the second virtual item, the attribute value being the first attribute value; correspondingly, after the control virtual object picks up the first virtual item in the virtual item package, the terminal presents that the attribute value of the second virtual item changes from the first attribute value to the second attribute value.
For example, in fig. 8, before the virtual object is controlled to pick up the first virtual item in the virtual item package, a blood bar icon 802 representing the first attribute value of the second virtual item is presented. After the virtual object picks up the first virtual item, referring to fig. 9 (a display interface schematic diagram provided in the embodiment of the present application), the blood bar icon 802 representing the first attribute value changes to the blood bar icon 901 of the second attribute value, rising by one segment, which represents that the attribute value of the second virtual item has changed based on the picked-up first virtual item.
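The first-value-to-second-value change behind the blood bar update can be sketched as below. The field names, the flat bonus, and the cap at a maximum value are assumptions for illustration; the patent only states that the attribute value changes from a first value to a second value:

```python
def apply_pickup(second_item, bonus):
    """Apply a picked-up first item's bonus to an equipped second item,
    capped at the item's maximum, and return the attribute-change
    indication to present (old value, new value)."""
    old = second_item["value"]
    second_item["value"] = min(old + bonus, second_item["max_value"])
    return {"old": old, "new": second_item["value"]}
```

The returned pair is what the terminal would render, e.g. as a blood bar rising by one segment.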
In some embodiments, attribute change indicating information for the second virtual prop may also be presented by: and when the second virtual prop is a virtual protection prop carried or worn by the virtual object, presenting indication information that the protection attribute of the virtual protection prop changes.
For example, the second virtual item is body armor, and after the virtual object is controlled to pick up the first virtual item in the virtual item package, indication information such as "number of protection layers + 1" is presented in the picture of the virtual scene to represent that the number of protection layers of the body armor has changed based on the picked up first virtual item.
In some embodiments, after a virtual item package containing at least two first virtual items is presented in the screen of the virtual scene, presentation of the virtual item package may also be cancelled as follows:
presenting a countdown for the virtual item package and, when the countdown reaches zero, cancelling presentation of the package in the screen of the virtual scene; or cancelling presentation of the package when the virtual object that threw it throws another virtual item package; or cancelling presentation of the package when the virtual object that threw it is attacked and dies in the virtual scene; or cancelling presentation of the package when all the first virtual items in it have been picked up.
Here, in a general case, the virtual item package is valid only for a period of time after being thrown, for example, the virtual item package is valid for 120 seconds after being thrown, and when the countdown time corresponding to the virtual item package is zero, the virtual item package automatically disappears in the virtual scene picture; when the virtual object throwing the virtual item package throws the virtual item package again, the virtual item package thrown before can be destroyed; when the virtual object throwing the virtual item package dies, the virtual item package disappears along with the death of the virtual object; and when the virtual item package contains a preset number of first virtual items and all the first virtual items are completely picked up, canceling the presentation of the virtual item package.
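The four removal conditions just listed can be sketched as a single check. The field names and the default lifetime accessor are assumptions; the 120-second lifetime comes from the text:

```python
def should_despawn(package, now):
    """Return a reason string if the package should be removed from the
    virtual scene, else None. `package` is a dict with 'thrown_at' (seconds),
    'items' (remaining first items), and optional flags."""
    if now - package["thrown_at"] >= package.get("lifetime", 120.0):
        return "countdown expired"
    if package.get("rethrown"):
        return "owner threw a new package"
    if package.get("owner_dead"):
        return "owner died"
    if not package["items"]:
        return "all items picked up"
    return None
```

A game loop would call this each tick and cancel presentation of the package as soon as a non-None reason is returned.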
Next, an exemplary application of the processing method of the virtual item provided in the embodiment of the present application in an application scenario of a game will be described.
In the related art for mobile shooting games, picking up a virtual item such as a medical kit can increase the life value of a virtual object (add blood volume to it), but this only changes an attribute of the virtual object itself and cannot change the attributes of other virtual items the virtual object is equipped with, so the pickup function is limited and not very playable. Moreover, changing the attributes of a virtual item equipped by oneself or by the other party often requires many or prolonged interactions, making human-computer interaction inefficient.
Therefore, the present application provides a method for processing virtual items: a virtual object is controlled to throw a virtual item package (a modified chip or part package), other virtual objects in the same team pick up first virtual items (parts in the modified chip or part package), and attribute change indication information of second virtual items equipped by those virtual objects (such as body armor or vehicles) is presented based on the picked-up first virtual items. Picking up a first virtual item can thus change the attributes of other second virtual items equipped by a virtual object, which diversifies the pickup function and improves playability. Compared with approaches that require many or prolonged interactions to change the attributes of equipped virtual items, this method changes those attributes simply by controlling the virtual object to pick up an item, reducing the number of interactions needed to achieve the interaction goal and improving human-computer interaction efficiency.
Referring to fig. 10, fig. 10 is an optional flowchart of a method for processing a virtual item provided in the embodiment of the present application, and the step shown in fig. 10 will be described in detail.
Step 201: the terminal displays the operation control of the virtual item package grayed out in the screen of the virtual scene.
Here, after the user opens the application program of the virtual scene on the terminal, a selection interface including at least one virtual item is presented in the screen of the virtual scene, the user can select a virtual item package, such as a modified chip, from a plurality of virtual items, and when the user selects the modified chip, the terminal presents the operation control of the modified chip in response to the selection operation.
In general, the operation control of a modified chip that has just been selected into the virtual scene is unavailable by default, that is, it is in an inactive state. When the match starts, the terminal sends a data acquisition request to the server, and the server obtains and returns the control's cooldown time based on the request. The terminal displays the cooldown countdown, or the required energy-progress ring, corresponding to switching the operation control from the inactive to the active state; as time passes, the countdown gradually returns to zero, or the energy-progress ring gradually fills.
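The inactive-to-active transition can be sketched as a pure function of elapsed time. The function name and the 60-second default are illustrative assumptions; the text says the cooldown value is fetched from the server:

```python
def control_state(elapsed, cooldown=60.0):
    """Map elapsed seconds since match start to the operation control's
    state: (active, remaining_seconds, ring_fill_fraction)."""
    remaining = max(cooldown - elapsed, 0.0)
    return remaining == 0.0, remaining, min(elapsed / cooldown, 1.0)
```

The UI would gray out the control while `active` is False, render `remaining` as the countdown or `ring_fill_fraction` as the energy ring, and highlight the control once `active` becomes True (Step 202).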
Step 202: and highlighting the operation control of the virtual item package in the activated state.
Here, when the cooldown countdown of the modified chip's operation control reaches zero, or its energy-progress ring becomes full, the operation control is activated and highlighted in the screen of the virtual scene.
Step 203: and controlling the virtual object to throw the virtual prop packet in response to the triggering operation aiming at the operation control.
Here, when the operation control of the virtual item package is triggered, the model of the modified chip (such as a part package) is taken out, and the virtual object is controlled to draw the modified chip model and throw it onto the ground.
In practical application, the virtual object can throw the modified chip only while in a legal state, not in an illegal state. Accordingly, upon receiving the trigger operation for the operation control, the terminal needs to obtain the state of the virtual object: when the state indicates that the virtual object can throw the modified chip, throwing is triggered; when the state indicates that the virtual object cannot throw the modified chip, indication information indicating that the throw failed is presented.
For example, the illegal states in which the modified chip is preset to be unthrowable include: climbing, swimming, falling, wingsuit flying, jumping, sprinting, slide tackling, and reloading. The terminal matches the state of the virtual object, obtained when the trigger operation for the operation control is received, against these illegal states; when the match succeeds, that is, the virtual object is in any illegal state, the virtual object cannot throw the modified chip, throwing is not allowed, and indication information such as "the modified chip cannot be thrown in this state" is presented to indicate the failure.
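The legality check can be sketched as a set-membership test over the listed states. The state names mirror the list above; the function and message wording are illustrative:

```python
ILLEGAL_THROW_STATES = {
    "climbing", "swimming", "falling", "wingsuit",
    "jumping", "sprinting", "sliding", "reloading",
}

def try_throw(state):
    """Return (ok, message) for a throw attempt in the given posture
    state; throwing is refused in any preset illegal state."""
    if state in ILLEGAL_THROW_STATES:
        return False, "the modified chip cannot be thrown in this state"
    return True, "throw started"
```

The same set can be reused for the mid-throw interruption described next: if the posture changes into any of these states while throwing, the throw is cancelled.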
During the process of controlling the virtual object to throw the modified chip, the chip's key is not cancelled, but the throw can be actively interrupted by adjusting the posture of the virtual object; for example, adjusting the posture into any of the illegal states interrupts the throw, and indication information such as "cannot throw in this state" or "chip skill interrupted" is presented. After the virtual object is controlled to throw the modified chip, the throw position can be obtained when the chip is released, and when the throw position does not satisfy the position-throwing condition, indication information indicating that the throw failed is presented; for example, if the throw position is an illegal position such as a high wall or tall building, indication information such as "cannot throw to this position" is presented.
After the terminal controls the virtual object to throw the modified chip to the target position, the virtual object throwing the modified chip or other virtual objects in the same team with the virtual object can pick up parts in the modified chip (or part package) in a target area with the target position as a reference point, wherein the target area can be at least one of the following areas: a circular area with the target position as the center and the preset distance as the radius, a rectangular, square or other area with the target position as the center, and a fan-shaped area (such as a fan-shaped area within 3 meters and 120 degrees in front of the virtual prop package) formed by selecting a preset angle from the circular area with the target position as the vertex and the preset distance as the radius.
Step 204: and controlling the virtual object to move towards the virtual item package in response to the motion instruction for the virtual object in the picture.
Here, a map thumbnail of the virtual scene may also be presented in the screen of the virtual scene, with the location of the modified chip shown in the thumbnail, so that the virtual object can be controlled to move toward the modified chip based on that location information.
Step 205: when the virtual object moves to the target area and the virtual item package is in a pickable state, the virtual object is controlled to pick up a first virtual item in the virtual item package.
Here, when the modified chip is in a pickable state, it is displayed in a first display style; when it is in a non-pickable state, it is displayed in a second display style different from the first. The first display style, such as highlighting, flashing, showing the part package of the modified chip open, or glowing with a see-through blue outline, indicates that the modified chip is pickable; the second display style, such as showing the part package closed or glowing without the see-through blue outline, indicates that it is not pickable.
While the virtual object is picking up parts from the modified chip, the parts to be picked up can be displayed in a see-through style. If the virtual object is holding a gun with one hand at this time, it can only hip-fire, not aim down sights, during the one-handed period, and cannot fire through a scope until the one-handed state ends; if the virtual object actively picks up a part while aiming down sights, aiming is cancelled and the virtual object is controlled to pick up the part with one hand.
When the control for the modified chip is triggered, the prefab (Prefab) file of the modified chip is loaded; when the virtual object is controlled to throw the chip, a pickable part package is created at the throw position. As the virtual object moves toward the part package, whether it is inside the pickable target area is checked, and if so, the virtual object is controlled to pick up parts from the package. During pickup the main weapon does not need to be holstered: only the pickup process is presented, for example the virtual object picking up the part with one hand while the part model flies from the package into its hand. Because the main weapon stays equipped during pickup, the pickup can be interrupted to fire, preventing the virtual object from being shot by enemies while picking up, and the part package is automatically destroyed once all its parts have been picked up.
Step 206: and presenting attribute change indication information of a second virtual item equipped by the virtual object based on the picked first virtual item.
Here, the attribute change indication information indicates that an attribute of a second virtual item equipped by the virtual object has changed. For example, if the second virtual item is a defense-type item such as body armor, its protection state is enhanced, for example by gaining a durability-repair capability of 5% or 10% per second lasting for a period such as 5 seconds; at the same time the attribute change special effect of the armor is presented, such as the armor flashing while outlined, and indication information such as "number of protection layers + 1" is shown. Each part obtained reduces non-headshot bullet damage by 3% (or 5%), up to at most 3 layers, and each time the armor takes bullet damage one layer of the effect is removed.
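The layered damage-reduction rule above can be sketched directly. The 3%-per-layer figure and 3-layer cap come from the text (the text also mentions a 5% variant); the function shape is an illustrative assumption:

```python
def non_head_damage(base_damage, layers, per_layer=0.03, max_layers=3):
    """Damage after layered reduction: each protection layer cuts non-headshot
    damage by `per_layer`, capped at `max_layers` layers; each damaging
    bullet then removes one layer. Returns (damage_taken, layers_left)."""
    effective = min(layers, max_layers)
    reduced = base_damage * (1.0 - per_layer * effective)
    return reduced, max(effective - 1, 0)
```

So with 3 layers, a 100-damage body shot deals 91 damage and leaves 2 layers for the next hit.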
Referring to fig. 11, fig. 11 is an optional flowchart of a method for processing a virtual item provided in the embodiment of the present application, and the steps shown in fig. 11 will be described.
Step 401: the client sends a release request aiming at the modified chip to the server.
The terminal runs a game client. When the user opens the client and runs the game, and clicks the modified chip while it is in an activated state, a release request for the modified chip is sent to the server, where the release request carries the identifier of the modified chip and the identifier of the user.
Step 402: the server releases the retrofit chip based on the release request.
Here, after receiving the release request, the server parses it, judges based on the user identifier and the modified-chip identifier whether the user has permission to use the modified chip, and releases the modified chip when it determines that the user has this permission.
Step 403: the client sends a switching-out request for the part package of the modified chip to the server.
The switching-out request carries the modified-chip identifier and the user identifier. The modified chip corresponds to a part package, and the server switches out the corresponding part package and releases it to the terminal based on the user identifier and the modified-chip identifier in the switching-out request.
Step 404: the server cuts out the part package based on the cut-out request.
Step 405: the client controls the virtual object to throw the part package to the target position.
Here, the virtual object is controlled to take out the modified-chip model, that is, the part package, and throw it to a position on the ground.
Step 406: the server sends a creation protocol for creating the part package at the target location to the client.
Step 407: the client presents the part package at the target location based on the creation protocol.
Step 408: when the virtual object moves to the parts package, the client sends a pick request to the server.
Step 409: the server sends a pickable protocol to the client.
Here, the pick-up request carries position information of the virtual object, the server judges whether the virtual object can pick up the part package based on the position information of the virtual object in the pick-up request, and when the virtual object moves to a part package pickable range, the pickable protocol is sent to the client.
Step 410: and after receiving the pickable protocol, the client controls the virtual object to pick up the parts in the part package and sends the picked parts to the server.
Step 411: and the server modifies the corresponding equipment attribute equipped by the virtual object according to the picked part and sends the modified equipment attribute to the client.
Here, the equipment is the second virtual prop described above.
Step 412: the client presents equipment attribute change indication information.
In the above manner, the virtual object is controlled to throw the part package, the virtual object and its teammates pick up the parts in the part package, and the attribute values of the various equipment systems equipped by the virtual object are improved; because the main weapon stays in hand during the pickup process, the virtual object can remain alert to enemies or fire at them.
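The client-server exchange of steps 401-412 can be sketched as a minimal in-memory server. The message shapes and the `Server` class are illustrative assumptions, not the patent's protocol definition:

```python
class Server:
    """Minimal sketch of the server side of Fig. 11."""

    def __init__(self, permissions):
        self.permissions = permissions  # user_id -> set of chip identifiers
        self.packages = {}              # throw position -> list of parts

    def release_chip(self, user_id, chip_id):
        # Steps 401-402: verify the user may use the modified chip.
        return chip_id in self.permissions.get(user_id, set())

    def create_package(self, position, parts):
        # Step 406: creation protocol for the package at the throw position.
        self.packages[position] = list(parts)
        return {"type": "create", "position": position}

    def request_pickup(self, object_pos, position, radius=3.0):
        # Steps 408-409: grant pickup only inside the pickable range.
        if position not in self.packages:
            return None
        dist = ((object_pos[0] - position[0]) ** 2 +
                (object_pos[1] - position[1]) ** 2) ** 0.5
        if dist <= radius:
            return {"type": "pickable", "parts": self.packages.pop(position)}
        return None
```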
In some embodiments, as shown in fig. 12, which is a schematic structural diagram of a processing device of a virtual item provided in an embodiment of the present application, the software modules stored in the processing device 555 of the virtual item in the memory 550 may include:
a first presentation module 5551, configured to present, in a screen of a virtual scene, a virtual item package including at least two first virtual items;
a motion control module 5552, configured to control a motion of a virtual object in the screen towards the virtual item package in response to a motion instruction for the virtual object;
a pickup control module 5553, configured to control the virtual object to pick up a first virtual item in the virtual item package when the virtual object moves to a target area and the virtual item package is in a pickable state;
a second presenting module 5554, configured to present, based on the picked up first virtual item, attribute change indication information of the second virtual item, where the attribute change indication information is used to indicate that an attribute of the second virtual item equipped by the virtual object changes.
In some embodiments, before presenting a virtual item package including at least two first virtual items in a screen of a virtual scene, the apparatus further comprises:
the throwing control module is also used for presenting the operation control of the virtual prop packet;
when the operation control is in an activated state, the virtual object is controlled to throw the virtual prop packet in response to a trigger operation for the operation control.
In some embodiments, the throwing control module is further configured to obtain a first position of the virtual object in the virtual scene and a second position that is a target distance from the first position along the orientation of the virtual object;
when no barrier exists between the first position and the second position, controlling the virtual object to throw the virtual prop packet to the second position.
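The throw-target computation above (a second position at a target distance along the object's orientation, gated on a barrier check) can be sketched as follows. The `blocked(a, b)` callback stands in for the engine's line-of-sight test (e.g. a raycast) and is an assumption of this sketch:

```python
import math

def compute_throw_target(first_pos, orientation, target_distance, blocked):
    """Compute the second position at `target_distance` from the first
    position along the virtual object's orientation (an angle in
    radians), and allow the throw only when no barrier lies between
    the two positions."""
    second_pos = (first_pos[0] + target_distance * math.cos(orientation),
                  first_pos[1] + target_distance * math.sin(orientation))
    if blocked(first_pos, second_pos):
        return None  # caller presents the throw-failure indication
    return second_pos
```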
In some embodiments, before said controlling said virtual object to throw said virtual item package, said apparatus further comprises:
the trigger control module is used for acquiring the state of the virtual object;
when the state of the virtual object represents that the virtual object is in a state capable of throwing the virtual item package, triggering and controlling the virtual object to throw the virtual item package;
when the state of the virtual object represents that the virtual object is in a state that the virtual item packet cannot be thrown, presenting indication information indicating that the virtual item packet is thrown unsuccessfully.
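The state gate described by the trigger control module can be sketched in a few lines. The set of throwable states is an illustrative assumption; the patent does not enumerate the states:

```python
# Illustrative set of states in which throwing is allowed.
THROWABLE_STATES = {"standing", "crouching", "running"}

def try_throw(state):
    """Gate the throw on the virtual object's state. Returns a pair
    (thrown, indication), where the indication text is presented when
    the state forbids throwing the virtual item package."""
    if state in THROWABLE_STATES:
        return True, None
    return False, "virtual item package thrown unsuccessfully"
```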
In some embodiments, after said controlling said virtual object to throw said virtual item package, said apparatus further comprises:
the third presentation module is used for acquiring a throwing position of the virtual item package when the virtual object releases the virtual item package;
and when the throwing position of the virtual prop packet does not meet the position throwing condition, presenting indication information indicating that the virtual prop packet fails to throw.
In some embodiments, the apparatus further comprises:
a fourth presentation module for presenting a map thumbnail of the virtual scene;
and presenting the position information of the virtual object and the virtual item package in the map thumbnail, wherein the position information is used for controlling the virtual object to move towards the virtual item package based on the position information.
In some embodiments, after the controlling the virtual object to move towards the virtual item package, the first presentation module is further configured to display the virtual item package in a first display style when the virtual item package is in a pickable state;
and when the virtual prop package is in a non-picking state, displaying the virtual prop package in a second display style different from the first display style.
In some embodiments, when the virtual object moves to a target area and the virtual item package is in a pickable state, the pickup control module is further configured to present the sensing area when the target area is a sensing area used by the virtual item package to sense a virtual object;
and when the virtual object moves to the sensing area and the virtual item package is in a pickable state, trigger, through the sensing action of the virtual item package, the virtual object to pick up a first virtual item in the virtual item package.
In some embodiments, the pick-up control module is further configured to present a pick-up function icon of the virtual item package;
and in response to the triggering operation aiming at the picking-up function icon, controlling the virtual object to pick up a first virtual item in the virtual item package.
In some embodiments, the pickup control module is further configured to, when the types of the first virtual items in the virtual item package include at least two types, display the at least two types of first virtual items in different display manners;
and controlling the virtual object to pick up the first virtual prop with the target type matched with the type of the second virtual prop.
In some embodiments, the pickup control module is further configured to, when the types of the second virtual props include at least two types, obtain a type of a second virtual prop with a highest consumption degree among the at least two types of second virtual props;
and controlling the virtual object to pick up the first virtual prop with the target type matched with the type of the second virtual prop with the highest consumption degree.
In some embodiments, the pickup control module is further configured to, when the types of the second virtual props include at least two types, obtain, based on usage preferences of the virtual object for each of the second virtual props, a type of a second virtual prop with a highest preference among the at least two types of second virtual props;
and controlling the virtual object to pick up the first virtual prop of the target type matched with the type of the second virtual prop with the highest preference degree.
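The two selection rules above (highest consumption degree, and highest usage preference when preferences are available) can be sketched as one helper. The data shapes are illustrative assumptions:

```python
def choose_target_type(second_props, preferences=None):
    """Pick which first-virtual-prop type the virtual object should
    collect when several second virtual props are equipped.
    `second_props` is a list of (type, consumption) pairs with
    consumption in [0, 1]; when a `preferences` mapping
    (type -> preference score) is given, the most-preferred type wins,
    otherwise the most-consumed one does."""
    if preferences is not None:
        return max(second_props, key=lambda p: preferences.get(p[0], 0.0))[0]
    return max(second_props, key=lambda p: p[1])[0]
```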
In some embodiments, the pickup control module is further configured to present, when the types of the first virtual props in the virtual prop packages include at least two types, a lethality index of each of the first virtual props in the virtual prop packages for a target virtual object;
and controlling the virtual object to pick up the first virtual prop with the highest lethality index.
In some embodiments, the pickup control module is further configured to present the number of times each first virtual item is picked up when the types of the first virtual item in the virtual item package include at least two types;
and controlling the virtual object to pick up the first virtual prop with the maximum picking-up times.
In some embodiments, the pickup control module is further configured to, when the types of the first virtual items in the virtual item package include at least two types, display the at least two types of first virtual items in different display manners;
acquiring the assigned role of the virtual object in the team;
and controlling the virtual object to pick up a first virtual prop of a target type matched with the role.
In some embodiments, the second presenting module is further configured to present an attribute change special effect of the second virtual item, where the attribute change special effect is used to indicate that an attribute of the second virtual item equipped with the virtual object changes.
In some embodiments, before the controlling the virtual object to pick up the first virtual item in the virtual item package, the second presentation module is further configured to present an attribute value of the second virtual item, where the attribute value is the first attribute value;
correspondingly, after the virtual object is controlled to pick up the first virtual item in the virtual item package, the second presentation module is further configured to present that the attribute value of the second virtual item is changed from the first attribute value to the second attribute value.
In some embodiments, the second presenting module is further configured to present, when the second virtual prop is a virtual protection prop carried or worn by the virtual object, indication information that a protection attribute of the virtual protection prop changes.
In some embodiments, after presenting a virtual item package including at least two first virtual items in a screen of a virtual scene, the apparatus further comprises:
a presentation canceling module, configured to present a countdown corresponding to the virtual item packet, and cancel presentation of the virtual item packet in a screen of a virtual scene when the countdown is zero; or,
when the virtual object throwing the virtual item package throws the virtual item package again, canceling the presentation of the virtual item package in the picture of the virtual scene; or,
when a virtual object which throws the virtual prop packet is attacked and died in a virtual scene, canceling to present the virtual prop packet in a picture of the virtual scene; or,
and when the first virtual item in the virtual item packet is picked up completely, canceling the presentation of the virtual item packet in the picture of the virtual scene.
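The four cancellation conditions above can be combined into a single check, sketched below with illustrative parameter names:

```python
def should_remove_package(countdown, rethrown, thrower_dead, parts_left):
    """The virtual item package stops being presented when any one of
    the four conditions holds: its countdown reached zero, the thrower
    threw a new package, the thrower died in the virtual scene, or
    every first virtual item in it was picked up."""
    return countdown <= 0 or rethrown or thrower_dead or parts_left == 0
```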
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the computer device executes the method for processing the virtual prop according to the embodiment of the present application.
An embodiment of the present application provides a computer-readable storage medium storing executable instructions that, when executed by a processor, cause the processor to execute the processing method of the virtual prop provided by the embodiments of the present application.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.
Claims (22)
1. A method for processing a virtual item, the method comprising:
presenting a virtual item packet containing at least two first virtual items in a picture of a virtual scene;
controlling the virtual object to move towards the virtual item package in response to a motion instruction for the virtual object in the picture;
when the virtual object moves to a target area and the virtual item package is in a pickable state, controlling the virtual object to pick up a first virtual item in the virtual item package, and
and presenting attribute change indication information of the second virtual item based on the picked first virtual item, wherein the attribute change indication information is used for indicating that the attribute of the second virtual item equipped by the virtual object changes.
2. The method of claim 1, wherein prior to presenting the virtual item package containing the at least two first virtual items in the screen of the virtual scene, the method further comprises:
presenting an operation control of the virtual prop package;
when the operation control is in an activated state, the virtual object is controlled to throw the virtual prop packet in response to a trigger operation for the operation control.
3. The method of claim 2, wherein prior to said controlling said virtual object to throw said virtual item package, said method further comprises:
acquiring the state of the virtual object;
when the state of the virtual object represents that the virtual object is in a state capable of throwing the virtual item package, triggering and controlling the virtual object to throw the virtual item package;
when the state of the virtual object represents that the virtual object is in a state that the virtual item packet cannot be thrown, presenting indication information indicating that the virtual item packet is thrown unsuccessfully.
4. The method of claim 2, wherein said controlling said virtual object to throw said virtual item package comprises:
acquiring a first position of the virtual object in the virtual scene and a second position which is a target distance away from the first position along the orientation of the virtual object;
when no barrier exists between the first position and the second position, controlling the virtual object to throw the virtual prop packet to the second position.
5. The method of claim 2, wherein after said controlling said virtual object to throw said virtual item package, said method further comprises:
acquiring a throwing position of the virtual item package when the virtual object releases the virtual item package;
and when the throwing position of the virtual prop packet does not meet the position throwing condition, presenting indication information indicating that the virtual prop packet fails to throw.
6. The method of claim 1, wherein the method further comprises:
presenting a map thumbnail of the virtual scene;
and presenting the position information of the virtual object and the virtual item package in the map thumbnail, wherein the position information is used for controlling the virtual object to move towards the virtual item package based on the position information.
7. The method of claim 1, wherein after the controlling the movement of the virtual object toward the virtual item package, the method further comprises:
when the virtual item package is in a pickup state, displaying the virtual item package by adopting a first display style;
and when the virtual prop package is in a non-picking state, displaying the virtual prop package in a second display style different from the first display style.
8. The method of claim 1, wherein said controlling the virtual object to pick up a first virtual item in the virtual item package when the virtual object moves to a target area and the virtual item package is in a pickable state comprises:
when the target area is a sensing area used for sensing a virtual object by the virtual item package, presenting the sensing area;
when the virtual object moves to the sensing area and the virtual prop package is in a pickup state, the virtual object is triggered to pick up a first virtual prop in the virtual prop package through the sensing action of the virtual prop package.
9. The method of claim 1, wherein said controlling the virtual object to pick up a first virtual item in the package of virtual items comprises:
presenting a pick-up function icon of the virtual item package;
and in response to the triggering operation aiming at the picking-up function icon, controlling the virtual object to pick up a first virtual item in the virtual item package.
10. The method of claim 1, wherein said controlling the virtual object to pick up a first virtual item in the package of virtual items comprises:
when the types of the first virtual props in the virtual prop package comprise at least two types, displaying the at least two types of first virtual props in different display modes;
and controlling the virtual object to pick up the first virtual prop with the target type matched with the type of the second virtual prop.
11. The method of claim 10, wherein said controlling the virtual object to pick up a first virtual item of a target type that matches the type of the second virtual item comprises:
when the types of the second virtual props comprise at least two types, obtaining the type of the second virtual prop with the highest consumption degree in the at least two types of the second virtual props;
and controlling the virtual object to pick up the first virtual prop with the target type matched with the type of the second virtual prop with the highest consumption degree.
12. The method of claim 10, wherein said controlling the virtual object to pick up a first virtual item of a target type that matches the type of the second virtual item comprises:
when the types of the second virtual props comprise at least two types, acquiring the type of the second virtual prop with the highest preference degree in the at least two types of second virtual props based on the use preference of the virtual object for each second virtual prop;
and controlling the virtual object to pick up the first virtual prop of the target type matched with the type of the second virtual prop with the highest preference degree.
13. The method of claim 1, wherein said controlling the virtual object to pick up a first virtual item in the package of virtual items comprises:
when the types of the first virtual props in the virtual prop packages comprise at least two types, presenting the lethality index of each first virtual prop in the virtual prop packages aiming at a target virtual object;
and controlling the virtual object to pick up the first virtual prop with the highest lethality index.
14. The method of claim 1, wherein said controlling the virtual object to pick up a first virtual item in the package of virtual items comprises:
when the types of the first virtual props in the virtual prop package comprise at least two types, presenting the times of picking up each first virtual prop;
and controlling the virtual object to pick up the first virtual prop with the maximum picking-up times.
15. The method of claim 1, wherein said controlling the virtual object to pick up a first virtual item in the package of virtual items comprises:
when the types of the first virtual props in the virtual prop package comprise at least two types, displaying the at least two types of first virtual props in different display modes;
acquiring the assigned role of the virtual object in the team;
and controlling the virtual object to pick up a first virtual prop of a target type matched with the role.
16. The method of claim 1, wherein said presenting attribute change indicating information for the second virtual item comprises:
and presenting an attribute change special effect of the second virtual item, wherein the attribute change special effect is used for indicating that the attribute of the second virtual item equipped by the virtual object changes.
17. The method of claim 1, wherein prior to said controlling said virtual object to pick up a first virtual item in said package of virtual items, said method further comprises:
presenting an attribute value of the second virtual prop, wherein the attribute value is a first attribute value;
correspondingly, the presenting of the attribute change indication information of the second virtual item includes:
and the attribute value of the second virtual prop is presented and changed from the first attribute value to the second attribute value.
18. The method of claim 1, wherein said presenting attribute change indicating information for the second virtual item comprises:
and when the second virtual prop is a virtual protection prop carried or worn by the virtual object, presenting indication information that the protection attribute of the virtual protection prop changes.
19. The method of claim 1, wherein after presenting a virtual item package containing at least two first virtual items in a screen of a virtual scene, the method further comprises:
presenting countdown corresponding to the virtual item packet, and when the countdown is zero, canceling to present the virtual item packet in a picture of a virtual scene; or,
when the virtual object throwing the virtual item package throws the virtual item package again, canceling the presentation of the virtual item package in the picture of the virtual scene; or,
when a virtual object which throws the virtual prop packet is attacked and died in a virtual scene, canceling to present the virtual prop packet in a picture of the virtual scene; or,
and when the first virtual item in the virtual item packet is picked up completely, canceling the presentation of the virtual item packet in the picture of the virtual scene.
20. An apparatus for processing a virtual item, the apparatus comprising:
the first presentation module is used for presenting a virtual item packet containing at least two first virtual items in a picture of a virtual scene;
the motion control module is used for responding to a motion instruction aiming at a virtual object in the picture, and controlling the virtual object to move towards the virtual item packet;
the picking control module is used for controlling the virtual object to pick up a first virtual prop in the virtual prop packet when the virtual object moves to a target area and the virtual prop packet is in a picking state;
and the second presentation module is used for presenting attribute change indication information of the second virtual item based on the picked first virtual item, wherein the attribute change indication information is used for indicating that the attribute of the second virtual item equipped by the virtual object changes.
21. An electronic device, comprising:
a memory for storing executable instructions;
a processor, configured to execute the executable instructions stored in the memory, and implement the processing method of the virtual item according to any one of claims 1 to 19.
22. A computer-readable storage medium storing executable instructions for implementing the method of processing a virtual item of any one of claims 1 to 19 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011063621.6A CN112121433B (en) | 2020-09-30 | 2020-09-30 | Virtual prop processing method, device, equipment and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112121433A true CN112121433A (en) | 2020-12-25 |
CN112121433B CN112121433B (en) | 2023-05-30 |
Family
ID=73843577
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011063621.6A Active CN112121433B (en) | 2020-09-30 | 2020-09-30 | Virtual prop processing method, device, equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112121433B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022227958A1 (en) * | 2021-04-25 | 2022-11-03 | 腾讯科技(深圳)有限公司 | Virtual carrier display method and apparatus, device, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108509039A (en) * | 2018-03-27 | 2018-09-07 | 腾讯科技(深圳)有限公司 | Method, apparatus, equipment and the storage medium of article are picked up in virtual environment |
CN110721468A (en) * | 2019-09-30 | 2020-01-24 | 腾讯科技(深圳)有限公司 | Interactive property control method, device, terminal and storage medium |
CN110841290A (en) * | 2019-11-08 | 2020-02-28 | 腾讯科技(深圳)有限公司 | Processing method and device of virtual prop, storage medium and electronic device |
JP2020044139A (en) * | 2018-09-19 | 2020-03-26 | 株式会社コロプラ | Game program, game method, and information processor |
CN111330274A (en) * | 2020-02-20 | 2020-06-26 | 腾讯科技(深圳)有限公司 | Virtual object control method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112121433B (en) | 2023-05-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113181650B (en) | Control method, device, equipment and storage medium for calling object in virtual scene | |
CN112121414B (en) | Tracking method and device in virtual scene, electronic equipment and storage medium | |
US20230347244A1 (en) | Method and apparatus for controlling object in virtual scene, electronic device, storage medium, and program product | |
JP7447296B2 (en) | Interactive processing method, device, electronic device and computer program for virtual tools | |
CN110917623B (en) | Interactive information display method, device, terminal and storage medium | |
CN113181649B (en) | Control method, device, equipment and storage medium for calling object in virtual scene | |
CN113633964B (en) | Virtual skill control method, device, equipment and computer readable storage medium | |
US20230072503A1 (en) | Display method and apparatus for virtual vehicle, device, and storage medium | |
KR20220083803A (en) | Method, apparatus, medium and program product for state switching of virtual scene | |
CN112402946B (en) | Position acquisition method, device, equipment and storage medium in virtual scene | |
CN112057863A (en) | Control method, device and equipment of virtual prop and computer readable storage medium | |
CN113457151A (en) | Control method, device and equipment of virtual prop and computer readable storage medium | |
CN112044073A (en) | Using method, device, equipment and medium of virtual prop | |
CN114432701A (en) | Ray display method, device and equipment based on virtual scene and storage medium | |
CN112121432B (en) | Control method, device and equipment of virtual prop and computer readable storage medium | |
CN112717394B (en) | Aiming mark display method, device, equipment and storage medium | |
CN112156472B (en) | Control method, device and equipment of virtual prop and computer readable storage medium | |
CN112121433B (en) | Virtual prop processing method, device, equipment and computer readable storage medium | |
CN113633991B (en) | Virtual skill control method, device, equipment and computer readable storage medium | |
CN114146413B (en) | Virtual object control method, device, equipment, storage medium and program product | |
WO2024183473A1 (en) | Virtual scene display method and apparatus, and device, storage medium and program product | |
KR20240046594A (en) | Partner object control methods and devices, and device, media and program products | |
CN116726499A (en) | Position transfer method, device, equipment and storage medium in virtual scene | |
CN118286699A (en) | Interaction method, device, equipment, medium and program product based on virtual scene | |
CN117654038A (en) | Interactive processing method and device for virtual scene, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |