CN112138374A - Virtual object attribute value control method, computer device, and storage medium - Google Patents


Info

Publication number
CN112138374A
CN112138374A (application number CN202011103315.0A)
Authority
CN
China
Prior art keywords
limb
virtual object
value
virtual
attribute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011103315.0A
Other languages
Chinese (zh)
Other versions
CN112138374B (en)
Inventor
葛懿宸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011103315.0A priority Critical patent/CN112138374B/en
Publication of CN112138374A publication Critical patent/CN112138374A/en
Application granted granted Critical
Publication of CN112138374B publication Critical patent/CN112138374B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/45 Controlling the progress of the video game
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets

Abstract

The embodiment of the application discloses a virtual object attribute value control method, a computer device, and a storage medium, belonging to the technical field of virtual scenes. The method comprises the following steps: in response to a designated limb part of a virtual object being hit by a virtual bullet, extending the travel trajectory of the virtual bullet as a ray, and determining the extended ray as an extension ray; acquiring the collision situation between the extension ray and at least two limb parts of the virtual object; determining an actual hit part according to the collision situation; and modifying the respective specified attribute values of the at least two limb parts according to the actual hit part. The scheme performs ray detection on the virtual bullet to determine the limb part whose specified attribute value is to be modified preferentially. Modifying the specified attribute values of the limb parts in this way shortens the survival time of a virtual object while it is being hit, avoiding an unnecessary prolonging of the virtual scene's duration and thereby saving the power and data traffic consumed by the terminal.

Description

Virtual object attribute value control method, computer device, and storage medium
Technical Field
The present application relates to the field of virtual scene technologies, and in particular, to a virtual object attribute value control method, a computer device, and a storage medium.
Background
In game applications in which virtual objects fight by tracking blood volume, for example first-person shooter games, the life value of an attacked virtual object is reduced after it is hit, and the body of the virtual object may be divided into different parts in order to simulate a realistic fight.
In the related art, when a virtual object is hit by a bullet, the hit part of the virtual object can be determined by collision detection, so that the blood volume corresponding to that hit part can be reduced.
However, in the related art, because the actually hit part is determined directly by collision detection and only its blood volume is reduced, the part the bullet was aimed at may be shielded by another part, so the aimed part's blood volume cannot be reduced. For example, when a bullet fired along its current trajectory would hit the chest of a virtual object, but an arm of the virtual object happens to shield the chest, the bullet can only hit the arm, and only the arm's blood volume is reduced. As a result, both sides of a firefight must spend a long time hitting vital parts before an opponent can be eliminated, a single round of combat takes a long time, and the power and data traffic consumed by the terminal are wasted.
Disclosure of Invention
The embodiment of the application provides a virtual object attribute value control method, a computer device, and a storage medium. The technical solution is as follows:
in one aspect, an embodiment of the present application provides a method for controlling a virtual object attribute value, where the method includes:
in response to a designated limb part of a virtual object being hit by a virtual bullet, extending the travel trajectory of the virtual bullet as a ray, and determining the extended ray as an extension ray; the virtual object comprises at least two limb parts;
acquiring the collision situation between the extension ray and the at least two limb parts of the virtual object;
determining an actual hit part according to the collision situation; the actual hit part is the limb part whose specified attribute value is modified preferentially;
and modifying the respective specified attribute values of the at least two limb parts according to the actual hit part.
In one aspect, an embodiment of the present application provides a method for controlling a virtual object attribute value, where the method includes:
displaying a virtual scene picture, the virtual scene picture comprising a virtual object and the specified attribute values of the virtual object; the virtual object comprises at least two limb parts, each of which has a corresponding specified attribute value;
in response to a designated limb part of the virtual object being hit by a virtual bullet, displaying the specified attribute values of the at least two limb parts as modified according to an actual hit part; the actual hit part is determined according to the collision situation between an extension ray and the at least two limb parts of the virtual object; the extension ray is obtained by extending the travel trajectory of the virtual bullet as a ray.
On the other hand, an embodiment of the present application provides a virtual object attribute value control apparatus, where the apparatus includes:
the ray determination module is used for, in response to a designated limb part of a virtual object being hit by a virtual bullet, extending the travel trajectory of the virtual bullet as a ray and determining the extended ray as an extension ray; the virtual object comprises at least two limb parts;
the collision acquisition module is used for acquiring the collision situations between the extension ray and the at least two limb parts of the virtual object;
the part determination module is used for determining an actual hit part according to the collision situations; the actual hit part is the limb part whose specified attribute value is modified preferentially;
and the attribute value modification module is used for modifying the respective specified attribute values of the at least two limb parts according to the actual hit part.
In one possible implementation, the at least two limb parts include at least one of a chest part, a head-and-neck part, a limbs part (the arms and legs), and a stomach part;
the designated limb part includes the limbs part.
In one possible implementation, the location determining module includes:
and the first part determination submodule is used for, in response to the extension ray colliding with the chest part or the head-and-neck part, determining the limb part that the extension ray collides with as the actual hit part.
In one possible implementation, the location determining module includes:
and the second part determination submodule is used for, in response to the extension ray colliding with the limbs part or the stomach part, determining the designated limb part as the actual hit part.
In one possible implementation, the location determining module includes:
and the third part determination submodule is used for, in response to the extension ray not colliding with any of the at least two limb parts, determining the designated limb part as the actual hit part.
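The three submodule rules above can be sketched as a single selection function. The following is a minimal Python sketch, assuming illustrative part names (`chest`, `head_neck`, `limbs`, `stomach`) and that the engine reports, in hit order, the parts the extension ray passes through; none of these identifiers come from the patent itself:

```python
# Hypothetical sketch of the actual-hit-part rules; part names are
# illustrative, not taken from the patent.
PRIORITY_PARTS = {"chest", "head_neck"}  # first-type limb parts

def determine_actual_hit(designated_part, ray_collisions):
    """Pick the limb part whose specified attribute value is modified first.

    designated_part: the part the bullet physically struck (e.g. a blocking arm).
    ray_collisions: parts the extension ray collides with, in hit order.
    """
    for part in ray_collisions:
        if part in PRIORITY_PARTS:
            # Rule 1: the ray reaches the chest or head-and-neck behind the block
            return part
    # Rules 2 and 3: the ray only meets limbs/stomach, or collides with nothing,
    # so the designated (physically hit) part remains the actual hit part
    return designated_part
```

For instance, a bullet that strikes a blocking arm while the extension ray continues into the chest resolves to the chest, which is the shielding scenario the background section describes.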
In one possible implementation manner, the attribute value modification module includes:
a first modification submodule, configured to, in response to the actual hit part being a first-type limb part, deduct the loss value from the specified attribute value of the actual hit part; the first-type limb part comprises at least one of the chest part and the head-and-neck part;
and a state determination submodule, configured to determine that the virtual object is in a defeated state in response to the specified attribute value being reduced to zero.
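Deduction for a first-type part together with the defeated-state check might look like the following Python sketch; the function and part names are hypothetical, not from the patent:

```python
def apply_priority_hit(health, part, damage):
    """Deduct damage from a first-type part (chest or head-and-neck).

    health: dict mapping limb part -> specified attribute value.
    Returns True when the value reaches zero, i.e. the virtual
    object enters the defeated state.
    """
    health[part] = max(0, health[part] - damage)
    return health[part] == 0
```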
In one possible implementation manner, the attribute value modification module includes:
a second modification submodule, configured to, in response to the actual hit part being a second-type limb part, deduct the loss value from the specified attribute value of the actual hit part; the second-type limb part comprises at least one of the limbs part and the stomach part;
a residual value determination submodule, configured to determine a residual loss value in response to the specified attribute value of the actual hit part being reduced to zero; the residual loss value is the difference between the loss value corresponding to the virtual bullet and the specified attribute value of the actual hit part;
a mean value acquisition submodule, configured to, in response to the residual loss value existing, divide the residual loss value equally according to the number of the other limb parts to obtain a residual loss mean value;
and a mean value deduction submodule, configured to deduct the residual loss mean value from the specified attribute value of each of the other limb parts.
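The overflow rule for second-type parts (deduct from the hit part, then split the residual loss value equally across the other parts) could be sketched as follows. This is a hedged illustration with hypothetical names, not the patent's own code:

```python
def apply_shielding_hit(health, part, damage):
    """Deduct damage from a second-type part (a limb or the stomach).

    Any residual loss value (damage beyond the part's remaining
    attribute value) is divided equally among the other limb parts.
    """
    absorbed = min(health[part], damage)
    health[part] -= absorbed
    residual = damage - absorbed              # residual loss value
    if residual > 0:
        others = [p for p in health if p != part]
        share = residual / len(others)        # residual loss mean value
        for p in others:
            health[p] = max(0, health[p] - share)
    return health
```

With 10 attribute points left on the limbs and 40 incoming damage, the 30 overflowing points are split 10 apiece among the three remaining parts.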
In one possible implementation, the apparatus further includes:
the gain determination module is used for, in response to the virtual object using a designated virtual prop, determining a total gain value corresponding to the designated virtual prop;
the loss part acquisition module is used for acquiring each limb part whose specified attribute value has been lost as an attribute loss part;
the value determination module is used for sequentially allocating the total gain value according to the priority corresponding to each attribute loss part, and determining the gain value corresponding to each attribute loss part;
and the attribute value restoration module is used for sequentially restoring the specified attribute values of the attribute loss parts based on the gain values corresponding to the attribute loss parts.
In one possible implementation manner, the numerical value determination module includes:
the priority part determination submodule is used for determining a main gain part among the attribute loss parts based on a touch operation;
the main value determination submodule is used for allocating, from the total gain value, the gain value corresponding to the main gain part;
the priority acquisition submodule is used for, in response to the total gain value being greater than the attribute loss value corresponding to the main gain part, acquiring the priorities corresponding to the other attribute loss parts;
and the value determination submodule is used for sequentially allocating gain values to the other attribute loss parts from the residual gain value according to their corresponding priorities; the residual gain value is the difference between the total gain value and the attribute loss value corresponding to the main gain part.
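The gain allocation these submodules describe (grant the player-selected main gain part first, then spend the remaining gain value on the other damaged parts in priority order) can be illustrated in Python; the signature and names are assumptions made for the sketch:

```python
def allocate_gain(total_gain, losses, main_part, priority):
    """Distribute a prop's total gain value across attribute loss parts.

    losses: dict mapping each attribute loss part -> lost attribute value.
    main_part: the part the player selected via the touch operation.
    priority: ordered part names used for the remaining allocation.
    Returns the gain value granted to each part.
    """
    restored = {}
    pool = total_gain
    order = [main_part] + [p for p in priority if p != main_part and p in losses]
    for part in order:
        if pool <= 0:
            break
        grant = min(losses[part], pool)   # never restore beyond the loss
        restored[part] = grant
        pool -= grant
    return restored
```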
In one possible implementation, the priority part determination submodule includes:
the control display unit is used for displaying a selection control corresponding to each attribute loss part;
and the priority part determination unit is used for, in response to receiving a touch operation on a target control among the selection controls, determining the attribute loss part corresponding to the target control as the main gain part.
In one possible implementation, the apparatus further includes:
the icon display module is used for displaying an attribute value display icon corresponding to the virtual object; the attribute value display icon is used for displaying the at least two limb parts and the specified attribute value of each limb part;
and the icon superposition module is used for, in response to the specified attribute value of a limb part being zero, displaying a damaged icon superimposed on the attribute value display icon.
On the other hand, an embodiment of the present application provides a virtual object attribute value control apparatus, where the apparatus includes:
the picture display module is used for displaying a virtual scene picture, the virtual scene picture comprising a virtual object and the specified attribute values of the virtual object; the virtual object comprises at least two limb parts, each of which has a corresponding specified attribute value;
and the attribute value display module is used for, in response to a designated limb part of the virtual object being hit by a virtual bullet, displaying the specified attribute values of the at least two limb parts as modified according to an actual hit part; the actual hit part is determined according to the collision situation between an extension ray and the at least two limb parts of the virtual object; the extension ray is obtained by extending the travel trajectory of the virtual bullet as a ray.
In another aspect, an embodiment of the present application provides a terminal, where the terminal includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or a set of instructions, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the virtual object attribute value control method according to the foregoing aspect.
In another aspect, an embodiment of the present application provides a computer-readable storage medium, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the virtual object attribute value control method according to the above aspect.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the terminal executes the virtual object attribute value control method provided in the various alternative implementations of the above-described aspect.
The technical scheme provided by the embodiment of the application has the beneficial effects that at least:
In the embodiments of the application, ray detection is performed on the virtual bullet to determine the collision situation between the extension ray and the limb parts of the virtual object, and thus the limb part whose specified attribute value is modified preferentially, so that the respective specified attribute values of the limb parts can be modified accordingly. This solves the problem that a virtual object uses its limbs to shield the chest and head-and-neck parts so that damage is absorbed by the limbs: for example, when a limb shields the chest or head-and-neck part, the collision situation between the extension ray and the limb parts makes it possible to determine that the part whose specified attribute value should be modified preferentially is the chest or head-and-neck part. This avoids an unnecessary prolonging of the virtual scene's duration and thereby saves the power and data traffic consumed by the terminal.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic diagram of a virtual object property value control system provided in an exemplary embodiment of the present application;
FIG. 2 is a schematic illustration of a display interface of a virtual scene provided by an exemplary embodiment of the present application;
FIG. 3 is a schematic illustration of a flow of virtual object attribute value control provided by an exemplary embodiment of the present application;
FIG. 4 is a schematic illustration of a flow of virtual object attribute value control provided by an exemplary embodiment of the present application;
FIG. 5 is a flowchart of a method for controlling a virtual object attribute value according to an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram of an extended ray determination according to the embodiment of FIG. 5;
FIG. 7 is a schematic diagram of an attribute value display icon in a designated interface according to the embodiment shown in FIG. 5;
FIG. 8 is a schematic view of an attribute value display icon in the battle interface according to the embodiment shown in FIG. 5;
FIG. 9 is a schematic illustration of the determination of an actual hit location in relation to the embodiment of FIG. 5;
FIG. 10 is a schematic illustration of a battle interface with the virtual object in a defeated state according to the embodiment shown in FIG. 5;
FIG. 11 is a schematic diagram of equally dividing a residual loss value in a designated interface according to the embodiment shown in FIG. 5;
FIG. 12 is a flow diagram of injury determination logic according to the embodiment of FIG. 5;
fig. 13 is a block diagram illustrating a configuration of a virtual object attribute value control apparatus according to an exemplary embodiment of the present application;
fig. 14 is a block diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Virtual scene: a scene displayed (or provided) when an application runs on a terminal. The virtual scene may be a simulation of a real-world environment, a semi-simulated, semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments take a three-dimensional virtual scene as an example, without limitation. Optionally, the virtual scene may also be used for a battle between at least two virtual characters, for a virtual firearm fight between at least two virtual characters, or for a fight using virtual firearms within a target area of the virtual scene that keeps shrinking over time.
Virtual object: a movable object in a virtual scene. The movable object may be at least one of a virtual character, a virtual animal, and a virtual vehicle. Optionally, when the virtual scene is a three-dimensional virtual scene, the virtual object is a three-dimensional model created with a skeletal animation technique. Each virtual object has its own shape, volume, and orientation in the three-dimensional virtual scene and occupies part of the space in that scene.
A virtual scene is typically generated by an application in a computer device such as a terminal and rendered on the terminal's hardware (e.g., its screen). The terminal may be a mobile terminal such as a smartphone, tablet computer, or e-book reader; alternatively, it may be a personal computer such as a laptop or desktop computer.
Virtual props: tools that a virtual object can use in the virtual environment, including virtual weapons that can injure other virtual objects, such as pistols, rifles, sniper rifles, daggers, knives, swords, and axes; supply props such as bullets; virtual attachments such as quick magazines, scopes, and silencers that are mounted on a designated virtual weapon and provide it with added attributes; and defensive props such as shields, armor, and armored vehicles.
First-person shooter (FPS) game: a shooting game played from a first-person perspective, in which the picture of the virtual environment is observed from the perspective of a first virtual object. In the game, at least two virtual objects fight a single-round battle in the virtual environment. A virtual object survives by avoiding damage initiated by other virtual objects and dangers present in the virtual environment (such as a poison zone or a swamp); when a virtual object's life value in the virtual environment drops to zero, its life there ends, and the virtual objects that ultimately survive are the winners. Optionally, a battle starts when the first client joins and ends when the last client exits, and each client may control one or more virtual objects in the virtual environment. Optionally, the competitive modes of the battle may include a solo mode, a duo mode, or a squad mode; the battle mode is not limited in the embodiments of the present application.
Referring to fig. 1, a schematic diagram of a virtual object attribute value control system according to an embodiment of the present application is shown. The virtual object attribute value control system may include: a first terminal 110, a server 120, and a second terminal 130.
The first terminal 110 has installed and runs an application 111 supporting a virtual environment, and the application 111 may be a multiplayer online battle program. When the first terminal runs the application 111, a user interface of the application 111 is displayed on the screen of the first terminal 110. The application 111 may be any one of a military simulation program, a multiplayer online battle arena (MOBA) game, a battle-royale shooting game, and a simulation strategy game (SLG). In this embodiment, the application 111 is described as a first-person shooter (FPS) game by way of example. The first terminal 110 is a terminal used by the first user 112, who uses it to control a first virtual object located in the virtual environment to perform activities; the first virtual object may be referred to as the master virtual object of the first user 112. The activities of the first virtual object include, but are not limited to: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up items, shooting, attacking, throwing, and releasing skills. Illustratively, the first virtual object is a first virtual character, such as a simulated character or an animated character.
The second terminal 130 has installed and runs an application 131 supporting a virtual environment, and the application 131 may be a multiplayer online battle program. When the second terminal 130 runs the application 131, a user interface of the application 131 is displayed on the screen of the second terminal 130. The client may be any one of a military simulation program, a MOBA game, a battle-royale shooting game, and an SLG game; in this embodiment, the application 131 is described as an FPS game by way of example. The second terminal 130 is a terminal used by the second user 132, who uses it to control a second virtual object located in the virtual environment to perform activities; the second virtual object may be referred to as the master virtual character of the second user 132. Illustratively, the second virtual object is a second virtual character, such as a simulated character or an animated character.
Optionally, the first virtual object and the second virtual object are in the same virtual world. Optionally, the first virtual object and the second virtual object may belong to the same camp, the same team, the same organization, a friend relationship, or a temporary communication right. Alternatively, the first virtual object and the second virtual object may belong to different camps, different teams, different organizations, or have a hostile relationship.
Optionally, the applications installed on the first terminal 110 and the second terminal 130 are the same, or are the same type of application on different operating system platforms (Android or iOS). The first terminal 110 may generally refer to one of a plurality of terminals, and the second terminal 130 may generally refer to another of the plurality; this embodiment is only illustrated with the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different and include at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop portable computer, and a desktop computer.
Only two terminals are shown in FIG. 1, but in different embodiments a plurality of other terminals may access the server 120. Optionally, one or more of the terminals correspond to developers: a development and editing platform supporting the application in the virtual environment is installed on such a terminal, the developer can edit and update the application on it and transmit the updated application installation package to the server 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 can download the installation package from the server 120 to update the application.
The first terminal 110, the second terminal 130, and other terminals are connected to the server 120 through a wireless network or a wired network.
The server 120 includes at least one of a server, a server cluster composed of a plurality of servers, a cloud computing platform, and a virtualization center. The server 120 is used to provide background services for applications that support a three-dimensional virtual environment. Optionally, the server 120 undertakes primary computational work and the terminals undertake secondary computational work; alternatively, the server 120 undertakes the secondary computing work and the terminal undertakes the primary computing work; alternatively, the server 120 and the terminal perform cooperative computing by using a distributed computing architecture.
In one illustrative example, the server 120 includes a memory 121, a processor 122, a user account database 123, a combat services module 124, and a user-oriented Input/Output Interface (I/O Interface) 125. The processor 122 is configured to load an instruction stored in the server 120, and process data in the user account database 123 and the combat service module 124; the user account database 123 is configured to store data of a user account used by the first terminal 110, the second terminal 130, and other terminals, such as a head portrait of the user account, a nickname of the user account, a fighting capacity index of the user account, and a service area where the user account is located; the fight service module 124 is used for providing a plurality of fight rooms for the users to fight, such as 1V1 fight, 3V3 fight, 5V5 fight and the like; the user-facing I/O interface 125 is used to establish communication with the first terminal 110 and/or the second terminal 130 through a wireless network or a wired network to exchange data.
The virtual scene may be a three-dimensional virtual scene, or the virtual scene may also be a two-dimensional virtual scene. Taking the example that the virtual scene is a three-dimensional virtual scene, please refer to fig. 2, which shows a schematic view of a display interface of the virtual scene according to an exemplary embodiment of the present application. As shown in fig. 2, the display interface of the virtual scene includes a scene screen 200, and the scene screen 200 includes a currently controlled virtual object 210, an environment screen 220 of the three-dimensional virtual scene, and a virtual object 240. The virtual object 240 may be a virtual object controlled by a user or a virtual object controlled by an application program corresponding to other terminals.
In FIG. 2, the currently controlled virtual object 210 and the virtual object 240 are three-dimensional models in the three-dimensional virtual scene, and the environment picture of the three-dimensional virtual scene displayed in the scene picture 200 consists of the objects observed from the perspective of the currently controlled virtual object 210; for example, as shown in FIG. 2, the environment picture 220 displayed from that perspective includes the ground 224, the sky 225, the horizon 223, a hill 221, and a factory building 222.
The currently controlled virtual object 210 may release skills, use virtual props, move, and perform specified actions under the user's control, and the virtual objects in the virtual scene may present different three-dimensional models under the user's control. For example, if the terminal's screen supports touch operations and the scene picture 200 of the virtual scene contains virtual controls, then when the user touches a virtual control, the currently controlled virtual object 210 performs the specified action in the virtual scene and presents the currently corresponding three-dimensional model.
By using the virtual object attribute value control method, the computer device may determine that a virtual object in a virtual scene is shot by a virtual bullet and then modify the respective specified attribute values of at least two limb parts of the virtual object. Fig. 3 shows a schematic flow of virtual object attribute value control provided in an exemplary embodiment of the present application. The method may be executed by a computer device, which may be a terminal, a server, or a combination of the two. As shown in fig. 3, the computer device may control the attribute values of the virtual object by performing the following steps.
Step 301: in response to a designated limb part of the virtual object being hit by a virtual bullet, extend the trajectory of the virtual bullet as a ray, and determine the extended ray as the extension ray; the virtual object includes at least two limb parts.
In the embodiment of the application, when a designated limb part of a virtual object is hit by a virtual bullet, the computer device acquires the trajectory of the virtual bullet, extends the trajectory as a ray after the virtual bullet collides and stops moving, and determines the extended ray as the extension ray of the virtual bullet.
In one possible implementation, a virtual object is present in the virtual scene, and the body of the virtual object is divided into at least two limb parts.
A limb part is an area on the body of the virtual object for which a specified attribute value can be counted separately. The extension ray is the ray obtained by extending the bullet's trajectory beyond the point where the bullet hit the virtual object.
Step 302: acquire the collision situation between the extension ray and the at least two limb parts of the virtual object.
In the embodiment of the application, when a virtual object is hit by a virtual bullet, the virtual bullet stops moving after the collision, which determines the actual trajectory of the virtual bullet. After the virtual bullet stops, the trajectory is extended as a ray based on ray detection, and the computer device acquires the collision situation between the extension ray and the limb parts of the virtual object.
The collision situation is either that the extension ray collides with the virtual object or that the extension ray does not collide with the virtual object.
Step 303: determine the actual hit part according to the collision situation; the actual hit part is the limb part whose specified attribute value is modified preferentially.
In the embodiment of the application, the computer device determines the actual hit part corresponding to the acquired collision situation.
In one possible implementation, different actual hit parts are determined depending on whether or not the extension ray collides with the virtual object.
Step 304: modify the respective specified attribute values of the at least two limb parts according to the actual hit part.
In the embodiment of the application, the computer device modifies the specified attribute values of the at least two limb parts of the virtual object according to the determined actual hit part.
In one possible implementation, the specified attribute values of the at least two limb parts are decreased according to the determined actual hit part.
For example, when the specified attribute value is a life value, the computer device may decrease the life values of the at least two limb parts based on the actual hit part.
In summary, in the solution shown in this embodiment, ray detection is performed on the virtual bullet to determine the collision situation between the extension ray and the limb parts of the virtual object, and the limb part whose specified attribute value is modified preferentially is determined accordingly. This solves the problem that damage aimed at the chest part or the head and neck part is shared by the four limbs when the virtual object shields those parts with its limbs: when the extension ray passes through a shielding limb and collides with the chest part or the head and neck part behind it, the chest part or the head and neck part is determined as the limb part whose specified attribute value is modified preferentially. The survival time of a virtual object under fire is thereby shortened, unnecessary prolonging of the virtual scene is avoided, and the power and data traffic consumed by the terminal are saved.
Fig. 4 is a diagram illustrating a flow of virtual object attribute value control according to an exemplary embodiment of the present application. The method may be executed by a computer device, which may be a terminal, a server, or a combination of the two. As shown in fig. 4, the computer device may control the attribute values of the virtual object by performing the following steps.
Step 401: display a virtual scene screen, where the virtual scene screen includes a virtual object and the specified attribute values of the virtual object; the virtual object includes at least two limb parts, and each of the at least two limb parts has its own specified attribute value.
Step 402: in response to a designated limb part of the virtual object being hit by a virtual bullet, display the modified specified attribute values of the at least two limb parts according to the actual hit part; the actual hit part is determined according to the collision situation between the extension ray and the at least two limb parts of the virtual object, and the extension ray is obtained by extending the trajectory of the virtual bullet.
In summary, in the solution shown in this embodiment, ray detection is performed on the virtual bullet to determine the collision situation between the extension ray and the limb parts of the virtual object, and the limb part whose specified attribute value is modified preferentially is determined accordingly. This solves the problem that damage aimed at the chest part or the head and neck part is shared by the four limbs when the virtual object shields those parts with its limbs: when the extension ray passes through a shielding limb and collides with the chest part or the head and neck part behind it, the chest part or the head and neck part is determined as the limb part whose specified attribute value is modified preferentially. The survival time of a virtual object under fire is thereby shortened, unnecessary prolonging of the virtual scene is avoided, and the power and data traffic consumed by the terminal are saved.
Fig. 5 is a flowchart illustrating a method for controlling virtual object attribute values according to an exemplary embodiment of the present application. The method may be executed by a computer device, which may be a terminal, a server, or a combination of the two. As shown in fig. 5, taking the case where the computer device is a terminal, the terminal may control the specified attribute values of the at least two limb parts of the virtual object by performing the following steps.
Step 501: in response to a designated limb part of the virtual object being hit by a virtual bullet, extend the trajectory of the virtual bullet as a ray, and determine the extended ray as the extension ray.
In the embodiment of the application, when the designated limb part of the virtual object is hit by the virtual bullet, the trajectory of the virtual bullet is acquired, the trajectory is extended as a ray, and the extended ray is determined as the extension ray.
The virtual object includes at least two limb parts. A limb part is an area on the body of the virtual object for which a specified attribute value can be counted separately. The designated limb part is one of the at least two limb parts.
In one possible implementation, the at least two limb parts include at least one of a chest part, a head and neck part, a four-limb part, and a stomach part.
The four-limb parts include arm parts and leg parts: the arm parts include a left arm part and a right arm part, and the leg parts include a left leg part and a right leg part.
In one possible implementation, the designated limb part is an arm part among the four-limb parts. When the arm part of the virtual object is hit by the virtual bullet, the trajectory of the virtual bullet is acquired, the trajectory is extended as a ray, and the extended ray is determined as the extension ray.
The at least two limb parts are divided into first-type limb parts and second-type limb parts according to the type of the limb part.
The first-type limb parts include at least one of the chest part and the head and neck part; the second-type limb parts include at least one of the four-limb parts and the stomach part.
For example, fig. 6 illustrates an extension ray determination diagram provided by an exemplary embodiment of the present application. As shown in fig. 6, the first virtual object 61 is a virtual object controlled by another user, and the second virtual object 62 is a virtual object controlled by the terminal. When the first virtual object 61 shoots at the second virtual object 62, a virtual bullet hits the second virtual object 62 along the trajectory 63 at a hit point 65 belonging to the left arm part. The hit point 65 serves as the starting point of the ray extension, and the extension ray 64 is obtained by extending the trajectory 63 of the virtual bullet. The extension ray 64 collides with the second virtual object 62 at the chest part, so the collision point 66 of the extension ray 64 is located at the chest part.
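The extension-ray construction in fig. 6 amounts to a few lines of vector arithmetic. The sketch below is illustrative only (the helper name `extend_ray` and the tuple-based vectors are assumptions, not part of the patent): the extension ray starts at the hit point and keeps the bullet's normalized direction of travel.

```python
def extend_ray(trajectory_start, hit_point):
    """Build the extension ray from a bullet trajectory.

    The ray starts at the hit point (e.g. hit point 65 on the left arm)
    and continues in the bullet's normalized direction of travel, so it
    can be tested for collisions with limb parts behind the hit point.
    """
    direction = tuple(h - s for s, h in zip(trajectory_start, hit_point))
    length = sum(d * d for d in direction) ** 0.5
    unit = tuple(d / length for d in direction)
    return hit_point, unit

# A bullet fired from the origin that hits the left arm at (3, 0, 0)
# yields an extension ray starting at (3, 0, 0) pointing along +x.
origin, unit = extend_ray((0.0, 0.0, 0.0), (3.0, 0.0, 0.0))
```

In an engine, the second ray cast would start slightly beyond the hit point along `unit` to avoid re-hitting the same collider.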
In one possible implementation, the terminal displays an attribute value display icon corresponding to the virtual object.
The attribute value display icon is used to display the at least two limb parts and the specified attribute value of each limb part.
In one possible implementation, the attribute value display icon is presented in a designated interface.
The designated interface may be a display interface of the virtual object that shows the backpack contents and the health condition of the virtual object.
For example, fig. 7 is a schematic diagram illustrating an attribute value display icon in a designated interface according to an exemplary embodiment of the present application. As shown in fig. 7, the attribute value display icon in the designated interface includes a body silhouette 71 of the virtual object, together with the name and the specified attribute value 72 of each limb part.
In another possible implementation, the attribute value display icon is displayed in the battle interface.
The attribute value display icon displayed in the battle interface may show a proportionally scaled-down body silhouette of the virtual object.
For example, fig. 8 is a schematic diagram illustrating an attribute value display icon in a battle interface according to an exemplary embodiment of the present application. As shown in fig. 8, the battle interface includes a virtual object 81 and an attribute value display icon 82 corresponding to the virtual object 81. The attribute value display icon 82 is displayed in the upper left corner of the battle interface and shows a proportionally scaled-down body silhouette of the virtual object. Since the attribute value of the left leg part 84 is zero, the left leg part is displayed in a damaged state, and a damage icon 83 is displayed near it.
Step 502: acquire the collision situation between the extension ray and the at least two limb parts of the virtual object.
In the embodiment of the application, the collision situation between the extension ray and the virtual object is acquired based on a ray detection algorithm.
In one possible implementation, the collision situation indicates whether the extension ray collides with the virtual object and, if a collision occurs, the part type of the limb part with which the extension ray collides.
The part types of the limb parts are divided into first-type limb parts and second-type limb parts.
For example, the first-type limb parts may include the chest part and the head and neck part, whose specified attribute values directly determine the life state of the virtual object; the second-type limb parts may include the four-limb parts and the stomach part, and a decrease of the specified attribute value of a second-type limb part causes a negative effect on the virtual object.
Step 503: determine the actual hit part according to the collision situation.
In the embodiment of the application, the actual hit part corresponding to the acquired collision situation is determined.
The actual hit part is the limb part whose specified attribute value is modified preferentially.
In one possible implementation, the collision situation falls into three cases: in the first case, the extension ray collides with a first-type limb part; in the second case, the extension ray collides with a second-type limb part; in the third case, the extension ray does not collide with any limb part.
Accordingly, determining the actual hit part according to the collision situation can be divided into the following three cases.
1) In response to the extension ray colliding with the chest part or the head and neck part, the limb part colliding with the extension ray is determined as the actual hit part.
When the extension ray collides with a limb part whose part type is the first type, the colliding first-type limb part is determined as the actual hit part.
For example, fig. 9 is a schematic diagram illustrating the determination of an actual hit part according to an exemplary embodiment of the present application. As shown in fig. 9, in the battle interface, the first virtual object 91 controlled by the local terminal is shot by the hostile second virtual object 92 while shielding its head with its left arm. The virtual bullet travels along its trajectory, stops after colliding with the left arm, and the trajectory is then extended as a ray. The obtained extension ray collides with the head and neck part of the first virtual object, so the actual hit part is determined to be the head and neck part. An attribute value display icon 93 is displayed at the upper left corner of the battle interface, and the head and neck part is displayed in a designated manner.
2) In response to the extension ray colliding with a four-limb part or with the stomach part, the designated limb part is determined as the actual hit part.
When the extension ray collides with a limb part whose part type is the second type, the limb part with which the virtual bullet's trajectory collided is determined as the actual hit part; that limb part may be a first-type limb part or a second-type limb part.
In one possible implementation, the limb part with which the trajectory of the virtual bullet collided is the limb part where the starting point of the extension ray is located.
3) In response to the extension ray not colliding with the at least two limb parts, the designated limb part is determined as the actual hit part.
When it is determined that the extension ray does not collide with any limb part, the limb part with which the virtual bullet's trajectory collided is determined as the actual hit part; that limb part may be a first-type limb part or a second-type limb part.
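The three cases above reduce to a small decision rule. The sketch below is a hypothetical condensation (the part names and the `actual_hit_part` helper are illustrative, not identifiers from the patent):

```python
FIRST_TYPE = {"chest", "head_neck"}                  # first-type limb parts
SECOND_TYPE = {"left_arm", "right_arm",
               "left_leg", "right_leg", "stomach"}   # second-type limb parts

def actual_hit_part(designated_part, ray_collision_part):
    """designated_part: the limb hit by the bullet's trajectory.
    ray_collision_part: the limb hit by the extension ray, or None."""
    if ray_collision_part in FIRST_TYPE:
        # case 1: the ray reaches a vital part shielded by the limb
        return ray_collision_part
    # cases 2 and 3: the ray hits a second-type part, or nothing at all,
    # so the part actually struck by the bullet takes the damage
    return designated_part
```

With the fig. 9 situation, `actual_hit_part("left_arm", "head_neck")` returns the head and neck part rather than the shielding arm.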
Step 504: modify the respective specified attribute values of the at least two limb parts according to the actual hit part.
In the embodiment of the application, according to the part type of the actual hit part and its specified attribute value, either only the specified attribute value of the actual hit part is modified, or the specified attribute values of the other limb parts are modified in addition.
In one possible implementation, the scheme for modifying the specified attribute values is determined according to the part type of the actual hit part.
The following schemes for modifying the specified attribute values according to the part type of the actual hit part are provided.
1) In response to the actual hit part being a first-type limb part, the specified attribute value of the actual hit part is deducted, and in response to that specified attribute value decreasing to zero, the virtual object is determined to be in a defeated state.
When the actual hit part is a first-type limb part, its specified attribute value is decreased by the loss value that the virtual bullet inflicts on the specified attribute value. When the specified attribute value of the actual hit part is less than or equal to the loss value of the virtual bullet, the specified attribute value of the actual hit part is set to zero and the virtual object is determined to be in the defeated state.
The defeated state may refer to the virtual object being in a dying state or a dead state; that is, a virtual object in the defeated state can no longer fight.
For example, fig. 10 is a schematic diagram of a battle interface provided by an exemplary embodiment of the present application when a virtual object is in the defeated state. As shown in fig. 10, during a battle, when the actual hit part of the first virtual object is the head and neck part and the loss value is greater than or equal to the specified attribute value of the head and neck part of the first virtual object, the first virtual object turns into a "box" 1001, the attribute value display icon 1002 in the upper left corner of the battle interface shows the head and neck part in black, and the viewing angle of the virtual scene screen changes from the first-person perspective to the third-person perspective.
2) In response to the actual hit part being a second-type limb part, the specified attribute value of the actual hit part is deducted; in response to the specified attribute value of the actual hit part decreasing to zero, a residual loss value is determined; in response to the residual loss value existing, the residual loss value is divided equally by the number of the other limb parts to obtain a residual loss mean value; and the residual loss mean value is deducted from the specified attribute value of each of the other limb parts.
The residual loss value is the difference between the loss value corresponding to the virtual bullet and the specified attribute value of the actual hit part. The second-type limb parts include at least one of the four-limb parts and the stomach part.
When the actual hit part is a second-type limb part and its specified attribute value is smaller than the loss value, the difference between the loss value and the specified attribute value is determined as the residual loss value; the residual loss value is divided equally by the number of the other undamaged limb parts to obtain the residual loss mean value; the specified attribute value of the actual hit part is reduced to zero, so that the actual hit part enters a damaged state, and at the same time the specified attribute values of the other limb parts are each reduced by the residual loss mean value.
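The deduct-and-spread rule for a second-type part can be sketched as follows. This is a minimal illustration (the function name, the part names, and the flat dictionary are assumptions; the later variant with a specified multiple is left out for clarity):

```python
def apply_second_type_hit(hp, hit_part, loss):
    """Deduct `loss` from the hit second-type part; if the part's value
    is exhausted, split the remainder equally over the other parts whose
    value is still non-zero (the residual loss mean value)."""
    absorbed = min(hp[hit_part], loss)
    hp[hit_part] -= absorbed
    residual = loss - absorbed
    if residual > 0:
        others = [p for p, v in hp.items() if p != hit_part and v > 0]
        mean = residual / len(others)
        for p in others:
            hp[p] = max(0, hp[p] - mean)
    return hp

# Fig. 11 numbers: the left leg has 65 points, the bullet deals 95, and
# the remaining 30 points are split over the 6 undamaged parts (5 each).
hp = {"chest": 100, "head_neck": 100, "left_arm": 100, "right_arm": 100,
      "stomach": 100, "left_leg": 65, "right_leg": 100}
apply_second_type_hit(hp, "left_leg", 95)
```

Note the illustrative values of 100 for the other parts are placeholders; the patent's fig. 11 only fixes the left leg's full value (65) and the damage (95).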
In one possible implementation, in response to the specified attribute value of a limb part being zero, a damage icon is displayed superimposed on the attribute value display icon, and the limb part is in the damaged state.
In one possible implementation, a designated color is displayed at the corresponding limb part of the attribute value display icon according to the loss of the specified attribute value of that limb part.
For example, fig. 11 is a schematic diagram illustrating the distribution of a residual loss mean value in a designated interface according to an exemplary embodiment of the present application. As shown in fig. 11, take the case where the specified attribute value is a life value and all initial life values are full. The actual hit part is the left leg part, the damage value of the virtual bullet is 95 points, and the full life value of the left leg part shown at the life value 1103 is 65 points. After the shot, the life value 1103 of the left leg part is zero, a damage icon 1104 is displayed near the display area of the life value 1103, and the left leg part 1102 of the virtual object body silhouette is displayed in black. At the same time, since the other undamaged limb parts are the chest part, the right arm part, the right leg part, the head and neck part, the left arm part, and the stomach part, six parts in total, the residual loss value of 30 points is divided equally into a residual loss mean value of 5 points. The life value of each of the other undamaged limb parts is reduced by 5 points, and those parts 1101 of the virtual object body silhouette are shown in red. When the specified attribute value of a limb part is full or greater than a designated threshold, the corresponding limb part of the virtual object body silhouette may be displayed in green.
In one possible implementation, in response to the actual hit part being a second-type limb part, the specified attribute value of the actual hit part is decreased; in response to the specified attribute value decreasing to zero, a residual loss value is determined; in response to the residual loss value existing, the residual loss value is divided equally by the number of the other limb parts and then multiplied by a specified multiple to obtain a multiplied residual loss mean value; and the specified attribute values of the other limb parts are each reduced by the multiplied residual loss mean value.
In one possible implementation, when the specified attribute value of a second-type limb part is zero, the virtual object does not die; instead, the virtual object is controlled to enter the negative state corresponding to that limb part.
For example, when the blood volume of the left arm part or the right arm part of the virtual object is zero, the hands of the virtual object shake while holding a weapon, simulating the unsteadiness caused by a hand injury. When the blood volume of the left leg part or the right leg part of the virtual object is zero, the camera shakes while the virtual object walks, simulating the unsteady gait caused by a leg injury. When the blood volume of the stomach part of the virtual object is zero, the hydration and hunger of the virtual object are controlled to decline faster, simulating the accelerated loss of water and stamina caused by a stomach injury.
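The mapping from a destroyed second-type part to its negative state can be expressed as a simple lookup. The state names below are paraphrases of the effects just described, not identifiers from the patent:

```python
# Hypothetical negative-state table for second-type limb parts.
NEGATIVE_STATE = {
    "left_arm": "weapon_sway",            # hands shake while holding a weapon
    "right_arm": "weapon_sway",
    "left_leg": "camera_shake_on_move",   # camera shakes while walking
    "right_leg": "camera_shake_on_move",
    "stomach": "fast_hunger_thirst",      # hydration/hunger decay faster
}

def active_negative_states(hp):
    """Collect the negative states implied by zeroed second-type parts."""
    return {NEGATIVE_STATE[p] for p, v in hp.items()
            if v == 0 and p in NEGATIVE_STATE}

states = active_negative_states(
    {"chest": 80, "left_arm": 0, "left_leg": 0, "stomach": 10})
```

First-type parts are deliberately absent from the table: reaching zero there triggers the defeated state instead of a negative effect.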
Step 505: in response to the virtual object using a designated virtual prop, determine the gain value corresponding to each attribute loss part according to the priority of each attribute loss part.
In the embodiment of the application, when the virtual object uses the designated virtual prop to restore the specified attribute values, the gain value corresponding to each attribute loss part is determined according to the priority of each limb part of the virtual object whose specified attribute value has been lost.
In one possible implementation, in response to the virtual object using the designated virtual prop, the total gain value corresponding to the designated virtual prop is determined; the limb parts whose specified attribute values have been lost are acquired as the attribute loss parts; and the total gain value is distributed in sequence according to the priority of each attribute loss part, thereby determining the gain value of each attribute loss part.
The priority of the limb parts may be a fixed order configured by the game designer or an order customized by the user.
For example, take the specified attribute value as the life value. When the virtual object uses the designated virtual prop, the total gain value of the designated virtual prop, that is, its therapeutic value, is 200 points of life value. The chest part has lost 60 points, the left arm part 50 points, the stomach part 60 points, and the right leg part 40 points, so the attribute loss parts are the chest part, the left arm part, the stomach part, and the right leg part. The acquired priority of the limb parts is: chest part > stomach part > left arm part and right leg part; since the life value loss of the right leg part is smaller than that of the left arm part, the left arm part takes priority over the right leg part. The total gain value is distributed in sequence according to these priorities, so the chest part is allocated a 60-point gain value, the stomach part a 60-point gain value, the left arm part a 50-point gain value, and the right leg part a 30-point gain value.
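The allocation in the example above is a greedy pass over the parts in priority order. A minimal sketch (the function name and the part names are assumptions):

```python
def allocate_heal(total, losses, priority):
    """Distribute `total` healing over `losses` (part -> lost points),
    visiting parts in the order given by `priority` and never healing a
    part past the amount it has lost."""
    gains = {}
    for part in priority:
        if part not in losses or total <= 0:
            continue
        gains[part] = min(losses[part], total)
        total -= gains[part]
    return gains

# The example above: 200 healing points, priority chest > stomach >
# left arm > right leg (the larger loss breaks the limb tie).
gains = allocate_heal(
    200,
    {"chest": 60, "left_arm": 50, "stomach": 60, "right_leg": 40},
    ["chest", "stomach", "left_arm", "right_leg"])
```

With these inputs the right leg part receives only the leftover 30 points, matching the worked example.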
In one possible implementation, a main gain part among the attribute loss parts is determined based on a touch operation; a gain value is allocated to the main gain part from the total gain value; in response to the total gain value being greater than the attribute loss value of the main gain part, the priorities of the other attribute loss parts are acquired; and gain values are allocated in sequence to the other attribute loss parts from the remaining gain value according to their priorities.
The main gain part is the attribute loss part to which a gain value is allocated preferentially. The remaining gain value is the difference between the total gain value and the attribute loss value of the main gain part. The other attribute loss parts are the attribute loss parts other than the main gain part.
In one possible implementation, a selection control is displayed for each attribute loss part, and in response to receiving a touch operation on a target control among the selection controls, the attribute loss part corresponding to the target control is determined as the main gain part.
For example, again take the specified attribute value as the life value. When the virtual object uses the designated virtual prop, the total gain value of the designated virtual prop, that is, its therapeutic value, is 200 points of life value, and the chest part has lost 60 points, the left arm part 50 points, the stomach part 60 points, and the right leg part 40 points, so the attribute loss parts are the chest part, the left arm part, the stomach part, and the right leg part. Based on a touch operation on a selection control, the right leg part is selected as the main gain part, and the fixed priority of the limb parts is chest part > stomach part > left arm part, so the gain order is: right leg part > chest part > stomach part > left arm part. The total gain value is distributed in this order, so the right leg part is allocated a 40-point gain value, the chest part a 60-point gain value, the stomach part a 60-point gain value, and the left arm part a 40-point gain value.
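Selecting a main gain part simply moves that part to the front of the priority order before the same greedy allocation runs. Sketch (names are assumptions):

```python
def allocate_heal_with_main(total, losses, priority, main_part):
    """Like the priority-based allocation, but the user-selected main
    gain part is healed first; the rest follow the fixed priority."""
    order = [main_part] + [p for p in priority if p != main_part]
    gains = {}
    for part in order:
        if part not in losses or total <= 0:
            continue
        gains[part] = min(losses[part], total)
        total -= gains[part]
    return gains

# The example above: the right leg is chosen as the main gain part, so
# it is fully healed (40) and the left arm gets only the leftover 40.
gains = allocate_heal_with_main(
    200,
    {"chest": 60, "left_arm": 50, "stomach": 60, "right_leg": 40},
    ["chest", "stomach", "left_arm", "right_leg"],
    "right_leg")
```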
Step 506: restore the specified attribute value of each attribute loss part in sequence based on the gain value corresponding to each attribute loss part.
In the embodiment of the application, the specified attribute value of each attribute loss part is increased by the corresponding gain value.
For example, the method for controlling the attribute values of a virtual object in this embodiment can realize the control of the blood volume loss of the virtual object. The blood volume of the virtual object is divided into the blood volumes of the head, the chest, the left arm, the right arm, the stomach, the left leg, and the right leg, and different blood volume values can be designed for different parts. As long as the blood volume of a part has not reached 0, the blood volume of each part changes independently. When the blood volume of a part reaches 0, the health bar of that part and the corresponding area of the virtual object's silhouette turn black. When the blood volume of the head or the chest is 0, the character dies. When the blood volume of a part other than the head and the chest is 0, the character does not die but suffers the negative state corresponding to that part. If such a part is injured again, the damage is multiplied by a part coefficient and then spread to all parts whose blood volume is not 0.
The blood volume of each part may be displayed in a health tab of a HUD (Head Up Display) interface or of the character backpack. A character silhouette facing the player may be placed in the middle of the health tab, with the health bars placed around it, each health bar corresponding one-to-one to a part of the silhouette. As the blood volume decreases, the colors of the part silhouette and of its health bar gradually transition from green to red. The same character silhouette may be placed in the upper left corner of the HUD interface, with color changes consistent with those in the health tab, so that the player can quickly learn the current state of each part of the character during a battle. When the blood volume of a part reaches 0, an icon expressing "damage" appears; the damage icon may be a red cross mark.
The armor props corresponding to each part can provide protection according to the above part division, which improves the multi-dimensional shooting experience of the user and lets the user directly perceive, in the virtual scene, the influence of part blood volume changes on the virtual object.
Referring to FIG. 12, a flowchart of the injury determination logic provided in an exemplary embodiment of the present application is shown. The specific steps are as follows. When the player is injured (S1201), the hit part of the player is determined (S1202). Hit detection is implemented with capsule colliders, and the specific hit part is identified by the physical material defined for each capsule. If the hit part is the head or chest (S1203), it is determined whether the blood volume of the head or chest is 0; if not, the battle continues with the injury applied (S1204), and if the blood volume is 0, the virtual object controlled by the player is determined to be dead (S1205). If the hit part is another part (S1206), it is determined whether that part's blood volume is 0; if not, the battle continues with the injury applied (S1207), and if the blood volume is 0, a diffuse injury is triggered and spread to all parts whose blood volume is non-zero (S1208).
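The branch structure of FIG. 12 can be summarised compactly. The sketch below only classifies the outcome of a hit (the capsule-based hit detection and the actual deduction are omitted); the part names are assumptions.

```python
def injury_outcome(blood, hit_part):
    """Classify a hit according to the FIG. 12 branches (S1203-S1208)."""
    if hit_part in ("head", "chest"):            # S1203
        # Head/chest at 0 means death; otherwise the battle continues.
        return "dead" if blood[hit_part] == 0 else "continue"  # S1205 / S1204
    # Any other part at 0 triggers a diffuse injury to non-zero parts.
    return "diffuse" if blood[hit_part] == 0 else "continue"   # S1208 / S1207
```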
In summary, in the solution shown in this embodiment of the present application, ray detection is performed on the virtual bullet to determine the collision situation between the extension ray and the limb parts of the virtual object, and thereby the limb part whose designated attribute value is preferentially modified is determined, so that the designated attribute values of the respective limb parts can be modified. This solves the problem that, when the four limbs are used to shield the thorax part and the head-and-neck part, the injury to the virtual object is shared among the limb parts: for example, when the extension ray collides with the thorax part or the head-and-neck part behind a shielding limb, the limb part whose designated attribute value is preferentially modified can be determined to be the chest part or the head-and-neck part. This reduces the survival time of the virtual object while it is being hit, avoids unnecessarily prolonging the duration of the virtual scene, and saves the power and data traffic consumed by the terminal.
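The priority rule summarised above can be reduced to a small decision function. Geometry is abstracted away: the sketch assumes the engine already returns the list of parts the extension ray intersects, nearest first, and the names are illustrative.

```python
FIRST_TYPE = {"chest", "head_neck"}  # parts whose attribute value is modified preferentially

def actual_hit_part(designated_part, ray_collisions):
    """designated_part: the limb part the bullet struck directly.

    ray_collisions: parts intersected by the extension ray, nearest first.
    """
    for part in ray_collisions:
        if part in FIRST_TYPE:
            # A shielded chest/head-neck part takes the damage instead
            # of the limb in front of it.
            return part
    # The ray hit only limbs/stomach, or nothing: the designated part is hit.
    return designated_part
```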
Fig. 13 is a block diagram illustrating a virtual object attribute value control apparatus according to an exemplary embodiment of the present application, where the apparatus may be disposed in the first terminal 110 or the second terminal 130 in the implementation environment shown in fig. 1 or another terminal in the implementation environment, and the apparatus includes:
a ray determining module 1310, configured to, in response to a designated limb part of a virtual object being hit by a virtual bullet, extend a ray along the trajectory of the virtual bullet and determine the ray obtained by the extension as an extension ray; the virtual object comprises at least two limb portions;
a collision obtaining module 1320, configured to obtain collision situations between the extension ray and the at least two limb portions of the virtual object;
a location determining module 1330, configured to determine an actual hit part according to the collision situation; the actual hit part is the limb part whose designated attribute value is preferentially modified;
an attribute value modification module 1340, configured to modify the assigned attribute values of the at least two limb parts according to the actual hit part.
In one possible implementation, the at least two limb portions include at least one of a chest portion, a head and neck portion, a limb (arm or leg) portion, and a stomach portion;
the designated limb portion includes the limb portion.
In one possible implementation, the location determining module 1330 includes:
a first part determining submodule, configured to, in response to the extension ray colliding with the chest part or the head and neck part, determine the limb part that collides with the extension ray as the actual hit part.
In one possible implementation, the location determining module 1330 includes:
a second part determining submodule, configured to, in response to the extension ray colliding with the limb part or the stomach part, determine the designated limb part as the actual hit part.
In one possible implementation, the location determining module 1330 includes:
a third part determining submodule, configured to, in response to the extension ray not colliding with any of the at least two limb parts, determine the designated limb part as the actual hit part.
In one possible implementation, the attribute value modification module 1340 includes:
a first modification submodule, configured to deduct the specified attribute value of the actual hit part in response to the actual hit part being a first type limb part; the first type limb part comprises at least one of the chest part and the head and neck part;
a state determination submodule, configured to determine that the virtual object is in a defeated state in response to the specified attribute value decreasing to zero.
In one possible implementation, the attribute value modification module 1340 includes:
a second modification submodule, configured to deduct the specified attribute value of the actual hit part in response to the actual hit part being a second type limb part; the second type limb part comprises at least one of the limb part and the stomach part;
a residual value determining submodule, configured to determine a residual loss value in response to the specified attribute value of the actual hit part decreasing to zero; the residual loss value is the difference between the loss value corresponding to the virtual bullet and the specified attribute value of the actual hit part;
a mean value obtaining submodule, configured to, in response to the existence of the residual loss value, equally divide the residual loss value according to the number of the other limb parts to obtain a residual loss mean value;
a mean value deduction submodule, configured to deduct the residual loss mean value from the designated attribute values of the other limb parts respectively.
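The deduct-then-diffuse behaviour of these submodules can be sketched as one function. This is an illustration under assumed names; whether already-destroyed parts are excluded from the "other limb parts" is not specified here, so the sketch simply divides among all other parts.

```python
def deduct_second_type(values, hit_part, loss):
    """Deduct `loss` from a second-type part; spread any excess evenly."""
    remaining = values[hit_part] - loss
    if remaining >= 0:
        values[hit_part] = remaining
        return
    values[hit_part] = 0
    residual = -remaining                       # residual loss value
    others = [p for p in values if p != hit_part]
    mean = residual / len(others)               # residual loss mean
    for p in others:
        values[p] = max(0, values[p] - mean)
```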
In one possible implementation, the apparatus further includes:
a gain determining module, configured to determine, in response to the virtual object using a designated virtual prop, a total gain value corresponding to the designated virtual prop;
a loss part acquisition module, configured to acquire each limb part whose specified attribute value has been lost as an attribute loss part;
a value determining module, configured to sequentially allocate the total gain value according to the priority corresponding to each attribute loss part, and determine the gain value corresponding to each attribute loss part;
an attribute value restoration module, configured to sequentially restore the designated attribute values of the attribute loss parts based on the gain values corresponding to the attribute loss parts.
In one possible implementation manner, the numerical value determination module includes:
a main gain part determining submodule, configured to determine a main gain part among the attribute loss parts based on a touch operation;
a main value determining submodule, configured to allocate the gain value corresponding to the main gain part from the total gain value;
a priority obtaining submodule, configured to, in response to the total gain value being greater than the attribute loss value corresponding to the main gain part, obtain the priorities corresponding to the other attribute loss parts;
a value determining submodule, configured to sequentially allocate gain values to the other attribute loss parts from the residual gain value according to the priorities corresponding to the other attribute loss parts; the residual gain value is the difference between the total gain value and the attribute loss value corresponding to the main gain part.
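The priority-based gain allocation can be sketched as below: the user-selected main gain part is restored first, and any residual gain value is handed out to the other damaged parts in priority order. All names, priorities, and values are assumptions for illustration.

```python
def allocate_gain(loss, total_gain, main_part, priority):
    """loss: part -> lost attribute value; priority: part -> rank (lower first).

    Returns part -> gain value allocated to it.
    """
    restored = {}
    give = min(total_gain, loss[main_part])     # main gain part served first
    restored[main_part] = give
    remaining = total_gain - give               # residual gain value
    # Hand out the rest by ascending priority rank.
    for part in sorted((p for p in loss if p != main_part),
                       key=lambda p: priority[p]):
        if remaining <= 0:
            break
        give = min(remaining, loss[part])
        restored[part] = give
        remaining -= give
    return restored
```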
In one possible implementation manner, the main gain part determining submodule includes:
a control display unit, configured to respectively display a selection control corresponding to each attribute loss part;
a main gain part determining unit, configured to, in response to receiving a touch operation on a target control among the selection controls, determine the attribute loss part corresponding to the target control as the main gain part.
In one possible implementation, the apparatus further includes:
an icon display module, configured to display an attribute value display icon corresponding to the virtual object; the attribute value display icon is used to display the at least two limb parts and the designated attribute value of each limb part;
an icon superposition module, configured to display a damage icon superimposed on the attribute value display icon in response to the designated attribute value of a limb part being zero.
In summary, in the solution shown in this embodiment of the present application, ray detection is performed on the virtual bullet to determine the collision situation between the extension ray and the limb parts of the virtual object, and thereby the limb part whose designated attribute value is preferentially modified is determined, so that the designated attribute values of the respective limb parts can be modified. This solves the problem that, when the four limbs are used to shield the thorax part and the head-and-neck part, the injury to the virtual object is shared among the limb parts: for example, when the extension ray collides with the thorax part or the head-and-neck part behind a shielding limb, the limb part whose designated attribute value is preferentially modified can be determined to be the chest part or the head-and-neck part. This reduces the survival time of the virtual object while it is being hit, avoids unnecessarily prolonging the duration of the virtual scene, and saves the power and data traffic consumed by the terminal.
FIG. 14 is a block diagram illustrating the structure of a computer device 1400 in accordance with an exemplary embodiment. The computer device 1400 may be a user terminal, such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, or a desktop computer. The computer device 1400 may also be referred to by other names, such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Generally, computer device 1400 includes: a processor 1401, and a memory 1402.
Processor 1401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1401 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). Processor 1401 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1402 may include one or more computer-readable storage media, which may be non-transitory. Memory 1402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1402 is used to store at least one instruction for execution by processor 1401 to implement all or part of the steps of a method provided by the method embodiments herein.
In some embodiments, computer device 1400 may also optionally include: a peripheral device interface 1403 and at least one peripheral device. The processor 1401, the memory 1402, and the peripheral device interface 1403 may be connected by buses or signal lines. Each peripheral device may be connected to the peripheral device interface 1403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1404, a display 1405, a camera assembly 1406, audio circuitry 1407, a positioning assembly 1408, and a power supply 1409.
The peripheral device interface 1403 can be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1401 and the memory 1402. In some embodiments, the processor 1401, memory 1402, and peripheral interface 1403 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 1401, the memory 1402, and the peripheral device interface 1403 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1404 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 1404 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1404 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1405 is a touch display screen, the display screen 1405 also has the ability to capture touch signals at or above the surface of the display screen 1405. The touch signal may be input to the processor 1401 for processing as a control signal. At this point, the display 1405 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 1405 may be one, providing the front panel of the computer device 1400; in other embodiments, the display 1405 may be at least two, respectively disposed on different surfaces of the computer device 1400 or in a folded design; in still other embodiments, the display 1405 may be a flexible display disposed on a curved surface or on a folded surface of the computer device 1400. Even further, the display 1405 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 1405 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 1406 is used to capture images or video. Optionally, camera assembly 1406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1401 for processing or inputting the electric signals to the radio frequency circuit 1404 to realize voice communication. For stereo capture or noise reduction purposes, the microphones may be multiple and located at different locations on the computer device 1400. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is then used to convert electrical signals from the processor 1401 or the radio frequency circuit 1404 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuit 1407 may also include a headphone jack.
The positioning component 1408 is used to locate the current geographic location of the computer device 1400 to implement navigation or LBS (Location Based Service). The positioning component 1408 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS (Global Navigation Satellite System) of Russia, or the Galileo system of Europe.
The power supply 1409 is used to power the various components of the computer device 1400. The power source 1409 may be alternating current, direct current, disposable or rechargeable. When the power source 1409 comprises a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, computer device 1400 also includes one or more sensors 1410. The one or more sensors 1410 include, but are not limited to: acceleration sensor 1411, gyroscope sensor 1412, pressure sensor 1413, fingerprint sensor 1414, optical sensor 1415, and proximity sensor 1416.
The acceleration sensor 1411 may detect the magnitude of acceleration on three coordinate axes of a coordinate system established with the computer apparatus 1400. For example, the acceleration sensor 1411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1401 can control the touch display screen to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1411. The acceleration sensor 1411 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 1412 may detect a body direction and a rotation angle of the computer device 1400, and the gyro sensor 1412 may cooperate with the acceleration sensor 1411 to collect a 3D motion of the user on the computer device 1400. The processor 1401 can realize the following functions according to the data collected by the gyro sensor 1412: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1413 may be disposed on the side bezel of computer device 1400 and/or underlying touch display screen. When the pressure sensor 1413 is disposed on the side frame of the computer device 1400, the user's holding signal to the computer device 1400 can be detected, and the processor 1401 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1413. When the pressure sensor 1413 is disposed at the lower layer of the touch display screen, the processor 1401 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1414 is used to collect a user's fingerprint, and the processor 1401 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 1414, or the fingerprint sensor 1414 identifies the user's identity according to the collected fingerprint. Upon recognizing the user's identity as a trusted identity, the processor 1401 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1414 may be disposed on the front, back, or side of the computer device 1400. When a physical button or vendor logo is provided on the computer device 1400, the fingerprint sensor 1414 may be integrated with the physical button or vendor logo.
The optical sensor 1415 is used to collect ambient light intensity. In one embodiment, processor 1401 can control the display brightness of the touch display screen based on the ambient light intensity collected by optical sensor 1415. Specifically, when the ambient light intensity is higher, the display brightness of the touch display screen is increased; and when the ambient light intensity is lower, the display brightness of the touch display screen is reduced. In another embodiment, the processor 1401 can also dynamically adjust the shooting parameters of the camera assembly 1406 according to the intensity of the ambient light collected by the optical sensor 1415.
A proximity sensor 1416, also known as a distance sensor, is typically provided on the front panel of the computer device 1400. The proximity sensor 1416 is used to capture the distance between the user and the front of the computer device 1400. In one embodiment, when the proximity sensor 1416 detects that the distance between the user and the front of the computer device 1400 gradually decreases, the processor 1401 controls the touch display screen to switch from the bright screen state to the dark screen state; when the proximity sensor 1416 detects that the distance between the user and the front of the computer device 1400 gradually increases, the processor 1401 controls the touch display screen to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the architecture shown in FIG. 14 is not intended to be limiting of the computer device 1400, and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a non-transitory computer readable storage medium including instructions, such as a memory including at least one instruction, at least one program, set of codes, or set of instructions, executable by a processor to perform all or part of the steps of the method illustrated in the corresponding embodiment of fig. 3 or 5 is also provided. For example, the non-transitory computer readable storage medium may be a ROM (Read-Only Memory), a Random Access Memory (RAM), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the terminal executes the virtual object attribute value control method provided in the various alternative implementations of the above-described aspect.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (15)

1. A virtual object attribute value control method, characterized in that the method comprises:
in response to a designated limb part of a virtual object being hit by a virtual bullet, extending a ray along the trajectory of the virtual bullet, and determining the ray obtained by the extension as an extension ray; the virtual object comprises at least two limb portions;
acquiring collision conditions of the extension ray and the at least two limb parts of the virtual object;
determining an actual hit part according to the collision situation; the actual hit part is the limb part whose designated attribute value is preferentially modified;
and modifying the designated attribute values of the at least two limb parts according to the actual hitting part.
2. The method of claim 1, wherein the at least two limb locations include at least one of a thoracic location, a head and neck location, a limb location, and a gastric location;
the designated limb portion includes the limb portion.
3. The method of claim 2, wherein determining an actual hit location based on the collision includes:
and in response to the collision of the extension ray with the chest part or the head and neck part, determining the limb part collided with the extension ray as the actual hit part.
4. The method of claim 2, wherein determining an actual hit location based on the collision includes:
and determining the designated limb part as the actual hit part in response to the collision of the extension ray with the limb part or the stomach part.
5. The method of claim 2, wherein determining an actual hit location based on the collision includes:
in response to the extended ray not colliding with the at least two limb parts, determining the designated limb part as the actual hit part.
6. The method according to claim 2, wherein the modifying the assigned attribute values for each of the at least two limb portions according to the actual hit location comprises:
in response to the actual hit being a first type limb portion, deducting the specified attribute value of the actual hit; the first type limb site comprises at least one of the chest site, the head and neck site;
determining that the virtual object is in a defeated state in response to the specified attribute value decreasing to zero.
7. The method according to claim 2, wherein the modifying the assigned attribute values for each of the at least two limb portions according to the actual hit location comprises:
deducting the specified attribute value of the actual hit location in response to the actual hit location being a second type limb location; the second type of limb portion comprises at least one of the limb portion and the stomach portion;
determining a residual loss value in response to the designated attribute value for the actual hit location decreasing to zero; the residual loss value is the difference value between the loss value corresponding to the virtual bullet and the specified attribute value of the actual hit part;
in response to the existence of the residual loss value, equally dividing the residual loss value according to the number of other limb parts to obtain a residual loss average value;
and respectively deducting the residual loss mean value from the designated attribute values of other limb parts.
8. The method of claim 1, further comprising:
responding to the virtual object to use a designated virtual prop, and determining a total gain value corresponding to the designated virtual prop;
acquiring each limb part whose specified attribute value has been lost as an attribute loss part;
sequentially distributing the total gain value according to the priority corresponding to each attribute loss part, and determining the gain value corresponding to each attribute loss part;
and sequentially restoring the designated attribute values of the attribute loss parts based on the gain values corresponding to the attribute loss parts.
9. The method of claim 8, wherein the sequentially distributing the total gain value according to the priority corresponding to each attribute loss part, and determining the gain value corresponding to each attribute loss part comprises:
determining a main gain part in each attribute loss part based on touch operation;
assigning the gain value corresponding to the primary gain section from the total gain value;
in response to the total gain value being greater than the attribute loss value corresponding to the main gain part, acquiring the priorities corresponding to the other attribute loss parts;
according to the priorities corresponding to other attribute loss parts, distributing the gain values to other attribute loss parts in sequence from the residual gain values; the residual gain value is a difference between the total gain value and an attribute loss value corresponding to the main gain portion.
10. The method of claim 9, wherein determining the primary gain location of each of the attribute loss locations based on the touch operation comprises:
respectively displaying a selection control corresponding to each attribute loss part;
and determining the attribute loss part corresponding to the target control as the main gain part in response to receiving the touch operation of the target control in the selection controls.
11. The method of claim 1, further comprising:
displaying an attribute value display icon corresponding to the virtual object; the attribute value display icon is used for displaying the at least two limb parts and the designated attribute values of the limb parts;
and responding to the designated attribute value of the limb part being zero, and displaying a damage icon on the attribute value display icon in an overlapping mode.
12. A virtual object attribute value control method, characterized in that the method comprises:
displaying a virtual scene picture, wherein the virtual scene picture comprises a virtual object and the designated attribute values of the virtual object; the virtual object comprises at least two limb parts, each of which has a corresponding designated attribute value;
and in response to a designated limb part of the virtual object being hit by a virtual bullet, displaying the designated attribute values of the at least two limb parts as modified according to an actual hit part; the actual hit part is determined according to the collision situations between an extension ray and the at least two limb parts of the virtual object; the extension ray is obtained by performing ray extension on the trajectory of the virtual bullet.
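Claim 12 determines the actual hit part from the collisions between the extension ray and the limb parts. The following is a minimal illustrative sketch, assuming the limb colliders are simplified to spheres and the ray direction is normalized; the function names and the sphere-collider model are assumptions, not the patent's implementation:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the ray parameter t of the nearest intersection, or None."""
    # Vector from the ray origin to the sphere center.
    ox, oy, oz = (center[i] - origin[i] for i in range(3))
    # Projection of that vector onto the (normalized) ray direction.
    proj = ox * direction[0] + oy * direction[1] + oz * direction[2]
    # Squared perpendicular distance from the sphere center to the ray.
    dist2 = ox**2 + oy**2 + oz**2 - proj**2
    if dist2 > radius**2:
        return None  # the ray misses this collider
    t = proj - math.sqrt(radius**2 - dist2)
    return t if t >= 0 else None  # ignore hits behind the origin

def actual_hit_part(origin, direction, limb_colliders):
    """limb_colliders: {part_name: (center, radius)} -> nearest hit part."""
    hits = []
    for name, (center, radius) in limb_colliders.items():
        t = ray_sphere_hit(origin, direction, center, radius)
        if t is not None:
            hits.append((t, name))
    # The limb part struck first along the flight direction is the hit part.
    return min(hits)[1] if hits else None
```

For example, if both a torso collider and a head collider lie on the extension ray but the torso is closer along the flight direction, the torso is taken as the actual hit part and its designated attribute value is modified preferentially.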
13. An apparatus for controlling a property value of a virtual object, the apparatus comprising:
a ray determination module, configured to, in response to a designated limb part of a virtual object being hit by a virtual bullet, perform ray extension on the trajectory of the virtual bullet and determine the resulting ray as an extension ray; the virtual object comprises at least two limb parts;
a collision obtaining module, configured to obtain collision situations between the extension ray and the at least two limb parts of the virtual object;
a part determination module, configured to determine an actual hit part according to the collision situations; the actual hit part is the limb part whose designated attribute value is modified preferentially;
and an attribute value modification module, configured to modify the respective designated attribute values of the at least two limb parts according to the actual hit part.
14. A computer device, comprising a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the virtual object attribute value control method according to any one of claims 1 to 12.
15. A computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the virtual object attribute value control method according to any one of claims 1 to 12.
CN202011103315.0A 2020-10-15 2020-10-15 Virtual object attribute value control method, computer device, and storage medium Active CN112138374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011103315.0A CN112138374B (en) 2020-10-15 2020-10-15 Virtual object attribute value control method, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011103315.0A CN112138374B (en) 2020-10-15 2020-10-15 Virtual object attribute value control method, computer device, and storage medium

Publications (2)

Publication Number Publication Date
CN112138374A true CN112138374A (en) 2020-12-29
CN112138374B CN112138374B (en) 2023-03-28

Family

ID=73951988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011103315.0A Active CN112138374B (en) 2020-10-15 2020-10-15 Virtual object attribute value control method, computer device, and storage medium

Country Status (1)

Country Link
CN (1) CN112138374B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101213002A (en) * 2005-06-28 2008-07-02 科乐美数码娱乐株式会社 Game system, control method thereof, game device, and program
US20170368460A1 (en) * 2016-06-28 2017-12-28 Hothead Games Inc. Systems and methods for customized camera views in virtualized environments
CN110523080A (en) * 2019-09-05 2019-12-03 腾讯科技(深圳)有限公司 Shooting display methods, device, equipment and storage medium based on virtual environment
CN111228812A (en) * 2020-01-08 2020-06-05 腾讯科技(深圳)有限公司 Virtual object control method, device, terminal and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Internet: "PUBG adds a bullet penetration effect that simulates real bullets: which body parts can a bullet penetrate? A new mode is coming soon", 《HTTPS://PAGE.OM.QQ.COM/PAGE/OIVU0OBBMHLQBZLMDOI4M2AG0》 *
Internet: "Escape from Tarkov: the body system and injury treatment", 《HTTPS://WWW.BILIBILI.COM/VIDEO/AV92904028?ZW》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113018860A (en) * 2021-04-15 2021-06-25 腾讯科技(深圳)有限公司 Picture display method and device, terminal equipment and storage medium
CN113713392A (en) * 2021-08-27 2021-11-30 腾讯科技(深圳)有限公司 Control method and device of virtual role, storage medium and electronic equipment
CN113713392B (en) * 2021-08-27 2023-07-14 腾讯科技(深圳)有限公司 Virtual character control method and device, storage medium and electronic equipment
WO2023231629A1 (en) * 2022-05-30 2023-12-07 腾讯科技(深圳)有限公司 Method and apparatus for displaying virtual object, and device, medium and program product

Also Published As

Publication number Publication date
CN112138374B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN110694261B (en) Method, terminal and storage medium for controlling virtual object to attack
CN110585710B (en) Interactive property control method, device, terminal and storage medium
US20220152501A1 (en) Virtual object control method and apparatus, device, and readable storage medium
US20230244373A1 (en) Method and apparatus for controlling virtual object to drop virtual item and medium
CN111589142A (en) Virtual object control method, device, equipment and medium
CN111659117B (en) Virtual object display method and device, computer equipment and storage medium
CN110755841A (en) Method, device and equipment for switching props in virtual environment and readable storage medium
CN110721469B (en) Method, terminal and medium for shielding virtual object in virtual environment
CN112138374B (en) Virtual object attribute value control method, computer device, and storage medium
CN112169325B (en) Virtual prop control method and device, computer equipment and storage medium
CN111744184B (en) Control showing method in virtual scene, computer equipment and storage medium
CN111589124A (en) Virtual object control method, device, terminal and storage medium
CN111330274B (en) Virtual object control method, device, equipment and storage medium
CN112076469A (en) Virtual object control method and device, storage medium and computer equipment
CN111282266B (en) Skill aiming method, device, terminal and storage medium in three-dimensional virtual environment
CN110755844B (en) Skill activation method and device, electronic equipment and storage medium
CN111744186A (en) Virtual object control method, device, equipment and storage medium
CN112138384A (en) Using method, device, terminal and storage medium of virtual throwing prop
CN111672102A (en) Virtual object control method, device, equipment and storage medium in virtual scene
CN113713382A (en) Virtual prop control method and device, computer equipment and storage medium
CN113509714A (en) Virtual item synthesis preview method, device, terminal and storage medium
CN112316421A (en) Equipment method, device, terminal and storage medium of virtual prop
CN112354180A (en) Method, device and equipment for updating integral in virtual scene and storage medium
CN111330277A (en) Virtual object control method, device, equipment and storage medium
CN112717410B (en) Virtual object control method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant