CN113041622B — Method, terminal and storage medium for throwing a virtual throwing object in a virtual environment

Info

Publication number
CN113041622B (application number CN202110439760.2A)
Authority
CN (China)
Prior art keywords
virtual, point, throwing, delivery, target
Legal status
Active (the legal status is an assumption and is not a legal conclusion)
Original language
Chinese (zh)
Other versions
CN113041622A
Inventor
黄晓权
Current and original assignee
Tencent Technology (Shenzhen) Co., Ltd.
Events
Application filed by Tencent Technology (Shenzhen) Co., Ltd.; priority to CN202110439760.2A
Publication of application CN113041622A; application granted; publication of CN113041622B

Classifications

    • A63F 13/573 — Video games: simulating properties, behaviour or motion of objects in the game world using trajectories of game objects, e.g. of a golf ball according to the point of impact
    • A63F 13/52 — Video games: controlling the output signals based on the game progress, involving aspects of the displayed game scene
    • A63F 13/537 — Video games: controlling the output signals based on the game progress, involving additional visual information provided to the game scene using indicators, e.g. showing the condition of a game character on screen


Abstract

Embodiments of this application disclose a method, terminal, and storage medium for throwing a virtual throwing object in a virtual environment, belonging to the field of computer technology. The method includes: in response to a trigger operation on a target skill control, controlling a first virtual object to hold a drop point marking prop; emitting a virtual ray through the drop point marking prop, and displaying a drop point in the virtual environment based on the virtual ray; and, in response to a throwing operation, controlling a virtual carrier to throw the virtual throwing object into the virtual environment based on the drop point. In the embodiments of this application, selecting the drop point with a virtual ray lets the user clearly determine its exact position in the virtual environment, and throwing with a virtual carrier improves throwing accuracy: the virtual throwing object no longer deviates from the intended drop point, as it can when the first virtual object is controlled to throw it directly, which in turn improves the practicality of the virtual throwing object.

Description

Method, terminal and storage medium for throwing a virtual throwing object in a virtual environment
Technical Field
Embodiments of this application relate to the field of computer technology, and in particular to a method, terminal, and storage medium for throwing a virtual throwing object in a virtual environment.
Background
A battle game is a game in which multiple user accounts compete in the same scene. A player can control a virtual object in the virtual environment to walk, run, climb, shoot, and so on; multiple players can team up online to cooperatively complete tasks in the same virtual environment, and a camp wins by defeating the opposing camp.
In the related art, to make it convenient for a player to control a virtual object to attack a distant enemy from long range, a throwing-type virtual prop with strong attack power, long range, and a large area of effect, such as a cluster bomb or a cluster missile, is usually provided. The player controls the virtual object to throw the cluster bomb to a location within the field of view, causing multiple consecutive explosions within the selected range, so that many opposing players can be defeated quickly.
However, in the related art, when a player throws such a virtual throwing object, the drop point is selected through a map control, or the virtual object is controlled directly to throw in a certain direction. As a result, the throw cannot accurately reach a distant enemy, and practicality is low.
Disclosure of Invention
Embodiments of this application provide a method, terminal, and storage medium for throwing a virtual throwing object in a virtual environment, which can improve throwing accuracy and the practicality of the virtual throwing object. The technical solution is as follows:
In one aspect, an embodiment of the present application provides a method for throwing a virtual throwing object in a virtual environment, where the method includes:
in response to a trigger operation on a target skill control, controlling a first virtual object to hold a drop point marking prop, where the drop point marking prop is used to mark a drop point of a virtual throwing object in the virtual environment, and the virtual throwing object is used to change attribute values of virtual objects within its range of action;
emitting a virtual ray through the drop point marking prop, and displaying the drop point in the virtual environment based on the virtual ray;
and in response to a throwing operation, controlling a virtual carrier to throw the virtual throwing object into the virtual environment based on the drop point.
In another aspect, an embodiment of the present application provides a device for throwing a virtual throwing object in a virtual environment, where the device includes:
a first control module, configured to control, in response to a trigger operation on a target skill control, a first virtual object to hold a drop point marking prop, where the drop point marking prop is used to mark a drop point of a virtual throwing object in the virtual environment, and the virtual throwing object is used to change attribute values of virtual objects within its range of action;
a first display module, configured to emit a virtual ray through the drop point marking prop and display the drop point in the virtual environment based on the virtual ray;
and a throwing module, configured to control, in response to a throwing operation, a virtual carrier to throw the virtual throwing object into the virtual environment based on the drop point.
In another aspect, an embodiment of this application provides a terminal, the terminal including a processor and a memory; the memory stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the method for throwing a virtual throwing object in a virtual environment described in the above aspect.
In another aspect, embodiments of this application provide a computer-readable storage medium storing at least one computer program, the computer program being loaded and executed by a processor to implement the method for throwing a virtual throwing object in a virtual environment described in the above aspect.
According to one aspect of this application, a computer program product or computer program is provided, the computer program product or computer program including computer instructions stored in a computer-readable storage medium. A processor of a terminal reads the computer instructions from the computer-readable storage medium and executes them, causing the terminal to implement the method for throwing a virtual throwing object in a virtual environment provided in the optional implementations of the above aspect.
The technical solutions provided in the embodiments of this application yield at least the following beneficial effects:
In the embodiments of this application, the user controls the first virtual object to release the target skill and, through a drop point marking prop that can emit a virtual ray, directly selects the drop point of the virtual throwing object in the virtual environment. Selecting the drop point with a virtual ray lets the user clearly determine its exact position in the virtual environment, and throwing with a virtual carrier improves throwing accuracy, preventing the virtual throwing object from deviating from the intended drop point as it can when the first virtual object throws it directly; this in turn improves the practicality of the virtual throwing object.
In addition, because the drop point is selected in the virtual environment with a drop point marking prop that emits a virtual ray, the throwing operation is more interactive and engaging than selecting the drop point in a map control.
Drawings
FIG. 1 is a schematic diagram of an implementation environment provided by an exemplary embodiment of this application;
FIG. 2 is a flowchart of a method for throwing a virtual throwing object in a virtual environment provided by an exemplary embodiment of this application;
FIG. 3 is a schematic diagram of triggering a target skill control provided by an exemplary embodiment of this application;
FIG. 4 is a flowchart of a method for throwing a virtual throwing object in a virtual environment provided by another exemplary embodiment of this application;
FIG. 5 is a schematic diagram of displaying an illegal-drop identifier provided by an exemplary embodiment of this application;
FIG. 6 is a schematic diagram of updating a drop point position provided by an exemplary embodiment of this application;
FIG. 7 is a schematic diagram of switching the viewing angle provided by an exemplary embodiment of this application;
FIG. 8 is a flowchart of a method for throwing a virtual throwing object in a virtual environment provided by another exemplary embodiment of this application;
FIG. 9 is a schematic diagram of delivering a virtual throwing object provided by an exemplary embodiment of this application;
FIG. 10 is a schematic diagram of a virtual throwing object taking effect provided by an exemplary embodiment of this application;
FIG. 11 is a flowchart of a method for throwing a virtual throwing object in a virtual environment provided by another exemplary embodiment of this application;
FIG. 12 is a schematic diagram of a prop equipment interface provided by an exemplary embodiment of this application;
FIG. 13 is a schematic diagram of switching the display state of the target skill control provided by an exemplary embodiment of this application;
FIG. 14 is a schematic diagram of hint information corresponding to a target virtual object provided by an exemplary embodiment of this application;
FIG. 15 is a block diagram of a device for throwing a virtual throwing object in a virtual environment provided by an exemplary embodiment of this application;
FIG. 16 is a block diagram of a terminal provided by an exemplary embodiment of this application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
References herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates an "or" relationship between the objects before and after it.
First, terms involved in the embodiments of the present application will be described:
1) Virtual environment
Refers to the virtual environment that an application displays (or provides) while running on a terminal. The virtual environment may be a simulation of the real world, a semi-simulated, semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual environment may be any of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment; the following embodiments use a three-dimensional virtual environment as an example, but are not limited thereto. Optionally, the virtual environment is also used for battles between at least two virtual characters. Optionally, the virtual environment contains virtual resources available to the at least two virtual characters.
2) Virtual object
Virtual objects are movable objects in a virtual scene. A movable object may be at least one of a virtual character, a virtual animal, and a cartoon character. When the virtual scene is three-dimensional, the virtual object may be a three-dimensional model. Each virtual object has its own shape and volume in the three-dimensional virtual scene and occupies part of its space. Optionally, a virtual character is a three-dimensional character built on three-dimensional human-skeleton technology, which presents different appearances by wearing different skins. In some implementations, the virtual character may also be implemented with a 2.5-dimensional or 2-dimensional model, which is not limited in the embodiments of this application.
3) Virtual prop
Virtual props are props that a virtual object can use in the virtual environment, including at least one of virtual weapons, functional props, and virtual equipment. In this application, a virtual prop illustratively refers to a virtual weapon used to change attribute values of virtual objects in the virtual environment. For example, virtual weapons include shooting-type virtual props such as pistols, rifles, sniper rifles, and bows; melee virtual props such as daggers and hammers; and throwing-type virtual props such as mines, missiles, flash grenades, and smoke grenades. A throwing-type virtual prop is one that a virtual object or another virtual carrier throws in a certain direction or at a certain location, and that takes effect after reaching the drop point or colliding.
4) User Interface (UI) control
Refers to any visual control or element that can be seen on the user interface of an application, such as pictures, input boxes, text boxes, buttons, and tabs; some UI controls respond to user operations.
The methods provided herein may be applied to virtual reality applications, three-dimensional map programs, military simulation programs, first-person shooter games, multiplayer online battle arena (MOBA) games, and the like. The following embodiments use application in games as an example.
Games based on virtual environments often consist of one or more maps of a game world. The virtual environment in the game simulates real-world scenes; the user can control a virtual object to walk, run, jump, shoot, fight, drive, switch between virtual props, use virtual props to damage other virtual objects, and so on in the virtual environment. To make it convenient for a player to control a virtual object to attack a distant enemy from long range, throwing-type props such as cluster bombs and cluster missiles are provided: the player controls the virtual object to throw the cluster bomb to a location within the field of view, causing multiple consecutive explosions within the selected range so that many opposing players can be defeated quickly. However, in the related art, when a player throws such a projectile, the drop point usually has to be selected in a map control, or the virtual object is controlled directly to throw in a certain direction, so the throw cannot accurately reach a distant enemy, and practicality is low.
To solve the above technical problem, an embodiment of this application provides a method for throwing a virtual throwing object in a virtual environment, which adds a target skill to the first virtual object. When the terminal receives a trigger operation on the target skill control, it controls the first virtual object to hold a drop point marking prop, which can emit a virtual ray in the virtual environment. The terminal determines the position of the drop point based on the virtual ray and displays the virtual ray and the drop point in the virtual environment picture, so that the user can intuitively determine the position of the drop point and can change the drop point by controlling the first virtual object to adjust the drop point marking prop. When a throwing operation is received, a virtual carrier is controlled to throw the virtual throwing object based on the drop point, improving throwing accuracy.
FIG. 1 illustrates a schematic diagram of an implementation environment provided by one embodiment of the present application. The implementation environment may include: a first terminal 110, a server 120, and a second terminal 130.
The first terminal 110 has an application 111 supporting a virtual environment installed and running; when the first terminal runs the application 111, a user interface of the application 111 is displayed on the screen of the first terminal 110. The application 111 may be any of a military simulation program, a MOBA game, a battle royale game, and a simulation strategy game (SLG). In this embodiment, the application 111 is exemplified as a role-playing game (RPG). The first terminal 110 is used by the first user 112, who uses it to control a first virtual object located in the virtual environment; this first virtual object may be referred to as the master virtual object of the first user 112. Activities of the first virtual object include, but are not limited to: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up, shooting, attacking, throwing, and releasing skills. Illustratively, the first virtual object is a first virtual character, such as a simulated character or a cartoon character.
The second terminal 130 has an application 131 supporting a virtual environment installed and running; when the second terminal 130 runs the application 131, a user interface of the application 131 is displayed on the screen of the second terminal 130. The application 131 may be any of a military simulation program, a MOBA game, a battle royale game, and an SLG; in this embodiment, the application 131 is exemplified as an RPG. The second terminal 130 is used by the second user 132, who uses it to control a second virtual object located in the virtual environment; this second virtual object may be referred to as the master virtual object of the second user 132. Illustratively, the second virtual object is a second virtual character, such as a simulated character or a cartoon character.
Optionally, the first virtual object and the second virtual object are in the same virtual world. Optionally, they may belong to the same camp, the same team, or the same organization, have a friend relationship, or have temporary communication rights; alternatively, they may belong to different camps, different teams, or different organizations, or have a hostile relationship. This embodiment is described with the first and second virtual objects belonging to the same camp as an example.
Optionally, the applications installed on the first terminal 110 and the second terminal 130 are the same, or are the same type of application on different operating system platforms (Android or iOS). The first terminal 110 may refer broadly to one of multiple terminals and the second terminal 130 to another; this embodiment is illustrated with only the first terminal 110 and the second terminal 130. The device types of the two terminals are the same or different and include at least one of a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, a laptop computer, and a desktop computer.
Only two terminals are shown in fig. 1, but in different embodiments many other terminals can access the server 120. Optionally, one or more terminals correspond to the developer; a development and editing platform for the application supporting the virtual environment is installed on such a terminal, where the developer can edit and update the application and transmit the updated application installation package to the server 120 through a wired or wireless network. The first terminal 110 and the second terminal 130 can download the installation package from the server 120 to update the application.
The first terminal 110, the second terminal 130, and other terminals are connected to the server 120 through a wireless network or a wired network.
The server 120 includes at least one of a server, a server cluster composed of multiple servers, a cloud computing platform, and a virtualization center. The server 120 provides background services for applications supporting a three-dimensional virtual environment. Optionally, the server 120 performs the primary computing work and the terminals the secondary computing work; or the server 120 performs the secondary computing work and the terminals the primary computing work; or the server 120 and the terminals compute collaboratively using a distributed computing architecture.
In one illustrative example, the server 120 includes a memory 121, a processor 122, a user account database 123, a combat service module 124, and a user-oriented input/output interface (I/O interface) 125. The processor 122 loads instructions stored in the server 120 and processes data in the user account database 123 and the combat service module 124; the user account database 123 stores data of the user accounts used by the first terminal 110, the second terminal 130, and other terminals, such as the avatar, nickname, and combat index of each user account and the region where the account is located; the combat service module 124 provides multiple combat rooms, such as 1v1, 3v3, and 5v5 battles, for users to fight in; and the user-oriented I/O interface 125 establishes communication with the first terminal 110 and/or the second terminal 130 through a wireless or wired network to exchange data.
Fig. 2 is a flowchart of a method for throwing a virtual throwing object in a virtual environment according to an exemplary embodiment of this application. This embodiment is described using the example in which the method is performed by the first terminal 110 or the second terminal 130 in the implementation environment shown in fig. 1, or by another terminal in that implementation environment. The method includes the following steps:
In step 201, in response to a trigger operation on a target skill control, a first virtual object is controlled to hold a drop point marking prop, the drop point marking prop being used to mark a drop point of a virtual throwing object in the virtual environment, and the virtual throwing object being used to change attribute values of virtual objects within its range of action.
The target skill control is the trigger control corresponding to the target skill. When a trigger operation on the target skill control is received, the terminal controls the first virtual object to use the target skill, that is, to hold the drop point marking prop; if the first virtual object is holding another virtual prop when the trigger operation is received, it switches from that prop to the drop point marking prop.
Optionally, the trigger operation on the target skill control includes a tap, a long press, a drag, or a voice-control operation on the UI control.
The drop point marking prop is a virtual prop for selecting the drop point of a virtual throwing object in the virtual environment. The user controls the first virtual object to hold the drop point marking prop, so that the first virtual object can select a drop point in the virtual environment.
A virtual throwing object is a throwing-type virtual prop used to change attribute values of virtual objects within its range of action, such as a grenade, bomb, or missile; it can be thrown directly by a virtual object or delivered by a virtual carrier.
Illustratively, as shown in fig. 3, a target skill control 301 is displayed in the virtual environment interface (i.e., the game interface). When a trigger operation on the target skill control 301 is received, the terminal controls the first virtual object to put away the currently used virtual prop 302 and, using the target skill, hold the drop point marking prop 303.
In step 202, a virtual ray is emitted through the drop point marking prop, and the drop point is displayed in the virtual environment based on the virtual ray.
In one possible implementation, since the virtual throwing object is a long-range attack prop, the user typically uses it to attack distant virtual objects and therefore needs to set the drop point at a location far from the first virtual object. To make it easy for the user to control the first virtual object to select a distant drop point, the drop point marking prop can emit a virtual ray (for example, a virtual light beam), and the terminal displays the drop point in the virtual environment based on the virtual ray. The user can select the drop point by observing the position and orientation of the virtual ray in the virtual environment, which makes the operation more intuitive and convenient.
As shown in fig. 3, the drop point marking prop 303 emits a virtual ray (shown in dashed lines) that is visible in the interface and may be rendered as a beam of light.
In step 203, in response to a throwing operation, the virtual carrier is controlled to throw the virtual throwing object into the virtual environment based on the drop point.
When the throwing operation is received, the terminal determines that the virtual throwing object is to be thrown at the current drop point, and therefore controls the virtual carrier to throw the virtual throwing object into the virtual environment based on the drop point. The virtual carrier is a carrier that delivers the virtual throwing object into the virtual environment, such as a virtual aircraft, a virtual parachute, or a virtual tank.
For example, when the terminal receives a trigger operation on the throw control 304, it controls the virtual carrier to throw a virtual throwing object into the virtual environment.
Optionally, the terminal controls the virtual carrier to throw the virtual throwing object once based on the drop point; or the terminal controls the virtual carrier to throw the virtual throwing object n times in succession, once every preset time interval, where n is a positive integer.
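By way of illustration only, a minimal sketch of the single-throw and repeated-throw options above; the function names, the blocking `time.sleep`, and the example values are assumptions for the sketch, not part of the claimed implementation:

```python
import time

def deliver(deliver_once, n: int = 1, interval: float = 0.0):
    """Ask the virtual carrier to throw n times, pausing `interval`
    seconds between consecutive throws; n == 1 is a single throw."""
    for i in range(n):
        deliver_once()
        if i < n - 1:
            time.sleep(interval)

# Example: three successive throws, one every 2 seconds.
deliver(lambda: print("carrier releases one virtual throwing object"), n=3, interval=2.0)
```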
Optionally, the throwing operation is a trigger operation on the throw control; or a second trigger operation on the target skill control; or another touch operation in the virtual environment interface (e.g., a tap or a swipe), which is not limited in the embodiments of this application.
In summary, in the embodiments of this application, the user controls the first virtual object to release the target skill and, through a drop point marking prop that can emit a virtual ray, directly selects the drop point of the virtual throwing object in the virtual environment. Selecting the drop point with the virtual ray lets the user clearly determine its exact position in the virtual environment, and throwing with a virtual carrier improves throwing accuracy, preventing the virtual throwing object from deviating from the intended drop point as it can when the first virtual object throws it directly; this in turn improves the practicality of the virtual throwing object.
In addition, because the drop point is selected in the virtual environment with a drop point marking prop that emits a virtual ray, the throwing operation is more interactive and engaging than selecting the drop point in a map control.
In one possible implementation, the terminal determines and displays the drop point based on the virtual ray emitted by the drop point marking prop, and the user can change the drop point by adjusting the prop. Referring to fig. 4, a flowchart of a method for throwing a virtual throwing object in a virtual environment according to another exemplary embodiment of this application is shown. This embodiment is described using the example in which the method is performed by the first terminal 110 or the second terminal 130 in the implementation environment shown in fig. 1, or by another terminal in that implementation environment. The method includes the following steps:
In step 401, in response to a trigger operation on a target skill control, a first virtual object is controlled to hold a drop point marking prop, the drop point marking prop being used to mark a drop point of a virtual throwing object in the virtual environment, and the virtual throwing object being used to change attribute values of virtual objects within its range of action.
For a specific implementation of step 401, reference may be made to step 201 above; details are not repeated here.
In step 402, a virtual ray is emitted through the drop point marking prop, the direction of the virtual ray being consistent with the orientation of the drop point marking prop.
In one possible implementation, to make it easy for the user to control the virtual ray by controlling the first virtual object holding the drop point marking prop, and to improve the realism of the interaction, the terminal keeps the direction of the virtual ray in the virtual environment consistent with the orientation of the drop point marking prop. If the orientation of the prop changes, the direction of the virtual ray changes with it, so the user can select the drop point by steering the prop.
For example, the drop point marking prop is displayed as a preset prop model, such as a flashlight or a laser pointer. As shown in fig. 3, the drop point marking prop 303 is a virtual flashlight, the virtual ray is the virtual beam it emits, and the virtual ray and the drop point marking prop 303 share the same direction.
In step 403, the intersection point of the virtual ray and a virtual object in the virtual environment is determined as the drop point, and a drop point identifier is displayed at the drop point.
In the virtual environment, the virtual ray emitted by the drop point marking prop intersects a virtual object (such as the ground, a wall, a tree, or a virtual character), forming an intersection point. The terminal determines this intersection point as the drop point and displays a drop point identifier there, prompting the user with the current position of the drop point so the user can decide whether it needs adjusting.
Illustratively, as shown in fig. 3, the virtual ray from the drop point marking prop 303 intersects the ground of the virtual environment, and the terminal displays a drop point identifier 305 at the intersection.
Since the range observable by the human eye in the real world is limited, to improve the realism of throwing, the first virtual object can only select a drop point within a preset range. In one possible implementation, step 403 includes the following step:
in response to the distance between the intersection point and the position of the first virtual object being smaller than a first distance threshold, determining the intersection point as the drop point and displaying the drop point identifier at the drop point.
The terminal determines, in real time, the distance between the intersection point of the virtual ray with the virtual object and the position of the first virtual object; this distance is measured in the virtual environment. If the distance is smaller than a first distance threshold (for example, 100 meters), the terminal displays the drop point identifier at the intersection point to indicate that the virtual throwing object can currently be thrown at that point.
When the distance between the intersection point and the position of the first virtual object exceeds the first distance threshold, the first virtual object cannot throw the virtual throwing object at the intersection point, and the method further includes the following step:
displaying an illegal-drop identifier at the intersection point in response to the distance between the intersection point and the position of the first virtual object being greater than the first distance threshold.
As shown in fig. 5, when the terminal determines that the distance between the intersection point and the position of the first virtual object is greater than the first distance threshold (100 meters), it displays an illegal-drop identifier 501 at the intersection point to prompt the user that the virtual throwing object cannot be thrown there. In addition, the terminal can display hint information 502 in the virtual environment interface, prompting the user to adjust the virtual ray and change the position of the intersection point.
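The following sketch combines step 403 with the distance check above: cast a ray from the prop, take its first intersection with the scene, then compare the distance to the first virtual object against the first distance threshold. The `Vec3` type, the `raycast` helper, and the 100-meter value are illustrative assumptions, not the patent's implementation:

```python
import math
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float
    def dist(self, o: "Vec3") -> float:
        return math.dist((self.x, self.y, self.z), (o.x, o.y, o.z))

FIRST_DISTANCE_THRESHOLD = 100.0  # meters in the virtual environment (example value)

def update_drop_point(prop_pos: Vec3, prop_dir: Vec3, player_pos: Vec3, raycast):
    """`raycast(origin, direction)` is assumed to return the first intersection
    of the virtual ray with scene geometry (ground, wall, tree, ...) or None."""
    hit = raycast(prop_pos, prop_dir)
    if hit is None:
        return None, "no_intersection"
    if hit.dist(player_pos) < FIRST_DISTANCE_THRESHOLD:
        return hit, "show_drop_point_identifier"    # legal drop point
    return hit, "show_illegal_drop_identifier"      # too far: prompt user to adjust
```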
In step 404, a control operation on the drop point marking prop is received, and the orientation of the drop point marking prop is adjusted.
Optionally, the user drags the arm of the first virtual object or the drop point marking prop, and the terminal controls the first virtual object to change the prop's orientation based on the drag operation; or the display position of the prop relative to the screen stays unchanged and the user swipes the virtual environment interface, so that the terminal changes the virtual environment within the first virtual object's field of view based on the swipe distance and direction, thereby changing the prop's orientation in the virtual environment. The embodiments of this application are not limited in this regard.
Illustratively, when the user swipes upward in the virtual environment interface, the virtual ray turns toward the vertical downward direction, i.e., the intersection point moves closer to the first virtual object; when the user swipes downward, the virtual ray turns toward the horizontal forward direction, i.e., the intersection point moves away from the first virtual object.
In step 405, the position of the drop point in the virtual environment is updated based on the adjusted orientation of the drop point marking prop.
After receiving the control operation, the terminal updates the position of the drop point based on the orientation of the drop point marking prop.
In one possible implementation, the user can also control the first virtual object to move in the virtual environment through the terminal, thereby changing the position of the drop point. As shown in fig. 6, the first virtual object releases the target skill at a position far from the virtual building 603, holds the drop point marking prop, and the drop point identifier 602 is displayed at the corresponding drop point; when a trigger operation on the movement control 601 is received, the first virtual object is controlled to move forward toward the virtual building 603, and the position of the drop point is updated based on the change in the first virtual object's position.
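Steps 404 and 405 amount to re-running the same ray cast whenever the prop's orientation or the first virtual object's position changes. A sketch, reusing the hypothetical `Vec3` and `update_drop_point` helpers above; the drag-to-pitch mapping, the `prop_state` fields, and all constants are assumptions:

```python
import math

SENSITIVITY = 0.2  # degrees of pitch per pixel of drag (example value)

def clamp(v: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, v))

def direction_from_pitch_yaw(pitch_deg: float, yaw_deg: float) -> Vec3:
    p, y = math.radians(pitch_deg), math.radians(yaw_deg)
    return Vec3(math.cos(p) * math.cos(y), math.sin(p), math.cos(p) * math.sin(y))

def on_slide(delta_y: float, prop_state, player_pos: Vec3, raycast):
    """Re-aim the drop point marking prop from a vertical swipe (delta_y > 0 is
    assumed to mean an upward swipe), then recompute the drop point with the
    same ray cast used above. Swiping up pitches the ray toward vertical
    downward, so the intersection point moves closer to the first virtual object."""
    prop_state.pitch = clamp(prop_state.pitch - delta_y * SENSITIVITY, -89.0, 0.0)
    prop_dir = direction_from_pitch_yaw(prop_state.pitch, prop_state.yaw)
    return update_drop_point(prop_state.position, prop_dir, player_pos, raycast)
```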
In step 406, in response to the throwing operation, the virtual carrier is controlled to throw the virtual throwing object into the virtual environment based on the drop point.
For a specific implementation of step 406, reference may be made to step 203; details are not repeated here.
In the embodiments of this application, the drop point is determined from the intersection of the virtual ray and a virtual object, the drop point identifier is displayed at the drop point in real time, and the drop point's position in the virtual environment is updated promptly in response to control operations on the drop point marking prop. This makes it easy for the user to locate the drop point by observing the virtual environment interface and to judge whether adjustment is needed.
In one possible implementation, to make it easier for the user to select a drop point without the view being blocked, when the terminal receives the trigger operation on the target skill control while displaying a third-person perspective, the method for throwing a virtual throwing object in a virtual environment in this embodiment further includes the following step:
in response to the trigger operation on the target skill control, switching the virtual environment picture from the third-person perspective to the first-person perspective.
In one possible implementation, the terminal displays the virtual environment through a virtual environment screen. Alternatively, the virtual environment screen is a screen in which the virtual environment is observed at the perspective of the virtual object. The angle of view refers to an observation angle at which a first person or a third person of the virtual object observes in the virtual environment. Optionally, in an embodiment of the present application, the perspective is an angle at which the virtual object is observed by the camera model in the virtual environment.
Optionally, the camera model automatically follows the virtual object in the virtual environment, that is, when the position of the virtual object in the virtual environment changes, the camera model simultaneously changes along with the position of the virtual object in the virtual environment, and the camera model is always within the preset distance range of the virtual object in the virtual environment. Optionally, the relative positions of the camera model and the virtual object do not change during the automatic following process.
The camera model refers to a three-dimensional model located around the virtual object in the virtual environment, which is located near or at the head of the virtual object when the first person perspective is employed; when a third person viewing angle is adopted, the camera model can be located behind the virtual object and bound with the virtual object, and can also be located at any position with a preset distance from the virtual object, and the virtual object in the virtual environment can be observed from different angles through the camera model. Optionally, the viewing angle includes other viewing angles, such as a top view, in addition to the first-person viewing angle and the third-person viewing angle; when a top view is employed, the camera model may be located above the head of the virtual object, the top view being a view of the virtual environment from an overhead view. Optionally, the camera model is not actually displayed in the virtual environment, i.e. the camera model is not displayed in the virtual environment of the user interface display. Describing the camera model as being located at any position at a preset distance from the virtual object, optionally, one virtual object corresponds to one camera model, and the camera model may rotate with the virtual object as a rotation center, for example: the camera model is rotated by taking any point of the virtual object as a rotation center, the camera model not only rotates in angle, but also shifts in displacement in the rotation process, and the distance between the camera model and the rotation center is kept unchanged during rotation, namely, the camera model is rotated on the surface of a sphere taking the rotation center as a sphere center, wherein any point of the virtual object can be any point of the head, the trunk or the periphery of the virtual object, and the embodiment of the application is not limited. Optionally, when the camera model observes the virtual object, the center of the view angle of the camera model points in the direction of the center of sphere, where the point of the sphere where the camera model is located points to the center of sphere.
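The constant-distance rotation just described is a spherical orbit around the rotation center. A minimal sketch of the position computation under assumed conventions (y-up, angles in degrees), reusing the hypothetical `Vec3` type above:

```python
import math

def orbit_camera_position(center: Vec3, distance: float,
                          yaw_deg: float, pitch_deg: float) -> Vec3:
    """Place the camera on a sphere of radius `distance` around `center`
    (the rotation center); the camera then looks from this point toward
    `center`, matching the 'points at the sphere center' behavior above."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return Vec3(
        center.x + distance * math.cos(pitch) * math.cos(yaw),
        center.y + distance * math.sin(pitch),
        center.z + distance * math.cos(pitch) * math.sin(yaw),
    )
```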
As shown in fig. 7, the terminal displays the virtual environment picture from the third-person perspective. When a trigger operation on the target skill control 701 is received, the terminal switches from the third-person to the first-person perspective and moves the camera model's lens closer, letting the user select a drop point from the first-person perspective. This prevents the virtual ray from being blocked by the first virtual object and simulates the experience of controlling a ray through a prop in a real environment, improving interactivity and realism.
In one possible implementation, the embodiments of this application provide at least two delivery modes with different delivery precision, and the terminal determines the target delivery mode based on the distance to the drop point, so that the delivery of the virtual throwing object fits reality more closely. Fig. 8 is a flowchart of a method for throwing a virtual throwing object in a virtual environment according to another exemplary embodiment of this application. This embodiment is described using the example in which the method is performed by the first terminal 110 or the second terminal 130 in the implementation environment shown in fig. 1, or by another terminal in that implementation environment. The method includes the following steps:
In step 801, in response to a trigger operation on a target skill control, a first virtual object is controlled to hold a drop point marking prop, the drop point marking prop being used to mark a drop point of a virtual throwing object in the virtual environment, and the virtual throwing object being used to change attribute values of virtual objects within its range of action.
In step 802, a virtual ray is emitted through the drop point marking prop, and the drop point is displayed in the virtual environment based on the virtual ray.
For specific implementations of steps 801 and 802, reference may be made to steps 201 and 202 above; details are not repeated here.
In step 803, in response to the throwing operation, a target delivery mode is determined based on the drop point, different delivery modes corresponding to different delivery precision.
The delivery distance is the distance in the virtual environment between the drop point and the position of the first virtual object. In the embodiments of this application, the terminal determines the delivery mode of the virtual throwing object based on the delivery distance: for example, when the delivery distance is smaller than a preset distance, the terminal selects the delivery mode with high precision as the target delivery mode, and when the delivery distance is greater than the preset distance, it selects the delivery mode with low precision.
In one possible implementation, the terminal determines the target delivery mode according to the correspondence between delivery distance and delivery mode, and step 803 includes the following steps:
In step 803a, in response to the delivery distance being smaller than a second distance threshold, the first delivery mode is determined as the target delivery mode, the delivery distance being the distance between the drop point and the position of the first virtual object.
In step 803b, in response to the delivery distance being greater than the second distance threshold, the second delivery mode is determined as the target delivery mode.
The delivery precision of the first delivery mode is higher than that of the second delivery mode. Delivery precision relates to the deviation between the landing position of the virtual throwing object in the virtual environment and the drop point when the virtual carrier throws it: the smaller the deviation, or the smaller the probability of deviation, the higher the delivery precision.
In one possible implementation, the delivery modes in the embodiments of this application include a first delivery mode and a second delivery mode. Optionally, the first delivery mode is precise delivery with the drop point as the delivery center point, and the second delivery mode is ranged delivery in which a delivery area is determined based on the drop point and the delivery distance; or both modes are ranged delivery based on the drop point and the delivery distance, with the delivery area of the first mode smaller than that of the second. The delivery center point is the reference point used when the virtual carrier throws the virtual throwing object. For example, when one virtual throwing object is thrown, the delivery center point is its point of action; when two are thrown, it is the midpoint of the line connecting their points of action; when three are thrown, the lines connecting their points of action form an equilateral triangle, and the delivery center point is the center of that triangle.
In step 804, a delivery center point is determined based on the drop point and the target delivery mode.
When the terminal receives the throwing operation, it determines a delivery center point in the virtual environment based on the target delivery mode and the drop point. In one possible implementation, step 804 includes steps 804a to 804b, or steps 804c to 804d:
In step 804a, in response to the target delivery mode being the first delivery mode, a first delivery area is determined based on the drop point, the first delivery area being an area centered on the drop point.
In step 804b, the delivery center point is determined from the first delivery area.
In one possible implementation, the area and shape of the first delivery area are fixed: when the delivery distance is smaller than the second distance threshold, they remain the same regardless of the delivery distance.
Illustratively, the second distance threshold is 50 meters. When the delivery distance is smaller than 50 meters, the terminal determines a circular area centered on the drop point with a radius of 5 meters as the first delivery area, and randomly selects a point from it as the delivery center point.
In step 804c, in response to the target delivery mode being the second delivery mode, a second delivery area is determined based on the drop point and the delivery distance, the second delivery area being centered on the drop point and larger than the first delivery area, with its area positively correlated with the delivery distance.
In step 804d, the delivery center point is determined from the second delivery area.
In one possible implementation, when the target delivery mode is the second delivery mode, the farther the delivery distance, the larger the area of the second delivery area.
Illustratively, the second delivery area is a circular area centered on the drop point, whose radius is computed as r0 + (s - 50) / a, where r0 is the radius of the first delivery area, s is the delivery distance, and a is a preset attenuation coefficient.
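Putting steps 803 and 804 together as a sketch: the mode follows from the second distance threshold, the delivery-area radius follows from the formula above, and the delivery center point is sampled from the resulting disc. The 50-meter threshold and 5-meter radius are the example values from the text; the attenuation coefficient, the horizontal-disc assumption, and the uniform sampling method are assumptions. Reuses the hypothetical `Vec3` above:

```python
import math
import random

SECOND_DISTANCE_THRESHOLD = 50.0  # meters (example value from the text)
R0 = 5.0                          # radius of the first delivery area (example value)
A = 10.0                          # preset attenuation coefficient (illustrative)

def delivery_radius(s: float) -> float:
    """First delivery mode: fixed radius r0, independent of s.
    Second delivery mode: r0 + (s - 50) / a, growing with the delivery distance s."""
    if s < SECOND_DISTANCE_THRESHOLD:
        return R0
    return R0 + (s - SECOND_DISTANCE_THRESHOLD) / A

def delivery_center(drop_point: Vec3, player_pos: Vec3) -> Vec3:
    s = drop_point.dist(player_pos)
    r = delivery_radius(s)
    # Uniform sample in a horizontal disc around the drop point
    # (sqrt keeps the density uniform over the disc's area).
    t = random.uniform(0.0, 2.0 * math.pi)
    d = r * math.sqrt(random.random())
    return Vec3(drop_point.x + d * math.cos(t), drop_point.y,
                drop_point.z + d * math.sin(t))
```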
In one possible implementation, to help other virtual objects learn in time that a virtual throwing object is about to be delivered into the virtual environment, after determining the delivery center point the terminal displays a notification message in the global message display area. The notification message includes the position of the delivery center point, the position of the first virtual object, the time remaining until the virtual carrier throws the virtual throwing object, and so on, and is globally visible. Other virtual objects can reduce damage by lying prone or moving away from the delivery center point, avoiding a rapid drop in their attribute values.
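The globally visible notification could be a simple broadcast record; the field names below are assumptions for the sketch, reusing the hypothetical `Vec3` above:

```python
from dataclasses import dataclass

@dataclass
class DeliveryNotice:
    """Broadcast to all players once the delivery center point is fixed."""
    center_point: Vec3         # where the virtual carrier will aim
    thrower_pos: Vec3          # position of the first virtual object
    seconds_until_drop: float  # countdown until the carrier releases the projectile
```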
In step 805, the virtual carrier is controlled to throw the virtual throwing object into the virtual environment based on the delivery center point.
As shown in fig. 9, when a throwing operation is received, the terminal determines a delivery area 901 based on the target delivery mode and the position of the drop point, determines the delivery center point from the delivery area 901, and displays a center point identifier 902 corresponding to it in the virtual environment picture together with the distance between the delivery center point and the first virtual object. After a preset period, the terminal controls the virtual carrier to throw the virtual throwing object 903 into the virtual environment based on the delivery center point.
In step 806, in response to the virtual throwing object taking effect, m sub virtual throwing objects are generated based on the virtual throwing object, m being a positive integer.
In one possible implementation, the virtual throwing object bounces when it collides with a virtual object other than the ground, until it lands; after landing, a countdown starts, and the object takes effect after a preset period (for example, 2 s). When the virtual throwing object takes effect, a corresponding special effect (such as a bomb explosion) is displayed in the virtual environment picture.
In one possible implementation, prop fragments are generated after the virtual throwing object takes effect; the terminal treats the prop fragments as sub virtual throwing objects and controls them to be thrown a second time. For example, the virtual throwing object is a bomb, and the m bomb fragments generated after the terminal controls the bomb to explode based on the delivery center point are determined as the sub virtual throwing objects.
Optionally, the number of sub virtual throwing objects generated by the virtual throwing object is fixed, that is, m is a fixed number; alternatively, the terminal randomly determines the value of m within a certain range, so the number generated after each virtual throwing object takes effect may differ, which is not limited in the embodiments of this application.
In step 807, the m sub virtual throwing objects are controlled to move along preset trajectories.
In one possible implementation, the terminal controls each of the m sub virtual throwing objects to move along a corresponding preset trajectory. Optionally, the preset trajectory is fixed, i.e., the sub virtual throwing objects move along the same trajectories each time a virtual throwing object takes effect; or the terminal randomly selects m preset trajectories from a pool of preset trajectories and controls the sub virtual throwing objects to move along them. Illustratively, the preset trajectory is a parabola starting from the point where the sub virtual throwing object is generated.
In step 808, in response to the collision of the virtual projectile during the movement, the virtual projectile is controlled to take effect at the collision point, the virtual projectile being configured to change the value of the attribute of the virtual object within the range of action.
The terminal controls the virtual throwing object to move along the preset track, and if the virtual throwing object collides in the moving process, such as collision with a virtual building, a virtual object or the ground in a virtual environment, the virtual throwing object is controlled to immediately take effect at a collision point. In one possible implementation manner, the action range of the sub virtual throwing object is smaller than that of the virtual throwing object, for example, the virtual throwing object is a mine, the action range is a spherical area with a landing point as a center and a radius of 20m, when the virtual throwing object is effective, attribute values of virtual objects in the area are changed, mine fragments generated after the mine explosion move according to a preset track and are secondarily exploded after the impact, and the action range is a spherical area with a radius of 10m and a collision point as a center.
As shown in fig. 10, after the virtual projectile takes effect at the actual drop point 1001, the terminal displays the special effect of the virtual projectile in the virtual environment picture and generates the sub virtual projectiles 1002. The terminal controls a sub virtual projectile 1002 to move along its preset trajectory and, after it collides at the collision point 1003, controls it to take effect at the collision point 1003.
In the embodiments of the present application, the target drop mode is determined based on the distance between the drop point and the first virtual object, so that the drop precision of the virtual projectile varies with the drop distance; this makes the dropping of virtual projectiles better fit reality and improves the realism of throwing virtual projectiles in the virtual environment. In addition, after the virtual projectile takes effect, sub virtual projectiles are generated and take effect in turn, which mitigates the problem that, owing to the time difference between the user selecting the drop point and the virtual projectile taking effect, the virtual projectile may fail to act on other virtual objects even when the user's throw is accurate.
The above embodiments illustrate the process in which, after the first virtual object releases the target skill, a drop point is selected with the drop point marking prop and the virtual carrier then drops the virtual projectile. Fig. 11 is a flowchart of a method for throwing a virtual projectile in a virtual environment according to another exemplary embodiment of the present application. This embodiment is described by taking as an example that the method is performed by the first terminal 110, the second terminal 130, or another terminal in the implementation environment shown in fig. 1, and the method includes the following steps:
Step 1101, receiving a selection operation on a target skill prop among at least one skill prop, and equipping the first virtual object with the target skill prop, the target skill prop being used to add a target skill to the virtual object.
The terminal displays a skill prop equipment interface before a match is entered, or displays the skill prop equipment interface when the match has started and the first virtual object is not attacking the second virtual object.
Optionally, the first virtual object can be equipped with a predetermined number of skill props at a time in a single match. Once the first virtual object is equipped with a skill prop, it retains the attribute or skill of that skill prop for the remainder of the current match.
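A minimal inventory sketch of this rule; the per-match limit and the function name are assumptions made for illustration:

```python
MAX_EQUIPPED_SKILL_PROPS = 2  # assumed per-match limit

def equip_skill_prop(equipped: list, prop_id: str) -> bool:
    """Equip a skill prop if the per-match limit has not been reached;
    once equipped, the prop's skill persists for the rest of the match."""
    if prop_id in equipped or len(equipped) >= MAX_EQUIPPED_SKILL_PROPS:
        return False
    equipped.append(prop_id)
    return True
```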
Referring to fig. 12, a schematic diagram of the skill prop equipment interface is shown. The skill prop equipment interface includes a skill prop selection column 1201 containing the skill props owned by the account currently controlling the first virtual object. When the terminal receives a triggering operation on the target selection control 1202 corresponding to the target skill prop, the target skill prop 1203 and its prop introduction are displayed in the skill prop equipment interface; when a triggering operation on the equipment control 1204 is received, it is determined that the selection operation on the target skill prop has been received, and an equipping instruction is sent to the server to control the first virtual object to equip the target skill prop 1203.
Step 1102, displaying a target skill control corresponding to the target skill through a virtual environment interface.
In one possible implementation, the terminal displays the target skill control in the virtual environment interface when the first virtual object is equipped with the target skill prop, and does not display the target skill control when the first virtual object is not equipped with the target skill prop.
In one possible implementation, step 1102 includes the steps of:
in step 1102a, a target skill control in a non-triggerable state is displayed through a virtual environment interface.
In one possible embodiment, the target skill has a skill cooling duration. After the match starts, the target skill is in the skill cooling state and, accordingly, the target skill control is in the non-triggerable state; as shown in fig. 13, the target skill control 1301 in the non-triggerable state cannot receive or respond to a triggering operation.
In step 1102b, in response to the skill cooling duration being reached, the display state of the target skill control is switched from the non-triggerable state to the triggerable state.
When the skill cooling duration is reached, the target skill exits the skill cooling state; at this time, the terminal switches the display state of the target skill control from the non-triggerable state to the triggerable state. As shown in fig. 13, the target skill control 1301 in the triggerable state can receive a triggering operation.
In step 1103, in response to the triggering operation of the target skill control in the triggerable state, the first virtual object is controlled to hold the drop point marking prop.
In one possible implementation, when the terminal receives a triggering operation on the target skill control in the triggerable state, it determines that the first virtual object releases the target skill and controls the first virtual object to hold the drop point marking prop. At this point the target skill re-enters the skill cooling state, and the terminal switches the display state of the target skill control from the triggerable state back to the non-triggerable state until the skill cooling duration is reached again. The skill cooling duration may be fixed throughout the match, or may decrease as the number of times the target skill has been released increases.
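The cooldown logic of steps 1102 and 1103 can be captured in a small state holder. The numbers below (base cooldown, per-release reduction, floor) are illustrative assumptions; the patent only says the duration is either fixed or decreases with the release count:

```python
import time

class TargetSkillControl:
    """Tracks whether the target skill control is triggerable, with a cooling
    duration that may shrink as the skill is released more times."""

    def __init__(self, base_cooldown=30.0, reduction_per_release=2.0, floor=10.0):
        self.base_cooldown = base_cooldown
        self.reduction = reduction_per_release
        self.floor = floor
        self.releases = 0
        # the skill starts in the cooling (non-triggerable) state when the match begins
        self.ready_at = time.monotonic() + base_cooldown

    def triggerable(self) -> bool:
        return time.monotonic() >= self.ready_at

    def trigger(self) -> bool:
        """Release the skill if triggerable and re-enter the cooling state."""
        if not self.triggerable():
            return False  # control is displayed as non-triggerable; ignore the tap
        self.releases += 1
        cooldown = max(self.floor,
                       self.base_cooldown - self.reduction * self.releases)
        self.ready_at = time.monotonic() + cooldown
        return True
```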
In step 1104, a virtual ray is emitted through the drop point marking prop, and the drop point is displayed in the virtual environment based on the virtual ray.
In step 1105, in response to the throwing operation, the virtual vehicle is controlled to throw the virtual throwing object into the virtual environment based on the throwing point.
For specific implementations of steps 1104 to 1105, reference may be made to steps 202 to 203, which are not limited in the embodiments of this application.
In step 1106, in response to a target virtual object being present in a preset area, hint information is displayed in the virtual environment interface.
The target skill prop also has a reconnaissance function for the target virtual object. This function is a passive skill that does not require active triggering by the user; it takes effect as soon as the first virtual object is equipped with the target skill prop and lasts until the first virtual object unloads the target skill prop. The target virtual object is a second virtual object in the winged state, and the second virtual object and the first virtual object belong to different camps.
In one possible implementation, step 1106 includes the steps of:
in step 1106a, in response to the target virtual object being located in the first preset area, the location identifier of the target virtual object is displayed in the map display control, where the first preset area is an area centered on the location of the first virtual object.
When the target virtual object is present in the first preset area, the terminal displays the location identifier of the target virtual object in the map display control. This location identifier differs from those of other virtual objects, for example in color and/or shape, so as to remind the user that a target virtual object is nearby and vigilance is needed. As shown in fig. 14, the terminal displays the location identifier 1401 of the target virtual object in the map display control.
In step 1106b, in response to the target virtual object being located in the second preset area, a preset special effect is displayed at the edge of the virtual environment interface.
The second preset area is an area centered on the position of the first virtual object and lies within the first preset area. For example, the first preset area is a circular area with a radius of 150 m centered on the position of the first virtual object, and the second preset area is a circular area with a radius of 100 m centered on the same position.
In one possible implementation, when the target virtual object is present in the second preset area, the terminal displays a preset special effect at the edge of the virtual environment interface to prompt the user that a target virtual object is at close range. As shown in fig. 14, when the target virtual object is present in the second preset area, the terminal displays a preset special effect (the area enclosed by the curve and the interface edge in the figure) at the edge of the virtual environment interface, for example a red blinking special effect within a preset range.
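The two-tier hint of steps 1106a and 1106b reduces to distance checks against the two radii. A minimal sketch using the 150 m / 100 m figures from the example above; the returned hint tags are hypothetical names invented for the example:

```python
import math

FIRST_RADIUS = 150.0   # first preset area: show the map location identifier
SECOND_RADIUS = 100.0  # second preset area: also show the edge special effect

def proximity_hints(player_pos, winged_enemies):
    """Decide, per winged enemy, which hints the interface should display."""
    hints = {}
    for enemy in winged_enemies:
        d = math.dist(player_pos, enemy["pos"])
        if d <= SECOND_RADIUS:
            hints[enemy["id"]] = ("map_marker", "edge_flash")
        elif d <= FIRST_RADIUS:
            hints[enemy["id"]] = ("map_marker",)
    return hints
```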
In step 1107, in response to a selection operation on the target shooting prop, the first virtual object is controlled to hold the target shooting prop.
The target skill prop also has the function of increasing the filling speed of the target shooting prop (e.g., a rocket launcher). This is likewise a passive skill that does not require active triggering by the user; it takes effect as soon as the first virtual object is equipped with the target skill prop and lasts until the first virtual object unloads the target skill prop. The target shooting prop is used to reduce the attribute value of a hit virtual object.
In step 1108, in response to a filling operation, the target shooting prop is filled at a target filling speed, the target filling speed being greater than a default filling speed, the default filling speed being the speed at which the target shooting prop is filled when the target skill prop is not equipped.
When the target shooting prop runs out of ammunition (for example, the number of shells in the rocket launcher reaches 0), it can be refilled through a filling operation. If the first virtual object is equipped with the target skill prop at that moment, the terminal fills the target shooting prop at the target filling speed, for example 120% of the default filling speed.
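The speed bonus is a simple multiplier on the reload rate. A sketch assuming the 120% figure from the example; the function name and the 5 s default are illustrative:

```python
def fill_duration(default_seconds: float, has_target_skill_prop: bool) -> float:
    """Time needed to refill the target shooting prop; the target skill prop
    raises the filling speed to 120% of the default, shortening the reload."""
    speed_multiplier = 1.2 if has_target_skill_prop else 1.0
    return default_seconds / speed_multiplier

# e.g. a 5 s default reload takes about 4.17 s with the target skill prop equipped
assert abs(fill_duration(5.0, True) - 5.0 / 1.2) < 1e-9
```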
In the embodiments of the present application, equipping the first virtual object with the target skill prop adds both the target skill and passive skills to the first virtual object, the passive skills including a reconnaissance skill for the target virtual object and a skill for increasing the filling speed of the target shooting prop. The remote combat capability of the first virtual object is therefore improved even while the user has not triggered and released the target skill, which improves the practicability and utilization rate of the target skill prop.
Fig. 15 is a block diagram of a device for throwing virtual throwing objects in a virtual environment according to an exemplary embodiment of the present application, the device including:
A first control module 1501, configured to control, in response to a triggering operation on a target skill control, a first virtual object to hold a drop point marking prop, the drop point marking prop being used to mark a drop point of a virtual projectile in a virtual environment, and the virtual projectile being used to change the attribute values of virtual objects within its action range;
a first display module 1502 configured to emit a virtual ray through the drop point marker prop, and display the drop point in the virtual environment based on the virtual ray;
and a throwing module 1503, configured to control the virtual vehicle to throw the virtual projectile into the virtual environment based on the throwing point in response to a throwing operation.
Optionally, the first display module 1502 includes:
the first display unit is used for emitting the virtual ray through the drop point marking prop, the direction of the virtual ray being consistent with the direction of the drop point marking prop;
and the second display unit is used for determining the intersection point of the virtual ray and a virtual object in the virtual environment as the drop point and displaying a drop point identifier at the drop point.
Optionally, the second display unit is further configured to:
determining the intersection point as the drop point and displaying the drop point identifier at the drop point in response to the distance between the intersection point and the position of the first virtual object being smaller than a first distance threshold;
the apparatus further comprises:
and the second display module is used for displaying an illegal delivery identifier at the intersection point in response to the distance between the intersection point and the position of the first virtual object being greater than the first distance threshold.
Optionally, the apparatus further includes:
the first receiving module is used for receiving a control operation on the drop point marking prop and adjusting the orientation of the drop point marking prop;
and the updating module is used for updating the position of the drop point in the virtual environment based on the adjusted orientation of the drop point marking prop.
Optionally, the delivering module 1503 includes:
the first determining unit is used for determining a target delivery mode based on the delivery point in response to the delivery operation, and different delivery modes correspond to different delivery precision;
the second determining unit is used for determining a delivery center point based on the delivery point and the target delivery mode;
and the throwing unit is used for controlling the virtual carrier to throw the virtual throwing object into the virtual environment based on the throwing center point.
Optionally, the first determining unit is further configured to:
determining a first delivery mode as the target delivery mode in response to the delivery distance being smaller than a second distance threshold, wherein the delivery distance is the distance between the delivery point and the position where the first virtual object is located;
determining a second delivery mode as the target delivery mode in response to the delivery distance being greater than the second distance threshold;
the delivery precision of the first delivery mode is higher than that of the second delivery mode.
Optionally, the second determining unit is further configured to:
determining a first delivery area based on the delivery point in response to the target delivery mode being the first delivery mode, the first delivery area being an area centered on the delivery point, and determining the delivery center point from the first delivery area;
in response to the target delivery mode being the second delivery mode, determining a second delivery area based on the delivery point and the delivery distance, the second delivery area being centered on the delivery point, being larger than the first delivery area, and having an area positively correlated with the delivery distance; and determining the delivery center point from the second delivery area.
Optionally, the apparatus further includes:
and the switching module is used for responding to the triggering operation of the target skill control and switching the virtual environment picture from the third person viewing angle to the first person viewing angle.
Optionally, the apparatus further includes:
the generation module is used for generating m sub virtual throwing objects based on the virtual throwing object in response to the virtual throwing object taking effect, m being a positive integer;
the second control module is used for controlling the m sub virtual throwing objects to move along a preset track;
and the third control module is used for controlling a sub virtual throwing object to take effect at the collision point in response to the sub virtual throwing object colliding during movement, the sub virtual throwing object being used for changing the attribute values of virtual objects within its action range.
Optionally, the apparatus further includes:
the second receiving module is used for receiving the selection operation of a target skill prop in at least one skill prop, equipping the first virtual object with the target skill prop, and the target skill prop is used for adding target skills for the virtual object;
and the third display module is used for displaying the target skill control corresponding to the target skill through a virtual environment interface.
Optionally, the third display module includes:
a switching unit, configured to switch a display state of the target skill control from the non-triggerable state to a triggerable state in response to reaching a skill cooling duration;
the first control module 1501 includes:
and the control unit is used for responding to the triggering operation of the target skill control in the triggerable state and controlling the first virtual object to hold the drop point marking prop.
Optionally, the target skill prop has a function of detecting a target virtual object, the target virtual object being a second virtual object in the winged state, and the second virtual object and the first virtual object belonging to different camps;
the apparatus further comprises:
and the fourth display module is used for responding to the existence of the target virtual object in the preset area and displaying prompt information in the virtual environment interface.
Optionally, the fourth display module includes:
the third display unit is used for displaying the location identifier of the target virtual object in a map display control in response to the target virtual object being located in a first preset area, the first preset area being an area centered on the position of the first virtual object;
and the fourth display unit is used for displaying a preset special effect at the edge of the virtual environment interface in response to the target virtual object being located in a second preset area, the second preset area being an area centered on the position of the first virtual object and lying within the first preset area.
Optionally, the target skill prop has a function of improving a filling speed of the target shooting prop, and the target shooting prop is used for reducing an attribute value of the hit virtual object;
the apparatus further comprises:
a fourth control module for controlling the first virtual object to hold the target shooting prop in response to a selection operation of the target shooting prop;
and the filling module is used for responding to the filling operation and filling and replenishing the target shooting prop according to a target filling speed, wherein the target filling speed is larger than a default filling speed, and the default filling speed is the speed for filling the target shooting prop when the target skill prop is not equipped.
To sum up, in the embodiments of the present application, a virtual skill is added to the first virtual object. When the user controls the first virtual object to trigger the target skill, the drop point of the virtual projectile is selected directly in the virtual environment with a drop point marking prop capable of emitting a virtual ray. The virtual ray makes the specific position of the drop point in the virtual environment clear to the user, which improves drop accuracy, prevents the virtual projectile from deviating from the ideal drop point as can happen when the first virtual object throws it directly, and improves the practicality of the virtual projectile.
In addition, selecting the drop point in the virtual environment with a drop point marking prop capable of emitting a virtual ray improves the interactivity and interest of the drop operation compared with selecting the drop point in a map control.
Referring to fig. 16, a block diagram of a terminal 1600 provided in an exemplary embodiment of the present application is shown. The terminal 1600 may be a portable mobile terminal such as a smartphone, a tablet computer, a Moving Picture Experts Group Audio Layer III (MP3) player, or a Moving Picture Experts Group Audio Layer IV (MP4) player. Terminal 1600 may also be referred to as a user device, a portable terminal, or the like.
In general, terminal 1600 includes: a processor 1601, and a memory 1602.
Processor 1601 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1601 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 1601 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also referred to as a central processing unit (Central Processing Unit, CPU), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1601 may be integrated with a graphics processing unit (Graphics Processing Unit, GPU) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1601 may also include an artificial intelligence (Artificial Intelligence, AI) processor for processing computing operations related to machine learning.
Memory 1602 may include one or more computer-readable storage media, which may be tangible and non-transitory. Memory 1602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1602 is used to store at least one instruction for execution by processor 1601 to implement a method provided by an embodiment of the present application.
In some embodiments, terminal 1600 may also optionally include: a peripheral interface 1603, and at least one peripheral. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1604, a touch display screen 1605, a camera 1606, audio circuitry 1607, a positioning component 1608, and a power supply 1609.
Peripheral interface 1603 may be used to connect at least one Input/Output (I/O) related peripheral to processor 1601 and memory 1602. In some embodiments, the processor 1601, memory 1602, and peripheral interface 1603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1601, the memory 1602, and the peripheral interface 1603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency (RF) circuit 1604 is used for receiving and transmitting RF signals, also known as electromagnetic signals. The radio frequency circuit 1604 communicates with communication networks and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1604 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1604 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the world wide web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or wireless fidelity (Wireless Fidelity, WiFi) networks. In some embodiments, the radio frequency circuit 1604 may also include near field communication (Near Field Communication, NFC) related circuits, which is not limited in this application.
The touch display screen 1605 is used to display a UI, which may include graphics, text, icons, video, and any combination thereof. The touch display screen 1605 also has the ability to collect touch signals at or above its surface; such a touch signal may be input to the processor 1601 as a control signal for processing. The touch display 1605 is used to provide virtual buttons and/or a virtual keyboard, also known as soft buttons and/or a soft keyboard. In some embodiments, there may be one touch display 1605, providing the front panel of the terminal 1600; in other embodiments, there may be at least two touch displays 1605, disposed on different surfaces of the terminal 1600 or in a folded configuration; in still other embodiments, the touch display 1605 may be a flexible display disposed on a curved or folded surface of the terminal 1600. The touch display screen 1605 may even be arranged in an irregular, non-rectangular pattern, i.e., an irregularly shaped screen. The touch display 1605 may be made of a material such as a liquid crystal display (Liquid Crystal Display, LCD) or an organic light-emitting diode (Organic Light-Emitting Diode, OLED).
The camera assembly 1606 is used to capture images or video. Optionally, camera assembly 1606 includes a front camera and a rear camera. In general, the front camera is used for video calls or self-photographing, and the rear camera is used for photographing pictures or videos. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, and a wide-angle camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and Virtual Reality (VR) shooting functions. In some embodiments, camera assembly 1606 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
Audio circuitry 1607 is used to provide an audio interface between a user and terminal 1600. Audio circuitry 1607 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 1601 for processing, or inputting the electric signals to the radio frequency circuit 1604 for voice communication. The microphone may be provided in a plurality of different locations of the terminal 1600 for stereo acquisition or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1601 or the radio frequency circuit 1604 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuitry 1607 may also include a headphone jack.
The positioning component 1608 is used to locate the current geographic location of the terminal 1600 to enable navigation or location based services (Location Based Service, LBS). The positioning component 1608 may be a positioning component based on the U.S. Global Positioning System (Global Positioning System, GPS), the Chinese BeiDou system, the Russian GLONASS system, or the European Galileo system.
A power supply 1609 is used to power the various components in the terminal 1600. The power supply 1609 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 1609 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1600 also includes one or more sensors 1610. The one or more sensors 1610 include, but are not limited to: acceleration sensor 1611, gyroscope sensor 1612, pressure sensor 1613, fingerprint sensor 1614, optical sensor 1615, and proximity sensor 1616.
The acceleration sensor 1611 may detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with the terminal 1600. For example, the acceleration sensor 1611 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1601 may control the touch display screen 1605 to display a user interface in a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 1611. The acceleration sensor 1611 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1612 may detect a body direction and a rotation angle of the terminal 1600, and the gyro sensor 1612 may collect 3D actions of the user on the terminal 1600 in cooperation with the acceleration sensor 1611. The processor 1601 may implement the following functions based on the data collected by the gyro sensor 1612: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
Pressure sensor 1613 may be disposed on a side frame of terminal 1600 and/or on an underlying layer of touch display 1605. When the pressure sensor 1613 is provided at a side frame of the terminal 1600, a grip signal of the terminal 1600 by a user may be detected, and left-right hand recognition or shortcut operation may be performed according to the grip signal. When the pressure sensor 1613 is disposed at the lower layer of the touch display screen 1605, the control of the operability control on the UI interface can be realized according to the pressure operation of the user on the touch display screen 1605. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1614 is used to collect a fingerprint of a user to identify the identity of the user based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1601 authorizes the user to perform related sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 1614 may be disposed on the front, back, or side of the terminal 1600. When a physical key or vendor Logo (Logo) is provided on terminal 1600, fingerprint sensor 1614 may be integrated with the physical key or vendor Logo.
The optical sensor 1615 is used to collect ambient light intensity. In one embodiment, the processor 1601 may control the display brightness of the touch display 1605 based on the ambient light intensity collected by the optical sensor 1615. Specifically, when the intensity of the ambient light is high, the display brightness of the touch display screen 1605 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 1605 is turned down. In another embodiment, the processor 1601 may also dynamically adjust the capture parameters of the camera module 1606 based on the ambient light intensity collected by the optical sensor 1615.
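The brightness control described here is a monotone mapping from ambient light to screen brightness. A minimal sketch; the lux breakpoints and brightness bounds are invented for the example:

```python
def display_brightness(ambient_lux: float,
                       lo_lux: float = 50.0, hi_lux: float = 10000.0,
                       lo_level: float = 0.2, hi_level: float = 1.0) -> float:
    """Map ambient light intensity to a display brightness level: brighter
    surroundings turn the screen brightness up, darker surroundings turn it down."""
    if ambient_lux <= lo_lux:
        return lo_level
    if ambient_lux >= hi_lux:
        return hi_level
    fraction = (ambient_lux - lo_lux) / (hi_lux - lo_lux)
    return lo_level + fraction * (hi_level - lo_level)
```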
A proximity sensor 1616, also referred to as a distance sensor, is typically disposed on the front side of the terminal 1600. The proximity sensor 1616 is used to collect a distance between a user and the front surface of the terminal 1600. In one embodiment, when the proximity sensor 1616 detects that the distance between the user and the front face of the terminal 1600 gradually decreases, the processor 1601 controls the touch display 1605 to switch from the bright screen state to the off screen state; when the proximity sensor 1616 detects that the distance between the user and the front surface of the terminal 1600 gradually increases, the processor 1601 controls the touch display 1605 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 16 does not constitute a limitation on terminal 1600; more or fewer components than shown may be included, certain components may be combined, or a different arrangement of components may be employed.
Embodiments of the present application also provide a computer readable storage medium storing at least one instruction that is loaded and executed by a processor to implement the method for throwing a virtual projectile in a virtual environment as described in the foregoing embodiments.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the terminal performs the method of throwing a virtual throwing object in a virtual environment provided in various optional implementations of the above aspect.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable storage medium. Computer-readable storage media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing description covers only preferred embodiments of the present application and is not intended to limit it; any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the present application shall fall within its protection scope.

Claims (13)

1. A method of throwing a virtual projectile in a virtual environment, the method comprising:
responding to the triggering operation of the target skill control, controlling a first virtual object to hold a drop point marking prop, wherein the drop point marking prop is used for marking a drop point of a virtual throwing object in a virtual environment, and the virtual throwing object is used for changing the attribute value of the virtual object in an action range;
transmitting a virtual ray through the point-of-putting marking prop, wherein the direction of the virtual ray is consistent with the direction of the point-of-putting marking prop;
determining an intersection point as a drop point in response to the distance between the intersection point and the position of the first virtual object being smaller than a first distance threshold, and displaying a drop point identifier at the drop point, wherein the intersection point is the intersection point of the virtual ray and the virtual object in the virtual environment;
in response to the distance between the intersection point and the position of the first virtual object being greater than the first distance threshold, an illegal delivery identifier is displayed at the intersection point;
Receiving a dragging operation of the drop point marking prop, and adjusting the orientation of the drop point marking prop;
updating the position of the drop point in the virtual environment based on the direction of the adjusted drop point marking prop;
in response to a throwing operation and the throwing distance being smaller than a second distance threshold, determining a first throwing mode as a target throwing mode, the throwing distance being the distance between the throwing point and the position where the first virtual object is located;
determining a second delivery mode as the target delivery mode in response to the delivery distance being greater than the second distance threshold, wherein the delivery accuracy of the first delivery mode is higher than the delivery accuracy of the second delivery mode;
determining a delivery center point based on the delivery point and the target delivery mode;
and controlling a virtual carrier to throw the virtual throwing object into the virtual environment based on the throwing center point.
2. The method of claim 1, wherein the determining a launch center point based on the launch point and the target launch pattern comprises:
determining a first delivery area based on the delivery point in response to the target delivery mode being the first delivery mode, wherein the first delivery area is an area centered on the delivery point; determining the delivery center point from the first delivery zone;
Responding to the target delivery mode as the second delivery mode, and determining a second delivery area based on the delivery point and the delivery distance, wherein the second delivery area is centered on the delivery point, the second delivery area is larger than the first delivery area, and the area of the second delivery area and the delivery distance are in positive correlation; and determining the putting center point from the second putting region.
3. The method according to claim 1, wherein the method further comprises:
and responding to the triggering operation of the target skill control, and switching the virtual environment picture from the third person viewing angle to the first person viewing angle.
4. The method according to claim 1, wherein the method further comprises:
generating m sub-virtual throws based on the virtual throws in response to the virtual throws being in effect, m being a positive integer;
controlling m sub virtual throwing objects to move along a preset track;
and in response to a sub virtual throwing object colliding during movement, controlling the sub virtual throwing object to take effect at a collision point, the sub virtual throwing object being used for changing the attribute value of a virtual object within the action range.
5. The method of claim 1, wherein before the controlling, in response to the triggering operation of the target skill control, the first virtual object to hold the drop point marking prop, the method further comprises:
receiving a selection operation of a target skill prop in at least one skill prop, and equipping the first virtual object with the target skill prop, wherein the target skill prop is used for adding target skills for the virtual object;
and displaying the target skill control corresponding to the target skill through a virtual environment interface.
6. The method of claim 5, wherein the displaying the target skill control corresponding to the target skill through a virtual environment interface comprises:
displaying the target skill control in the non-triggerable state through a virtual environment interface;
switching a display state of the target skill control from the non-triggerable state to a triggerable state in response to reaching a skill cooling duration;
the controlling the first virtual object to hold the drop point marking prop in response to the triggering operation of the target skill control comprises:
and responding to the triggering operation of the target skill control in the triggerable state, and controlling the first virtual object to hold the drop point marking prop.
7. The method of claim 5, wherein the target skill prop has a reconnaissance function for a target virtual object, the target virtual object being a second virtual object in a winged state, the second virtual object belonging to a different camp than the first virtual object;
the receiving operation of selecting a target skill prop of at least one skill prop, after equipping the first virtual object with the target skill prop, the method further comprises:
and responding to the existence of the target virtual object in the preset area, and displaying prompt information in the virtual environment interface.
8. The method of claim 7, wherein the displaying a hint message in a virtual environment interface in response to the target virtual object being present within a preset area comprises:
the method comprises the steps that in response to the target virtual object being located in a first preset area, the position identification of the target virtual object is displayed in a map display control, and the first preset area is an area taking the position of the first virtual object as the center;
and responding to the target virtual object being positioned in a second preset area, displaying a preset special effect on the edge of the virtual environment interface, wherein the second preset area is an area taking the position of the first virtual object as the center, and the second preset area is positioned in the first preset area.
9. The method of claim 5, wherein the target skill prop has a function of increasing a filling speed of a target shooting prop for reducing an attribute value of the hit virtual object;
the method further comprises the steps of:
controlling the first virtual object to hold the target shooting prop in response to a selection operation of the target shooting prop;
in response to a priming operation, priming the target firing prop at a target priming rate, the target priming rate being greater than a default priming rate, the default priming rate being a rate at which the target firing prop is primed without the target skill prop.
10. A virtual projectile throwing apparatus in a virtual environment, the apparatus comprising:
the first control module is used for responding to the triggering operation of the target skill control, controlling the first virtual object to hold a drop point marking prop, wherein the drop point marking prop is used for marking a drop point of a virtual throwing object in a virtual environment, and the virtual throwing object is used for changing the attribute value of the virtual object in the action range;
the first display module is used for emitting virtual rays through the point-of-delivery marking props, and the direction of the virtual rays is consistent with the direction of the point-of-delivery marking props;
The first display module is further configured to determine, in response to a distance between an intersection point and a location where the first virtual object is located being smaller than a first distance threshold, the intersection point as a delivery point, and display a delivery point identifier at the delivery point, where the intersection point is an intersection point of the virtual ray and a virtual object in the virtual environment;
the second display module is used for displaying an illegal delivery identifier at the intersection point in response to the fact that the distance between the intersection point and the position where the first virtual object is located is larger than the first distance threshold value;
the first receiving module is used for receiving the dragging operation of the drop point marking prop and adjusting the direction of the drop point marking prop;
the updating module is used for updating the position of the putting point in the virtual environment based on the direction of the adjusted putting point marking prop;
the delivery module is used for, in response to the delivery operation and the delivery distance being smaller than a second distance threshold, determining a first delivery mode as a target delivery mode, the delivery distance being the distance between the delivery point and the position where the first virtual object is located;
the release module is further configured to determine a second release manner as the target release manner in response to the release distance being greater than the second distance threshold, where the release accuracy of the first release manner is higher than the release accuracy of the second release manner;
The delivery module is further used for determining a delivery center point based on the delivery point and the target delivery mode;
the throwing module is further used for controlling the virtual carrier to throw the virtual throwing object into the virtual environment based on the throwing center point.
11. A terminal, the terminal comprising a processor and a memory; the memory stores at least one program loaded and executed by the processor to implement the method of throwing a virtual projectile in a virtual environment according to any one of claims 1 to 9.
12. A computer readable storage medium having stored therein at least one computer program loaded and executed by a processor to implement a method of throwing a virtual projectile in a virtual environment as claimed in any one of claims 1 to 9.
13. A computer program product, characterized in that it comprises computer instructions stored in a computer-readable storage medium, from which a processor reads and executes them to implement the method of throwing a virtual throwing object in a virtual environment according to any one of claims 1 to 9.
CN202110439760.2A 2021-04-23 2021-04-23 Method, terminal and storage medium for throwing virtual throwing object in virtual environment Active CN113041622B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110439760.2A CN113041622B (en) 2021-04-23 2021-04-23 Method, terminal and storage medium for throwing virtual throwing object in virtual environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110439760.2A CN113041622B (en) 2021-04-23 2021-04-23 Method, terminal and storage medium for throwing virtual throwing object in virtual environment

Publications (2)

Publication Number Publication Date
CN113041622A CN113041622A (en) 2021-06-29
CN113041622B true CN113041622B (en) 2023-04-28

Family

ID=76520039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110439760.2A Active CN113041622B (en) 2021-04-23 2021-04-23 Method, terminal and storage medium for throwing virtual throwing object in virtual environment

Country Status (1)

Country Link
CN (1) CN113041622B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113546422A (en) * 2021-07-30 2021-10-26 网易(杭州)网络有限公司 Virtual resource delivery control method and device, computer equipment and storage medium
CN113633972B (en) * 2021-08-31 2023-07-21 腾讯科技(深圳)有限公司 Virtual prop using method, device, terminal and storage medium
CN114385004A (en) * 2021-12-15 2022-04-22 北京五八信息技术有限公司 Interaction method and device based on augmented reality, electronic equipment and readable medium
CN116688499A (en) * 2022-02-28 2023-09-05 腾讯科技(成都)有限公司 Virtual object control method, device, equipment and medium
WO2024037559A1 (en) * 2022-08-18 2024-02-22 北京字跳网络技术有限公司 Information interaction method and apparatus, and human-computer interaction method and apparatus, and electronic device and storage medium

Also Published As

Publication number Publication date
CN113041622A (en) 2021-06-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK; Ref legal event code: DE; Ref document number: 40045971; Country of ref document: HK
GR01 Patent grant