CN110694273A - Method, device, terminal and storage medium for controlling virtual object to use prop - Google Patents

Method, device, terminal and storage medium for controlling virtual object to use prop

Info

Publication number
CN110694273A
Authority
CN
China
Prior art keywords
prop
virtual
placement
displaying
range
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910995534.5A
Other languages
Chinese (zh)
Inventor
刘智洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910995534.5A
Publication of CN110694273A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/53: Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A63F13/533: Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
    • A63F13/537: Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
    • A63F2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/303: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device for displaying additional data, e.g. simulating a Head Up Display
    • A63F2300/308: Details of the user interface

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method, an apparatus, a terminal and a storage medium for controlling a virtual object to use a prop, and relates to the field of computers. The method comprises the following steps: displaying a user interface, wherein the user interface comprises a virtual environment picture and a prop placement control, and the virtual environment picture is a picture for observing the virtual environment from the visual angle of a virtual object; when a first trigger operation on the prop placement control is received, displaying a prop placement range in the virtual environment picture; when a second trigger operation on the prop placement control is received, displaying a virtual prop at a target position indicated by the second trigger operation, wherein the target position is located in the prop placement range; and if the virtual prop is triggered by a target object, reducing the life value of the target object and reducing the moving speed of the target object. By controlling the virtual object to place virtual props in advance, preset damage to other virtual objects by the virtual props is realized, and the attack modes of the game are enriched.

Description

Method, device, terminal and storage medium for controlling virtual object to use prop
Technical Field
The present application relates to the field of computers, and in particular, to a method, an apparatus, a terminal, and a storage medium for controlling a virtual object to use a property.
Background
A First-Person Shooter (FPS) game is an application program based on a three-dimensional virtual environment. A player can control a virtual object in the virtual environment to perform actions such as walking, running, climbing and shooting, and multiple players can form a team online to cooperatively complete a task in the same virtual environment.
During the game, the virtual object can use virtual props (such as a gun); correspondingly, the player can control the virtual object to attack other virtual objects with the virtual prop, so as to damage them. The other virtual objects may be virtual objects controlled by other players or Artificial Intelligence (AI) virtual objects not controlled by humans.
However, in the related art, the player can only control the virtual object to attack other virtual objects with the virtual prop in real time; the attack mode is monotonous, so that the realism of the FPS game is poor.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, a terminal and a storage medium for controlling a virtual object to use a prop, which can solve the problem in the related art that the player can only control the virtual object to attack other virtual objects with virtual props in real time, making the attack mode monotonous and the realism of the FPS game poor. The technical solution is as follows:
in one aspect, an embodiment of the present application provides a method for controlling a virtual object to use a prop, where the method includes:
displaying a user interface, wherein the user interface comprises a virtual environment picture and a prop placement control, and the virtual environment picture is a picture for observing a virtual environment from the visual angle of a virtual object;
when a first trigger operation on the prop placement control is received, displaying a prop placement range in the virtual environment picture;
when a second trigger operation on the prop placement control is received, displaying a virtual prop at a target position indicated by the second trigger operation, wherein the target position is located in the prop placement range;
and if the virtual prop is triggered by a target object, reducing the life value of the target object and reducing the moving speed of the target object.
In another aspect, an embodiment of the present application provides an apparatus for controlling a virtual object to use a prop, where the apparatus includes:
the system comprises a first display module, a second display module and a third display module, wherein the first display module is used for displaying a user interface, the user interface comprises a virtual environment picture and a prop placement control, and the virtual environment picture is a picture for observing a virtual environment from a visual angle of a virtual object;
the second display module is used for displaying a prop placing range in the virtual environment picture when receiving a first trigger operation on the prop placing control;
the third display module is used for displaying the virtual prop at a target position indicated by second trigger operation when the second trigger operation on the prop placing control is received, wherein the target position is located in the prop placing range;
and the first control module is used for reducing the life value of the target object and reducing the moving speed of the target object if the virtual prop is triggered by the target object.
On the other hand, an embodiment of the present application provides a terminal, where the terminal includes: a processor and a memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement a method of controlling the use of props by virtual objects as described in the above aspect.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by a processor to implement the method of controlling a virtual object to use a prop as described in the above aspect.
In another aspect, a computer program product is provided, which when run on a computer causes the computer to perform the method of controlling the use of props by virtual objects as described in the above aspect.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
the method comprises the steps that a virtual environment picture and a prop placing control are displayed on a user interface, when a first trigger operation for the prop placing control is received, a prop placing range is displayed in the virtual environment picture, and when a second trigger operation for the prop placing control is received, a virtual prop is displayed at a target position indicated by the second trigger operation, so that when the virtual prop is triggered by a target object, the life value of the target object is reduced, and the moving speed of the target object is reduced. The virtual property is placed in advance by controlling the virtual object, the target object is subjected to preset damage by the virtual property, and compared with a mode that the virtual object can only be controlled to use the virtual property to attack other virtual objects in real time in the related art, the mode that the virtual object is placed by controlling the virtual object in advance is beneficial to enriching the attack mode of the game, so that the reality of the game is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a diagram illustrating an interface for controlling a process of using props by a virtual object in the related art;
FIG. 2 is a schematic diagram of an interface for controlling a process of controlling a virtual object to use a prop according to an exemplary embodiment of the present application;
FIG. 3 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 4 shows a flowchart of a method for controlling a virtual object to use a prop provided by an exemplary embodiment of the present application;
FIG. 5 is a schematic interface diagram of a process implemented by the embodiment shown in FIG. 4;
FIG. 6 shows a flowchart of a method for controlling a virtual object to use a prop provided by another example embodiment of the present application;
FIG. 7 is a schematic diagram of an implementation of a switching camera model;
FIG. 8 is a schematic interface diagram of a process for determining a target location;
FIG. 9 is a schematic diagram of a prop model and collision detector of a virtual prop;
FIG. 10 is a schematic diagram of the trigger range of a virtual prop and the effective range of the prop effect;
FIG. 11 shows a flowchart of a method for controlling a virtual object to use a prop, provided by another example embodiment of the present application;
FIG. 12 is a block diagram of an apparatus for controlling a virtual object to use a prop according to an exemplary embodiment of the present application;
fig. 13 shows a block diagram of a terminal according to an exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms referred to in the embodiments of the present application are described:
virtual environment: is a virtual environment that is displayed (or provided) when an application is run on the terminal. The virtual environment may be a simulation environment of a real world, a semi-simulation semi-fictional environment, or a pure fictional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment, which is not limited in this application. The following embodiments are illustrated with the virtual environment being a three-dimensional virtual environment.
Virtual object: refers to a movable object in a virtual environment. The movable object can be a virtual character, a virtual animal, an animation character, etc., such as characters, animals, plants, oil drums, walls and stones displayed in a three-dimensional virtual environment. Optionally, the virtual object is a three-dimensional volumetric model created based on skeletal animation technology. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies a part of the space in the three-dimensional virtual environment.
Virtual prop: in the embodiments of the application, a virtual prop refers to a prop placed by a virtual object in the virtual environment; the virtual prop can reduce the life value of the object triggering it and reduce the moving speed of that object. For example, the virtual prop may be a virtual trap, a virtual animal trap, a virtual chemical, or the like; the specific type of the virtual prop is not limited in the embodiments of the present application.
Prop placement control: a User Interface (UI) control, i.e., any visual control or element that can be seen on the user interface of an application program, for example, controls such as pictures, input boxes, text boxes, buttons, and labels. Some UI controls respond to user operations; for example, when the user triggers the prop placement control, the virtual object is controlled to place a virtual prop in the virtual environment picture.
The "equipped, carried or assembled" virtual props in this application refer to virtual props owned by the virtual object. The virtual object owns a backpack with backpack slots, and the virtual props are stored in the backpack of the virtual object; alternatively, the virtual object is currently using the virtual prop.
The method provided in the present application may be applied to a virtual reality application program, a three-dimensional map program, a military simulation program, a first-person shooter game, a Multiplayer Online Battle Arena (MOBA) game, and the like; the following embodiments are exemplified by application in games.
A game based on a virtual environment often consists of maps of one or more game worlds. The virtual environment in the game simulates real-world scenes, and the user can control a virtual object in the game to walk, run, jump, shoot, fight, drive, switch virtual props, and use virtual props to injure other virtual objects in the virtual environment, with strong interactivity; in addition, multiple users can form a team online to play a competitive game.
In the related art, a method for controlling a virtual object to use a prop is provided. As shown in fig. 1, which shows an interface diagram of a process of controlling a virtual object to operate a remote virtual prop in the related art, the remote virtual prop damage interface 100 displays: a movement control 101, a sighting telescope 102 of the remote virtual prop, a damage control 103, and a prop bar 104.
The remote virtual prop damage interface 100 is the picture displayed when the user controls the virtual object to open the sighting telescope of the remote virtual prop and observes the virtual environment from the perspective of the virtual object (i.e., the picture in the aiming mode). The movement control 101 is used to control the virtual object to move in a certain direction in the virtual environment; the sighting telescope 102 is used to aim at a target object in the virtual environment; the damage control 103 is used to control the virtual object to perform the attack; the prop bar 104 is used to switch the virtual prop currently held by the virtual object.
When controlling the virtual object to operate the remote virtual prop, the user first switches to the remote virtual prop (such as a gun) through the prop bar 104, then adjusts the position of the virtual object through the movement control 101 so that the target object is centered in the sighting telescope 102, and triggers the gun-type virtual prop to fire a bullet through a trigger operation on the damage control 103.
With this method of controlling a virtual object to use a prop, the user can only control the virtual object to damage a single target object at a time with the remote virtual prop, and can hit the target object only after aiming accurately. When there are many target objects, the user needs to change the aiming direction in real time, and since the only option is to control the virtual object to damage target objects with the remote virtual prop, a large number of target objects cannot be killed quickly; the operation requirements on the user are high, and the attack mode is monotonous.
An embodiment of the present application provides a method for controlling a virtual object to use a prop. Fig. 2 shows an interface diagram of the process of controlling a virtual object to use a prop provided by an exemplary embodiment of the present application.
In a possible implementation manner, when controlling a virtual object to use a prop, the user first switches, through the prop bar, to a user interface 201 containing a prop placement control 203; a target object 202 and the prop placement control 203 are displayed in the current user interface 201. When the user clicks the prop placement control 203, the terminal receives the click operation on the prop placement control 203, switches to the third-person perspective, and displays a prop placement range 204 in the virtual environment picture, with a virtual object 205 located in the prop placement range 204. At this time, the prop placement control 203 changes into a draggable control; the user can drag the prop placement control 203 to slide along the direction indicated by the arrow 206, and a candidate placement area 207 of the virtual prop is correspondingly displayed in the prop placement range 204. When the user stops dragging the prop placement control 203, the view switches back to the first-person perspective, and the virtual prop 208 is displayed at the position corresponding to the candidate placement area 207. When the target object 202 triggers the virtual prop 208, the life value of the target object 202 is reduced and the moving speed of the target object 202 is reduced.
By controlling the virtual object to place the virtual prop, so that the life value of the target object is reduced and its moving speed is reduced when the target object triggers the virtual prop, another way of attacking the target object is provided for the player, enriching the attack modes of the game.
Referring to fig. 3, a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application is shown. The implementation environment comprises: a first terminal 120, a server 140, and a second terminal 160.
The first terminal 120 is installed and operated with an application program supporting a virtual environment. The application program can be any one of a virtual reality application program, a three-dimensional map program, a military simulation program, an FPS game, a MOBA game and a multiplayer gunfight survival game. The first terminal 120 is a terminal used by a first user, who uses the first terminal 120 to control a first virtual object located in the virtual environment to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, shooting, throwing, switching virtual props, and using virtual props to injure other virtual objects. Illustratively, the first virtual object is a first virtual character, such as a simulated character object or an animated character object.
The first terminal 120 is connected to the server 140 through a wireless network or a wired network.
The server 140 includes at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. Illustratively, the server 140 includes a processor 144 and a memory 142, the memory 142 including a display module 1421 and a control module 1422. The server 140 is used to provide background services for applications that support a three-dimensional virtual environment. Alternatively, the server 140 undertakes primary computational work and the first and second terminals 120, 160 undertake secondary computational work; alternatively, the server 140 undertakes the secondary computing work and the first terminal 120 and the second terminal 160 undertakes the primary computing work; alternatively, the server 140, the first terminal 120, and the second terminal 160 perform cooperative computing by using a distributed computing architecture.
The second terminal 160 is installed and operated with an application program supporting a virtual environment. The application program can be any one of a virtual reality application program, a three-dimensional map program, a military simulation program, an FPS game, a MOBA game and a multiplayer gunfight survival game. The second terminal 160 is a terminal used by a second user, who uses the second terminal 160 to control a second virtual object located in the virtual environment to perform activities including, but not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, jumping, driving, shooting, throwing, switching virtual props, and using virtual props to injure other virtual objects. Illustratively, the second virtual object is a second virtual character, such as a simulated character object or an animated character object.
Optionally, the first virtual character and the second virtual character are in the same virtual environment. Alternatively, the first avatar and the second avatar may belong to the same team, the same organization, have a friend relationship, or have temporary communication rights.
Alternatively, the applications installed on the first terminal 120 and the second terminal 160 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms. The first terminal 120 may generally refer to one of a plurality of terminals, and the second terminal 160 may generally refer to one of a plurality of terminals; this embodiment is only illustrated by the first terminal 120 and the second terminal 160. The device types of the first terminal 120 and the second terminal 160 are the same or different and include at least one of a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, and a desktop computer. The following embodiments are illustrated with the terminal being a smartphone.
Those skilled in the art will appreciate that the number of terminals described above may be greater or fewer. For example, the number of the terminals may be only one, or several tens or hundreds of the terminals, or more. The number of terminals and the type of the device are not limited in the embodiments of the present application.
Referring to fig. 4, a flowchart of a method for controlling a virtual object to use a prop according to an exemplary embodiment of the present application is shown. The embodiment is described by taking the method as an example for the first terminal 120 or the second terminal 160 in the implementation environment shown in fig. 3 or other terminals in the implementation environment, and the method includes the following steps.
Step 401, displaying a user interface, where the user interface includes a virtual environment picture and a prop placement control, and the virtual environment picture is a picture for observing the virtual environment from the perspective of the virtual object.
The user interface is an interface of an application program supporting a virtual environment, and the user interface comprises a virtual environment picture and a prop placement control. Optionally, when the user obtains a certain virtual prop, a prop placement control corresponding to the virtual prop is displayed on the user interface, and the user can control the virtual object to place the virtual prop in the virtual environment picture by triggering the prop placement control. Besides, the user interface may further include a virtual prop switching control (for switching virtual props), a movement control (for controlling the virtual object to move in the virtual environment), a life value display component (for displaying the current life value of the virtual object), and the like, which is not limited in this embodiment of the application.
Optionally, the virtual environment picture is a picture for observing the virtual environment from the perspective of the virtual object. The perspective refers to an observation angle when observing in the virtual environment at a first person perspective or a third person perspective of the virtual object. Optionally, in an embodiment of the present application, the viewing angle is an angle when a virtual object is observed by a camera model in a virtual environment.
Optionally, the camera model automatically follows the virtual object in the virtual environment, that is, when the position of the virtual object in the virtual environment changes, the camera model changes while following the position of the virtual object in the virtual environment, and the camera model is always within the preset distance range of the virtual object in the virtual environment. Optionally, in the automatic following process, the relative positions of the camera model and the virtual object are not changed.
The camera model refers to a three-dimensional model located around the virtual object in the virtual environment. When the first-person perspective is adopted, the camera model is located near or at the head of the virtual object; when the third-person perspective is adopted, the camera model may be located behind the virtual object and bound to it, or may be located at any position a preset distance away from the virtual object, and the virtual object in the virtual environment can be observed from different angles through the camera model. Optionally, when the third-person perspective is an over-the-shoulder perspective, the camera model is located behind the virtual object (for example, behind the head and shoulders of the virtual character). Optionally, besides the first-person perspective and the third-person perspective, the perspective includes other perspectives, such as a top-down perspective; when the top-down perspective is adopted, the camera model may be located above the head of the virtual object, providing a view of the virtual environment from the air. Optionally, the camera model is not actually displayed in the virtual environment, i.e., the camera model is not displayed in the virtual environment shown in the user interface.
Taking the case where the camera model is located at an arbitrary position a preset distance away from the virtual object as an example, optionally, one virtual object corresponds to one camera model, and the camera model can rotate with the virtual object as the rotation center, for example, with any point of the virtual object as the rotation center. During the rotation, the camera model not only rotates in angle but also shifts in displacement, and the distance between the camera model and the rotation center remains unchanged; that is, the camera model rotates on the surface of a sphere with the rotation center as the sphere center. Here, any point of the virtual object may be the head, the trunk, or any point around the virtual object, which is not limited in the embodiments of the present application. Optionally, when the camera model observes the virtual object, the center of the camera model's view angle points in the direction from the point on the spherical surface where the camera model is located toward the sphere center.
Optionally, the camera model may also observe the virtual object at a preset angle in different directions of the virtual object.
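To make the geometry above concrete, the following is a minimal Python sketch of such an orbiting camera model, assuming a right-handed coordinate system and yaw/pitch angles; the class, method, and parameter names are illustrative assumptions, not taken from the patent.

```python
import math

class CameraModel:
    """A point on a sphere around the virtual object; rotating never changes
    the distance to the rotation center, matching the description above."""

    def __init__(self, pivot, distance):
        self.pivot = pivot        # rotation center, e.g. a point on the virtual object
        self.distance = distance  # sphere radius, kept constant while rotating
        self.yaw = 0.0            # horizontal angle in radians
        self.pitch = 0.0          # vertical angle in radians

    def rotate(self, d_yaw, d_pitch):
        # The camera rotates in angle and shifts in displacement, but the
        # distance to the rotation center stays fixed.
        self.yaw += d_yaw
        self.pitch = max(-1.2, min(1.2, self.pitch + d_pitch))  # avoid flipping

    def position(self):
        # Spherical coordinates: a point on the sphere of radius `distance`
        # centered at the pivot.
        px, py, pz = self.pivot
        cp = math.cos(self.pitch)
        return (px + self.distance * cp * math.sin(self.yaw),
                py + self.distance * math.sin(self.pitch),
                pz - self.distance * cp * math.cos(self.yaw))

    def view_direction(self):
        # The view angle points from the camera's point on the sphere surface
        # toward the sphere center, as described above.
        x, y, z = self.position()
        px, py, pz = self.pivot
        return (px - x, py - y, pz - z)
```

Because the position is always computed on a sphere of fixed radius around the pivot, the distance between the camera model and the rotation center stays constant throughout the rotation, as the paragraph above requires.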
Optionally, the virtual environment displayed by the virtual environment screen includes: at least one element selected from the group consisting of mountains, flat ground, rivers, lakes, oceans, deserts, sky, plants, buildings, and vehicles.
Illustratively, as shown in fig. 5, a target object 502, a life value control 503, a prop placement control 504, and a movement control 505 are displayed in the user interface 501. The user can click the prop placement control 504 to place a virtual prop, and can control the virtual object to move in the virtual environment through the movement control 505; the life value control 503 represents the current life value of the target object 502.
Step 402, when a first trigger operation on a prop placement control is received, displaying a prop placement range in a virtual environment picture.
The prop placement range indicates that the user can only control the virtual object to place the virtual prop within that range. For example, the prop placement range is a circular area centered on the virtual object with a preset length as its radius. Illustratively, if the preset length is 5m, the user can only control the virtual object to place the virtual prop within a circular area with a radius of 5m.
Optionally, the first trigger operation may be at least one of a single-click operation, a double-click operation, a long-press operation, or a pressing operation on the property placement control, which is not limited in this application.
In a possible implementation manner, a prop placement control is displayed on the user interface. When the user clicks the prop placement control, the terminal receives the first trigger operation on the prop placement control and displays the prop placement range in the virtual environment picture to indicate to the user that the virtual prop must be placed within the prop placement range.
Illustratively, as shown in fig. 5, when the user clicks the prop placement control 504 and the terminal receives the trigger operation on the prop placement control 504, the terminal switches to the third-person perspective and displays a prop placement range 506 in the virtual environment picture; at this time, a virtual object 507 is displayed in the user interface 501, and the prop placement control 504 changes into a draggable control.
And step 403, when a second trigger operation on the prop placement control is received, displaying the virtual prop at a target position indicated by the second trigger operation, wherein the target position is located in the prop placement range.
The second trigger operation may be any one of a dragging operation and a sliding operation of the prop placement control.
In a possible implementation manner, when the user interface displays the prop placement range, the prop placement control may enter a draggable mode. In the draggable mode, the user may select a target position within the prop placement range at which to place the virtual prop through a dragging operation on the prop placement control. That is, when the terminal receives the second trigger operation on the prop placement control, the virtual prop is displayed at the target position indicated by the second trigger operation.
Illustratively, as shown in fig. 5, when the user drags the prop placement control 504 to slide along the direction indicated by the arrow 508, a prop placement area 509 corresponding to the current operation is displayed in the prop placement range 506. If the user stops triggering the prop placement control 504 at this time, the terminal determines the prop placement area 509 as the target position, displays a virtual prop 510 at the target position, and switches back to the first-person perspective.
Optionally, the above illustration shows only one display form of the virtual prop, namely a cuboid; in other possible embodiments, the display form of the virtual prop may also be a cube, a sphere, or the like. The display form of the virtual prop is not limited in this embodiment.
Optionally, the user may repeatedly place virtual props in the virtual environment according to step 402 and step 403; the number of times virtual props may be placed is not limited in this embodiment of the application.
Optionally, there is a cooling time between two consecutive placements of the same virtual prop; that is, during the cooling time, the user cannot place the same virtual prop again. The cooling time is preset by the developer. For example, if the cooling time is 5s, the time interval between two consecutive placements of the virtual prop by the user is at least 5s.
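A minimal sketch of how such a cooldown check might be implemented, assuming a monotonic clock and the 5s example value above; all names are illustrative, not taken from the patent.

```python
import time

PLACEMENT_COOLDOWN = 5.0   # example value from the paragraph above, in seconds

_last_placement = {}       # prop type -> timestamp of its last placement

def can_place(prop_type, now=None):
    """True once the cooldown since the last placement of this prop elapsed."""
    now = time.monotonic() if now is None else now
    last = _last_placement.get(prop_type)
    return last is None or now - last >= PLACEMENT_COOLDOWN

def record_placement(prop_type, now=None):
    # Called after a successful placement so the next attempt is rate-limited.
    _last_placement[prop_type] = time.monotonic() if now is None else now
```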
Optionally, the user may choose to place the virtual prop automatically. Illustratively, when the user clicks an automatic control on the user interface, the terminal receives the trigger operation on the automatic control and automatically places the virtual prop at the target position.
Step 404, if the virtual prop is triggered by a target object, reducing the life value of the target object and reducing the moving speed of the target object.
The target object may be an AI virtual object that is not manually controlled, or may be a virtual object that is controlled by another player.
The virtual prop has a trigger range, and the trigger range is greater than or equal to the size of the virtual prop. When a target object is within the trigger range, the virtual prop is triggered.
In a possible implementation manner, after the virtual prop is displayed in the virtual environment picture, if a target object triggers the virtual prop, the life value of the target object is reduced and its moving speed is reduced. The ratios by which the life value and the moving speed are reduced are preset by the developer; for example, the life value of the target object is reduced by 20% and the speed is reduced by 10%.
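As an illustration, the following sketch applies the example ratios above (20% of the life value, 10% of the speed) when the prop is triggered; the class and field names are assumptions made purely for illustration.

```python
LIFE_REDUCTION_RATIO = 0.20    # example ratio from the paragraph above
SPEED_REDUCTION_RATIO = 0.10   # example ratio from the paragraph above

class TargetObject:
    def __init__(self, life_value, move_speed):
        self.life_value = life_value
        self.move_speed = move_speed

def on_prop_triggered(target):
    """Apply the preset damage: reduce the life value and the moving speed."""
    target.life_value *= 1.0 - LIFE_REDUCTION_RATIO
    target.move_speed *= 1.0 - SPEED_REDUCTION_RATIO
```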
Illustratively, as shown in fig. 5, when target object 502 triggers virtual prop 510, that is, target object 502 enters the trigger range 511 of virtual prop 510, the life value of the target object is reduced, as shown by life value control 512 (the blank part is the reduced life value), and the moving speed of target object 502 is reduced.
To sum up, in this embodiment of the application, a virtual environment picture and a prop placement control are displayed on the user interface; when a first trigger operation on the prop placement control is received, a prop placement range is displayed in the virtual environment picture, and when a second trigger operation on the prop placement control is received, a virtual prop is displayed at the target position indicated by the second trigger operation, so that when the virtual prop is triggered by a target object, the life value of the target object is reduced and the moving speed of the target object is reduced. The virtual prop is placed in advance by controlling the virtual object, and the target object suffers preset damage from the virtual prop. Compared with the related art, in which the virtual object can only be controlled to attack other virtual objects with virtual props in real time, damaging target objects with virtual props placed in advance enriches the attack modes of the game, thereby improving the realism of the game.
Referring to fig. 6, a flowchart of a method for controlling a virtual object to use a prop according to another exemplary embodiment of the present application is shown. The embodiment is described by taking the method as an example for the first terminal 120 or the second terminal 160 in the implementation environment shown in fig. 3 or other terminals in the implementation environment, and the method includes the following steps.
Step 601, displaying a user interface, where the user interface includes a virtual environment picture and a prop placement control, and the virtual environment picture is a picture for observing the virtual environment from the view angle of the virtual object.
The step 401 may be referred to in the implementation manner of this step, and this embodiment is not described herein again.
Step 602, when a first trigger operation on a prop placement control is received, acquiring a first position of a virtual object in a virtual environment.
Because the placement position of the virtual prop is constrained to a certain placement range that is related to the current position of the virtual object, before the prop placement range is displayed, the current position of the virtual object needs to be acquired, and the prop placement range is then determined.
In one possible implementation, when the first trigger operation on the prop placement control is received, the first position where the virtual object is currently located in the virtual environment is obtained, where the first position may be the three-dimensional coordinates of the virtual object in the virtual environment, for example, (x₀, y₀, z₀).
Optionally, since the prop placement range is a figure in a two-dimensional plane, the first position may also be represented by the two-dimensional coordinates of the virtual object, for example, (x₀, y₀).
And 603, controlling the camera model in the virtual environment to move from the first shooting position to a second shooting position, wherein when the camera model is located at the second shooting position, the regions of the virtual object in all directions in the virtual environment picture are visible.
In general, when the camera model is located at a first shooting position (for example, a shooting position corresponding to a first-person perspective), only a front area of the virtual object is visible in the virtual environment picture.
Because the prop placement range is a closed range enclosing the periphery of the first position, if the camera model remained at the first shooting position, the user could only see part of the front of the prop placement range. The shooting position of the camera model therefore needs to be adjusted so that the whole prop placement range can be seen and the user can select the placement position of the virtual prop in all directions.
In a possible implementation manner, after the terminal receives a first trigger operation on the prop placement control, the camera model in the virtual environment can be controlled to move from the first shooting position to the second shooting position, where when the camera model is located at the second shooting position (for example, a shooting position corresponding to a third person called viewing angle), regions in all directions of the virtual object in the virtual environment picture are visible, so that the prop placement range is completely visible.
Schematically, as shown in fig. 7, a point on the virtual object 11 is determined as the rotation center 12, and the camera model rotates around the rotation center 12. Optionally, the camera model is configured with an initial position 13, i.e., the first shooting position, which is a position above and behind the virtual character (e.g., behind the head). After the terminal receives the first trigger operation on the prop placement control, the camera model rotates from the first shooting position 13 to the second shooting position 14, and the direction of the camera model's view angle changes along with the rotation. When the camera model is located at the second shooting position 14, the regions in all directions of the virtual object in the virtual environment picture are visible.
And step 604, displaying a prop placement range map in the virtual environment picture according to the first position, wherein the prop placement range map is used for representing a prop placement range.
The prop placement range is a closed range enclosing the periphery of the first position and can be a closed range of any shape, and the map representing the prop placement range can be a closed figure of any shape, such as a circle, an ellipse, or a square. The shape of the prop placement range map is not limited in this embodiment.
In a possible implementation manner, after the terminal acquires the first position of the virtual object, a preset prop placement range map may be displayed in the virtual environment picture according to the first position. For example, if the prop placement range map is a circle, the circular map may be displayed in the virtual environment picture with the first position as the center of the circle.
Because the number of times virtual props may be placed is not limited, and each virtual prop has a certain trigger range, if a virtual prop has already been placed within the current prop placement range, placing another virtual prop inside the trigger range of the existing one would waste resources. To avoid this, the trigger ranges of already placed virtual props can be displayed within the prop placement range, so that the user places other virtual props in areas outside those trigger ranges.
In a possible implementation manner, when the terminal displays the prop placement range map according to the acquired first position, it detects whether the current prop placement range contains already placed virtual props; if so, the trigger ranges of those virtual props are displayed at the same time, so that the user can place other virtual props in areas outside the trigger ranges. This prevents the trigger ranges of virtual props from overlapping and allows damage to target objects over a larger area.
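A possible sketch of the overlap check described above, assuming two-dimensional positions and a `PlacedProp` record introduced purely for illustration:

```python
from dataclasses import dataclass
import math

@dataclass
class PlacedProp:
    position: tuple        # (x, y) in the two-dimensional placement plane
    trigger_radius: float  # trigger range of the already placed prop

def trigger_ranges_to_display(first_pos, range_radius, placed_props):
    """Trigger ranges of placed props that fall inside the current placement
    range, so they can be drawn in the prop placement range map."""
    return [(p.position, p.trigger_radius) for p in placed_props
            if math.dist(p.position, first_pos) <= range_radius]

def overlaps_existing(candidate_pos, placed_props):
    """True if a candidate position lies inside an existing trigger range."""
    return any(math.dist(candidate_pos, p.position) <= p.trigger_radius
               for p in placed_props)
```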
Step 605, receiving a dragging operation of the prop placement control.
In a possible implementation manner, when the terminal displays the property placement range map, the property placement control can enter a dragging mode, in the dragging mode, a user can drag the property placement control and can drag the property placement control to any direction, and then the terminal receives a dragging operation on the property placement control.
And 606, adjusting the position of the candidate placement area in the prop placement range according to the dragging operation.
In a possible implementation manner, when the terminal receives the dragging operation on the prop placement control, a candidate placement area corresponding to the dragging operation may be displayed within the prop placement range.
Illustratively, as shown in fig. 8, when the user drags the prop placement control 801 to move along the direction indicated by the arrow 802, a candidate placement area 804 corresponding to the dragging operation is displayed in the prop placement range 803; when the user continues to drag the prop placement control 801 to move along the direction indicated by the arrow 805, a candidate placement area 806 corresponding to the dragging operation is displayed in the prop placement range 803. By dragging the prop placement control 801, the candidate placement area can be changed within the prop placement range.
Step 607, when the touch operation on the prop placement control is stopped, it is determined that the second trigger operation is received, and the current position of the candidate placement area is determined as the target position.
In a possible implementation manner, when the user stops dragging the prop placement control, and the terminal detects that the touch operation on the prop placement control is stopped, it is determined that the second trigger operation is received, and the current position of the candidate placement area is determined as the target position.
Illustratively, as shown in fig. 8, when the user stops dragging the prop placement control 801, the candidate placement area 806 corresponding to the dragging operation at that moment is determined as the target position; that is, virtual prop 807 is displayed at the candidate placement area 806.
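One way the drag offset on the control might be translated into a candidate placement position is sketched below; the control-to-world `scale` factor and the function name are assumptions, and positions outside the circular range are clamped onto its boundary so the candidate area always stays inside the prop placement range.

```python
import math

def candidate_position(center, drag_dx, drag_dy, range_radius, scale=0.05):
    """Map the drag offset on the prop placement control to a point in the
    placement range; points outside the circle are clamped to its edge."""
    x = center[0] + drag_dx * scale
    y = center[1] + drag_dy * scale
    dx, dy = x - center[0], y - center[1]
    d = math.hypot(dx, dy)
    if d > range_radius:  # clamp onto the circle boundary
        x = center[0] + dx / d * range_radius
        y = center[1] + dy / d * range_radius
    return (x, y)
```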
Step 608, displaying the virtual item at the target location.
In a possible implementation manner, after the terminal determines the target position, it determines, according to the preset prop placement range, whether the distance between the target position and the first position satisfies the prop placement range. If so, the virtual prop is displayed at the target position; if not, prompt information is displayed to prompt the user to reselect the target position.
The distance between the target position and the first position can be determined from the coordinates corresponding to the target position and the first position respectively, using the formula:
d = √((x₁ − x₀)² + (y₁ − y₀)²)
where d is the distance between the target position and the first position, (x₀, y₀) are the coordinates corresponding to the first position, and (x₁, y₁) are the coordinates corresponding to the target position.
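A direct Python translation of this check, with the coordinate names matching the formula above (the function name and the circular-range assumption are illustrative):

```python
import math

def is_valid_target_position(first_pos, target_pos, range_radius):
    """Apply the formula above: d must not exceed the placement range radius."""
    x0, y0 = first_pos
    x1, y1 = target_pos
    d = math.sqrt((x1 - x0) ** 2 + (y1 - y0) ** 2)
    return d <= range_radius
```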
The function of the virtual prop is to damage the target object when the target object triggers it. Therefore, when designing the virtual prop, the developer sets a collision detector on the model of the virtual prop to detect whether a target object triggers the virtual prop.
In one possible embodiment, the implementation process of displaying the virtual prop at the target position may include the following steps:
First, a prop model corresponding to the virtual prop is displayed at the target position.
In one possible implementation manner, after the terminal determines the target position of the virtual prop, the prop model corresponding to the virtual prop may be displayed at the target position. The shape of the prop model can be a regular solid such as a cube, a cylinder or a sphere, or an irregular solid; the shape of the prop model of the virtual prop is not limited in this embodiment.
Displaying the prop model corresponding to the virtual prop at the target position cannot be completed instantly and takes a certain time. While the prop model is being generated, the virtual prop does not damage target objects within the trigger range; that is, the virtual prop does not take effect during the generation of its prop model. The two states, the prop model being generated and the prop model having been generated, therefore need to be represented in different forms so that the user can determine whether the virtual prop has taken effect.
In one possible implementation, the following two steps may be taken to determine whether the virtual prop has taken effect.
1. Display the setting animation corresponding to the virtual prop at the target position.
The setting animation is stored in advance in association with the virtual prop by the developer, and the duration of the setting animation corresponds to the time required to generate the prop model corresponding to the virtual prop, that is, the time required for the virtual prop to take effect. For example, the setting animation lasts 2s.
In a possible implementation manner, after the terminal determines the target position of the virtual prop, the setting animation corresponding to the virtual prop may be displayed at the target position.
2. When the setting animation finishes displaying, display the prop model at the target position; the virtual prop does not take effect while the setting animation is being displayed.
In a possible implementation manner, after the setting animation finishes displaying, the prop model is displayed at the target position, indicating that the virtual prop takes effect immediately. While the setting animation is being displayed, the virtual prop is not effective; that is, during the display of the setting animation, even if a target object is within the trigger range of the virtual prop, the target object is not damaged.
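A minimal sketch of the two states described in steps 1 and 2, assuming the 2s example duration above; while `is_effective` returns False, the trigger logic is simply skipped. All names are illustrative assumptions.

```python
SETTING_ANIMATION_DURATION = 2.0   # example duration from the paragraph above

class PlacedVirtualProp:
    def __init__(self, placed_at):
        self.placed_at = placed_at   # timestamp when the setting animation started

    def is_effective(self, now):
        """The prop only takes effect once the setting animation has finished;
        before that, targets inside the trigger range are not damaged."""
        return now - self.placed_at >= SETTING_ANIMATION_DURATION
```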
Second, a collision detector corresponding to the prop model is set at the target position, where the collision detector is used to determine that the virtual prop is triggered when a collision is detected.
The collision detector is invisible, and its size is greater than or equal to that of the prop model. When a target object is located within the detection range of the collision detector, the collision detector detects that a collision has occurred and determines that the virtual prop is triggered, at which point the corresponding logic code is executed.
In a possible implementation manner, the developer presets the trigger logic corresponding to the collision detector and provides a corresponding calling interface. In the process of displaying the virtual prop at the target position, a collision detector is set for the prop model corresponding to the virtual prop, so that the virtual prop is determined to be triggered when a collision is detected; the corresponding interface is then called to invoke the trigger logic of the collision detector, which reduces the life value and the moving speed of the target object.
Schematically, as shown in fig. 9, a prop model 901 corresponding to a virtual prop is provided with a spherical collision detector 902, and the size of the spherical collision detector is larger than that of the prop model 901. During actual display, the collision detector 902 is not visible.
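The spherical collision detector of fig. 9 might be sketched as follows, with a detector radius at least as large as the prop model and a developer-supplied callback standing in for the trigger logic; all names are illustrative assumptions, not the patent's own implementation.

```python
import math

class SphereCollisionDetector:
    """Invisible sphere placed with the prop model; its radius is at least as
    large as the model, and it fires the trigger logic on contact."""

    def __init__(self, center, radius, on_trigger):
        self.center = center
        self.radius = radius
        self.on_trigger = on_trigger   # trigger logic preset by the developer

    def check(self, target_position, target):
        # Collision detected when the target is inside the detection range.
        dx = target_position[0] - self.center[0]
        dy = target_position[1] - self.center[1]
        dz = target_position[2] - self.center[2]
        if math.sqrt(dx * dx + dy * dy + dz * dz) <= self.radius:
            self.on_trigger(target)   # the virtual prop is triggered
```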
And step 609, controlling the camera model in the virtual environment to move from the second shooting position to the first shooting position.
In a possible implementation manner, after the user finishes placing the virtual prop, the user no longer needs to see the regions in all directions of the virtual object in the virtual environment picture. At this time, the terminal automatically switches the shooting position of the camera model, that is, controls the camera model to move from the second shooting position back to the first shooting position.
Illustratively, as shown in fig. 7, when the terminal receives an instruction to switch the shooting position of the camera model, the camera model is controlled to move from the second shooting position 14 to the first shooting position 13, and the view angle direction of the camera model changes along with the rotation of the camera model.
Step 610, if the virtual prop is triggered by a target object, determining each object within the effective range of the prop effect corresponding to the virtual prop, where the effective range of the prop effect is greater than or equal to the trigger range of the virtual prop.
The virtual prop has an effective range of prop effect, which is greater than or equal to the trigger range of the virtual prop. When the effective range of the prop effect is equal to the trigger range, after a target object triggers the virtual prop, the life value of that target object is reduced and its moving speed is reduced. Optionally, when the effective range of the prop effect is greater than the trigger range and the virtual prop is triggered by a certain target object, other target objects that did not trigger the virtual prop but are located within the effective range of the prop effect also suffer the corresponding damage.
In a possible implementation manner, when the virtual prop is triggered by a target object, each object within the effective range of the prop effect corresponding to the virtual prop may be determined, so that corresponding damage is caused to each such object.
Step 611, the life value of each object in the effective range of the prop effect is reduced, and the moving speed of each object in the effective range of the prop effect is reduced.
In a possible implementation manner, when the virtual prop is triggered by a target object and the terminal determines each object within the effective range of the prop effect corresponding to the virtual prop, the life value of each such object is reduced and the moving speed of each such object is reduced.
Schematically, as shown in fig. 10, when target object 1001 is located in the trigger range 1005 corresponding to virtual prop 1004, that is, target object 1001 triggers virtual prop 1004, and the effective range 1006 of the prop effect corresponding to virtual prop 1004 is found to further contain target object 1002 and target object 1003, the life values of target object 1001, target object 1002, and target object 1003 are reduced, and the moving speeds of target object 1001, target object 1002, and target object 1003 are reduced.
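Putting steps 610 and 611 together, a possible sketch is shown below, assuming two-dimensional positions on the objects and reusing the example damage ratios from the earlier embodiment; the class and function names are illustrative.

```python
import math

class GameObject:
    def __init__(self, position, life_value, move_speed):
        self.position = position      # e.g. (x, y) in the placement plane
        self.life_value = life_value
        self.move_speed = move_speed

def apply_prop_effect(prop_pos, trigger_radius, effect_radius, objects):
    """Steps 610/611: if any object trips the trigger range, every object in
    the (equal or larger) effective range is damaged and slowed."""
    assert effect_radius >= trigger_radius
    if not any(math.dist(o.position, prop_pos) <= trigger_radius
               for o in objects):
        return  # nobody triggered the prop; no effect at all
    for obj in objects:
        if math.dist(obj.position, prop_pos) <= effect_radius:
            obj.life_value *= 0.80   # example: life value reduced by 20%
            obj.move_speed *= 0.90   # example: moving speed reduced by 10%
```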
In the above embodiment, when the first trigger operation on the prop placement control is received, the first position of the virtual object in the virtual environment is obtained, and the placement range of the virtual prop can be displayed according to the first position, preventing the user from performing an invalid placement operation outside the placement range of the virtual prop.
Meanwhile, in the embodiment, the camera model is controlled to be switched from the first shooting position to the second shooting position, so that the regions of the virtual object in all directions in the virtual environment picture are visible, and the user can place the virtual prop in all directions.
In addition, in the process of displaying the virtual prop, the setting animation is displayed first, and the virtual prop is displayed at the target position only after the setting animation finishes, so that the user can easily tell whether the virtual prop has taken effect.
The above embodiments describe the placement and triggering process of the virtual prop, but the same virtual prop cannot be triggered an unlimited number of times. Therefore, in one possible implementation, when the virtual prop meets a disappearance condition, the display of the virtual prop at the target position is stopped. The disappearance condition includes at least one of the following: the placement duration of the virtual prop reaches a duration threshold, or the number of times the virtual prop is triggered reaches a count threshold.
Illustratively, when the terminal detects that the placement duration of the virtual prop reaches the duration threshold, the display of the virtual prop at the target position is stopped; or, when the terminal detects that the number of times the virtual prop is triggered reaches the count threshold, the display of the virtual prop at the target position is stopped. For example, if the duration threshold is 3 min, the display of the virtual prop at the target position is stopped when the terminal detects that the placement duration of the virtual prop exceeds 3 min; if the count threshold is 3, the display of the virtual prop at the target position is stopped when the terminal detects that the virtual prop has been triggered by target objects more than 3 times.
Optionally, the duration threshold or the count threshold is preset by a developer.
Optionally, even if the placement duration of the virtual prop has not reached the duration threshold, the display of the virtual prop at the target position is stopped once the number of times the virtual prop is triggered reaches the count threshold.
In this embodiment, by setting a duration threshold for the placement duration of the virtual prop, or a count threshold for the number of times it is triggered, the terminal can stop displaying the virtual prop at the target position when either threshold is met, which improves the flexibility of the virtual prop in use.
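The disappearance condition can be checked with a few lines; in this sketch the 3 min duration threshold and the count threshold of 3 come from the example above, while `placed_at` and `trigger_count` are hypothetical fields maintained by the terminal.

```python
import time

def should_disappear(prop, duration_threshold=180.0, count_threshold=3):
    """True once the placement duration reaches the duration threshold
    (180 s = 3 min) or the trigger count reaches the count threshold,
    whichever happens first."""
    placed_for = time.monotonic() - prop.placed_at
    return (placed_for >= duration_threshold
            or prop.trigger_count >= count_threshold)
```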
With reference to the foregoing embodiments, in an illustrative example, a flow for controlling a virtual object to use a prop is shown in fig. 11 (a minimal code sketch of this flow is given after the steps below).
Step 1101, receiving a first trigger operation on a prop placing control.
Step 1102, the prop placement control is changed to a draggable icon.
Step 1103, detecting whether the user clicks within the prop placement range.
If yes, determining the click position, and executing step 1104; if the click position is not within the prop placement range, go back to step 1102.
And step 1104, acquiring the target position and starting to generate the virtual prop.
Step 1105, detecting whether the virtual item is generated completely.
If yes, go to step 1106; if the virtual prop has not finished generating, go back to step 1104.
Step 1106, displaying the virtual prop at the target position.
Step 1107, detecting whether a target object triggers the virtual prop.
If yes, go to step 1108, otherwise, go to step 1109.
Step 1108, the life value of the target object is reduced, and the moving speed of the target object is reduced.
And step 1109, detecting whether the virtual prop reaches a disappearance condition.
If yes, go to step 1110; if the disappearance condition is not reached, go back to step 1108.
Step 1110, the virtual item disappears.
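Putting the steps of fig. 11 together, the following sketch shows the overall control loop; `ui` and `env` are hypothetical stand-ins for the terminal's user interface and game state, and `should_disappear` is the check sketched earlier.

```python
def prop_flow(ui, env):
    ui.make_control_draggable()                 # step 1102
    while True:                                 # step 1103
        click = ui.wait_for_click()
        if env.placement_range.contains(click):
            break                               # valid click inside range
    prop = env.start_generating_prop(click)     # step 1104
    while not prop.generated:                   # step 1105
        env.tick()
    env.display_prop(prop)                      # step 1106
    while not should_disappear(prop):           # steps 1107 and 1109
        target = env.detect_trigger(prop)
        if target is not None:                  # step 1108
            target.life -= prop.damage
            target.move_speed *= prop.slow_factor
            prop.trigger_count += 1
        env.tick()
    env.remove_prop(prop)                       # step 1110
```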
Fig. 12 is a block diagram illustrating a structure of an apparatus for controlling a virtual object to use a prop according to an exemplary embodiment of the present application, where the apparatus may be disposed at the first terminal 120 or the second terminal 160 in the implementation environment shown in fig. 3 or at another terminal in the implementation environment, and the apparatus includes:
a first display module 1201, configured to display a user interface, where the user interface includes a virtual environment picture and a prop placement control, and the virtual environment picture is a picture for observing a virtual environment from a perspective of a virtual object;
a second display module 1202, configured to display a prop placement range in the virtual environment screen when a first trigger operation on the prop placement control is received;
a third display module 1203, configured to, when a second trigger operation on the prop placement control is received, display a virtual prop at a target position indicated by the second trigger operation, where the target position is located in the prop placement range;
a first control module 1204, configured to reduce a life value of a target object and reduce a moving speed of the target object if the virtual item is triggered by the target object.
Optionally, the second display module 1202 includes:
the acquiring unit is used for acquiring a first position of the virtual object in the virtual environment when the first triggering operation on the prop placing control is received;
and the first display unit is used for displaying a prop placement range map in the virtual environment picture according to the first position, and the prop placement range map is used for representing the prop placement range.
Optionally, the prop placement range is a closed range surrounding the periphery of the first position;
optionally, the apparatus further comprises:
the second control module is used for controlling a camera model in the virtual environment to move from a first shooting position to a second shooting position, wherein when the camera model is located at the second shooting position, areas of the virtual object in all directions in the virtual environment picture are visible;
optionally, the apparatus further comprises:
and the third control module is used for controlling the camera model in the virtual environment to move from the second shooting position to the first shooting position.
Optionally, the second display module 1202 further includes:
and the second display unit is used for displaying the trigger range of the placed virtual prop in the prop placement range if the prop placement range contains a placed virtual prop.
Optionally, a candidate placement area is displayed in the prop placement range;
optionally, the third display module 1203 includes:
the receiving unit is used for receiving a dragging operation on the prop placement control;
the adjusting unit is used for adjusting the position of the candidate placement area in the prop placement range according to the dragging operation;
the first determining unit is used for determining that the second trigger operation is received when the touch operation on the prop placement control stops, and determining the current position of the candidate placement area as the target position;
a third display unit for displaying the virtual prop at the target position (a minimal sketch of this drag-to-place interaction is given after the apparatus description below).
Optionally, the third display unit is further configured to:
displaying a prop model corresponding to the virtual prop at the target position;
and setting a collision detector corresponding to the prop model at the target position, wherein the collision detector is used for determining that the virtual prop is triggered when collision is detected, the collision detector is invisible, and the size of the collision detector is larger than or equal to that of the prop model.
Optionally, the third display unit is further configured to:
displaying a setting animation corresponding to the virtual prop at the target position;
and when the setting animation display is finished, displaying the prop model at the target position, wherein the virtual prop does not take effect during the setting animation display.
Optionally, the first control module 1204 includes:
a second determining unit, configured to determine, if the virtual prop is triggered by the target object, each object within a prop effect effective range corresponding to the virtual prop, where the prop effect effective range is greater than or equal to a trigger range of the virtual prop;
and the control unit is used for reducing the life value of each object in the prop effect effective range and reducing the moving speed of each object in the prop effect effective range.
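A minimal sketch of the drag-to-place interaction handled by the receiving, adjusting, and determining units above follows; every object and method here (`prop_control`, `placement_range.clamp`, `camera.project`) is a hypothetical stand-in, not an interface defined by this application.

```python
def drag_place(prop_control, placement_range, camera):
    candidate = None
    # Receiving unit: consume drag events on the prop placement control.
    for touch in prop_control.drag_events():
        # Adjusting unit: project the touch into the virtual environment
        # and clamp the candidate placement area to the placement range.
        candidate = placement_range.clamp(camera.project(touch))
        prop_control.show_candidate_area(candidate)
    # Determining unit: when the touch stops, the second trigger operation
    # is considered received and the candidate area's current position
    # becomes the target position (or None if no drag occurred).
    return candidate
```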
To sum up, in this embodiment of the application, a virtual environment picture and a prop placement control are displayed on a user interface; when a first trigger operation on the prop placement control is received, a prop placement range is displayed in the virtual environment picture, and when a second trigger operation on the prop placement control is received, a virtual prop is displayed at a target position indicated by the second trigger operation, so that when the virtual prop is triggered by a target object, the life value of the target object is reduced and the moving speed of the target object is reduced. The virtual object is controlled to place the virtual prop in advance, and the virtual prop causes preset damage to the target object. Compared with the related art, in which the virtual object can only be controlled to use a virtual prop to attack other virtual objects in real time, the mode of controlling the virtual object to place the virtual prop in advance helps enrich the attack modes of the game, thereby improving the realism of the game.
Referring to fig. 13, a block diagram of a terminal 1300 according to an exemplary embodiment of the present application is shown. The terminal 1300 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 player, or an MP4 player. Terminal 1300 may also be referred to by other names such as user equipment or portable terminal.
In general, terminal 1300 includes: a processor 1301 and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1301 may also include an AI processor to process computational operations related to machine learning.
The memory 1302 may include one or more computer-readable storage media, which may be tangible and non-transitory. The memory 1302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 1302 is used to store at least one instruction for execution by processor 1301 to implement a method as provided by embodiments of the present application.
In some embodiments, terminal 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, touch display 1305, camera 1306, audio circuitry 1307, positioning component 1308, and power supply 1309.
Peripheral interface 1303 may be used to connect at least one peripheral associated with I/O (Input/Output) to processor 1301 and memory 1302. In some embodiments, processor 1301, memory 1302, and peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1304 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The touch display 1305 is used to display a UI. The UI may include graphics, text, icons, video, and any combination thereof. The touch display 1305 also has the capability to collect touch signals on or over its surface. The touch signal may be input to the processor 1301 as a control signal for processing. The touch display 1305 is used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one touch display 1305, provided on the front panel of terminal 1300; in other embodiments, there may be at least two touch displays 1305, disposed on different surfaces of terminal 1300 or in a folded design; in still other embodiments, touch display 1305 may be a flexible display disposed on a curved surface or on a folded surface of terminal 1300. Further, the touch display 1305 may be arranged in a non-rectangular irregular pattern, that is, a shaped screen. The touch display 1305 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1306 is used to capture images or video. Optionally, camera assembly 1306 includes a front camera and a rear camera. Generally, a front camera is used for realizing video call or self-shooting, and a rear camera is used for realizing shooting of pictures or videos. In some embodiments, the number of the rear cameras is at least two, and each of the rear cameras is any one of a main camera, a depth-of-field camera and a wide-angle camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize a panoramic shooting function and a VR (Virtual Reality) shooting function. In some embodiments, camera assembly 1306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1307 is used to provide an audio interface between the user and the terminal 1300. The audio circuit 1307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1301 for processing, or inputting the electric signals to the radio frequency circuit 1304 for realizing voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1300. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1301 or the radio frequency circuitry 1304 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 1307 may also include a headphone jack.
The positioning component 1308 is used for positioning the current geographic position of the terminal 1300 to implement navigation or LBS (Location Based Service). The positioning component 1308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
Power supply 1309 is used to provide power to various components in terminal 1300. The power source 1309 may be alternating current, direct current, disposable or rechargeable. When the power source 1309 comprises a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyro sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1301 may control the touch display screen 1305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1311. The acceleration sensor 1311 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1312 may detect the body direction and the rotation angle of the terminal 1300, and the gyro sensor 1312 may cooperate with the acceleration sensor 1311 to acquire a 3D motion of the user with respect to the terminal 1300. Processor 1301, based on the data collected by gyroscope sensor 1312, may perform the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1313 may be disposed on a side bezel of terminal 1300 and/or underlying touch display 1305. When the pressure sensor 1313 is provided on the side frame of the terminal 1300, a user's grip signal on the terminal 1300 can be detected, and left-right hand recognition or shortcut operation can be performed based on the grip signal. When the pressure sensor 1313 is disposed on the lower layer of the touch display 1305, it is possible to control an operability control on the UI interface according to a pressure operation of the user on the touch display 1305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1314 is used for collecting the fingerprint of the user to identify the identity of the user according to the collected fingerprint. When the identity of the user is identified as a trusted identity, the processor 1301 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1314 may be disposed on the front, back, or side of the terminal 1300. When a physical button or vendor Logo is provided on the terminal 1300, the fingerprint sensor 1314 may be integrated with the physical button or vendor Logo.
The optical sensor 1315 is used to collect the ambient light intensity. In one embodiment, the processor 1301 can control the display brightness of the touch display screen 1305 according to the intensity of the ambient light collected by the optical sensor 1315. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1305 is increased; when the ambient light intensity is low, the display brightness of the touch display 1305 is turned down. In another embodiment, the processor 1301 can also dynamically adjust the shooting parameters of the camera assembly 1306 according to the ambient light intensity collected by the optical sensor 1315.
Proximity sensor 1316, also known as a distance sensor, is typically disposed on the front face of terminal 1300. Proximity sensor 1316 is used to collect the distance between the user and the front face of terminal 1300. In one embodiment, the processor 1301 controls the touch display 1305 to switch from the bright screen state to the dark screen state when the proximity sensor 1316 detects that the distance between the user and the front face of the terminal 1300 gradually decreases; the processor 1301 controls the touch display 1305 to switch from the dark screen state to the bright screen state when the proximity sensor 1316 detects that the distance between the user and the front face of the terminal 1300 gradually increases.
Those skilled in the art will appreciate that the configuration shown in fig. 13 is not intended to be limiting with respect to terminal 1300 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
The present application further provides a computer-readable storage medium, where at least one instruction, at least one program, a code set, or a set of instructions is stored in the computer-readable storage medium, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the method for controlling a virtual object to use a prop according to any of the foregoing embodiments.
The application also provides a computer program product which, when run on a computer, causes the computer to execute the method for controlling the virtual object to use the prop provided by the foregoing method embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method of controlling a prop used by a virtual object, the method comprising:
displaying a user interface, wherein the user interface comprises a virtual environment picture and a prop placement control, and the virtual environment picture is a picture for observing a virtual environment from the visual angle of a virtual object;
when a first trigger operation on the prop placement control is received, displaying a prop placement range in the virtual environment picture;
when a second trigger operation on the prop placement control is received, displaying a virtual prop at a target position indicated by the second trigger operation, wherein the target position is located in the prop placement range;
and if the virtual prop is triggered by a target object, reducing the life value of the target object and reducing the moving speed of the target object.
2. The method according to claim 1, wherein the displaying a prop placement range in the virtual environment screen when receiving a first trigger operation on the prop placement control comprises:
when the first trigger operation on the prop placement control is received, acquiring a first position of the virtual object in the virtual environment;
and displaying a prop placement range map in the virtual environment picture according to the first position, wherein the prop placement range map is used for representing the prop placement range.
3. The method of claim 2, wherein the prop placement range is a closed range surrounding the periphery of the first position;
before displaying the item placement range map in the virtual environment picture according to the first position, the method further comprises:
controlling a camera model in the virtual environment to move from a first shooting position to a second shooting position, wherein when the camera model is located at the second shooting position, areas of the virtual object in all directions in the virtual environment picture are visible;
after the virtual item is displayed at the target location indicated by the second trigger operation, the method further comprises:
and controlling the camera model in the virtual environment to move from the second shooting position to the first shooting position.
4. The method according to any one of claims 1 to 3, wherein the displaying of the placement range of the prop in the virtual environment screen further comprises:
and if the prop placement range contains a placed virtual prop, displaying the trigger range of the placed virtual prop in the prop placement range.
5. The method according to any one of claims 1 to 3, wherein a candidate placement area is displayed in the prop placement range;
when a second trigger operation on the prop placement control is received, displaying the virtual prop at a target position indicated by the second trigger operation, wherein the method comprises the following steps:
receiving a dragging operation of the prop placing control;
adjusting the position of the candidate placement area in the prop placement range according to the dragging operation;
when the touch operation on the prop placement control is stopped, determining that the second trigger operation is received, and determining the current position of the candidate placement area as the target position;
displaying the virtual prop at the target location.
6. The method of claim 5, wherein said displaying the virtual prop at the target location comprises:
displaying a prop model corresponding to the virtual prop at the target position;
and setting a collision detector corresponding to the prop model at the target position, wherein the collision detector is used for determining that the virtual prop is triggered when collision is detected, the collision detector is invisible, and the size of the collision detector is larger than or equal to that of the prop model.
7. The method of claim 6, wherein the displaying the item model corresponding to the virtual item at the target location comprises:
displaying a setting animation corresponding to the virtual prop at the target position;
and when the setting animation display is finished, displaying the prop model at the target position, wherein the virtual prop does not take effect during the setting animation display.
8. The method of any of claims 1 to 3, wherein after displaying the virtual prop at the target location indicated by the second trigger operation, the method further comprises:
when the virtual prop meets a disappearance condition, stopping displaying the virtual prop at the target position, wherein the disappearance condition comprises at least one of the following: the placement duration of the virtual prop reaches a duration threshold, or the number of times the virtual prop is triggered reaches a count threshold.
9. The method according to any one of claims 1 to 3, wherein if the virtual item is triggered by a target object, decreasing the life value of the target object and decreasing the moving speed of the target object comprises:
if the virtual prop is triggered by the target object, determining each object in a prop effect effective range corresponding to the virtual prop, wherein the prop effect effective range is larger than or equal to the triggering range of the virtual prop;
and reducing the life value of each object in the effective range of the prop effect, and reducing the moving speed of each object in the effective range of the prop effect.
10. An apparatus for controlling a prop used by a virtual object, the apparatus comprising:
the system comprises a first display module, a second display module and a third display module, wherein the first display module is used for displaying a user interface, the user interface comprises a virtual environment picture and a prop placement control, and the virtual environment picture is a picture for observing a virtual environment from a visual angle of a virtual object;
the second display module is used for displaying a prop placing range in the virtual environment picture when receiving a first trigger operation on the prop placing control;
the third display module is used for displaying the virtual prop at a target position indicated by second trigger operation when the second trigger operation on the prop placing control is received, wherein the target position is located in the prop placing range;
and the first control module is used for reducing the life value of the target object and reducing the moving speed of the target object if the virtual prop is triggered by the target object.
11. The apparatus of claim 10, wherein the second display module comprises:
the acquiring unit is used for acquiring a first position of the virtual object in the virtual environment when the first triggering operation on the prop placing control is received;
and the first display unit is used for displaying a prop placement range map in the virtual environment picture according to the first position, and the prop placement range map is used for representing the prop placement range.
12. The device of claim 11, wherein the prop placement range is a closed range surrounding the periphery of the first position;
the device further comprises:
the second control module is used for controlling a camera model in the virtual environment to move from a first shooting position to a second shooting position, wherein when the camera model is located at the second shooting position, areas of the virtual object in all directions in the virtual environment picture are visible;
the device further comprises:
and the third control module is used for controlling the camera model in the virtual environment to move from the second shooting position to the first shooting position.
13. The apparatus of any of claims 10 to 12, wherein the second display module further comprises:
and the second display unit is used for displaying the trigger range of the placed virtual prop in the prop placement range if the prop placement range contains a placed virtual prop.
14. A terminal, characterized in that the terminal comprises: a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement a method of controlling the use of a prop by a virtual object as claimed in any one of claims 1 to 9.
15. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement a method of controlling the use of a prop by a virtual object as claimed in any one of claims 1 to 9.
CN201910995534.5A 2019-10-18 2019-10-18 Method, device, terminal and storage medium for controlling virtual object to use prop Pending CN110694273A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910995534.5A CN110694273A (en) 2019-10-18 2019-10-18 Method, device, terminal and storage medium for controlling virtual object to use prop

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910995534.5A CN110694273A (en) 2019-10-18 2019-10-18 Method, device, terminal and storage medium for controlling virtual object to use prop

Publications (1)

Publication Number Publication Date
CN110694273A true CN110694273A (en) 2020-01-17

Family

ID=69200660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910995534.5A Pending CN110694273A (en) 2019-10-18 2019-10-18 Method, device, terminal and storage medium for controlling virtual object to use prop

Country Status (1)

Country Link
CN (1) CN110694273A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150157940A1 (en) * 2013-12-11 2015-06-11 Activision Publishing, Inc. System and method for playing video games on touchscreen-based devices
JP6244445B1 (en) * 2016-12-22 2017-12-06 株式会社コロプラ Information processing method, apparatus, and program for causing computer to execute information processing method
CN108815848A (en) * 2018-05-31 2018-11-16 腾讯科技(深圳)有限公司 Virtual objects display methods, device, electronic device and storage medium
CN108854068A (en) * 2018-06-27 2018-11-23 网易(杭州)网络有限公司 Display control method and device, storage medium and terminal in game

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
只言片语SAMA: "[Honor of Kings] Huang Zhong all-around classic commentary video tutorial", iQIYI Video *
夜清凉夏伟志: "How to place Huang Zhong's second-skill mines in Honor of Kings? Watch the video!", iQIYI Video *
林晓舒、艺心益意HARRY、帝宸清羽、闫毅航、波波: "Why do some heroes in Overwatch switch to a third-person view when casting abilities?", Zhihu HTTPS://WWW.ZHIHU.COM/QUESTION/48133468 *
睡不醒的某某阳, SKYREACH-: "Overwatch related reference material", Bilibili *
硫酸、W牧之、小白上王者、17173: "Summary of materials introducing Huang Zhong's second skill", online materials *
科技速读: "How to place Huang Zhong's second-skill mines in Honor of Kings, hands-on demonstration on an XPLAY6!", Tencent Video *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111298441A (en) * 2020-01-21 2020-06-19 腾讯科技(深圳)有限公司 Using method, device, equipment and storage medium of virtual prop
CN111330277A (en) * 2020-03-03 2020-06-26 腾讯科技(深圳)有限公司 Virtual object control method, device, equipment and storage medium
CN111420402A (en) * 2020-03-18 2020-07-17 腾讯科技(深圳)有限公司 Virtual environment picture display method, device, terminal and storage medium
CN111420402B (en) * 2020-03-18 2021-05-14 腾讯科技(深圳)有限公司 Virtual environment picture display method, device, terminal and storage medium
CN111589145A (en) * 2020-04-22 2020-08-28 腾讯科技(深圳)有限公司 Virtual article display method, device, terminal and storage medium
CN111589125A (en) * 2020-04-22 2020-08-28 腾讯科技(深圳)有限公司 Virtual object control method and device, computer equipment and storage medium
CN111589125B (en) * 2020-04-22 2022-06-10 腾讯科技(深圳)有限公司 Virtual object control method and device, computer equipment and storage medium
CN111589149A (en) * 2020-05-15 2020-08-28 腾讯科技(深圳)有限公司 Using method, device, equipment and storage medium of virtual prop
JP7438378B2 (en) 2020-05-15 2024-02-26 テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド Virtual item display method, device, equipment and computer program
CN111701244A (en) * 2020-06-18 2020-09-25 腾讯科技(深圳)有限公司 Control method and device of virtual prop, storage medium and electronic equipment
CN111701244B (en) * 2020-06-18 2021-06-08 腾讯科技(深圳)有限公司 Control method and device of virtual prop, storage medium and electronic equipment
CN112044084B (en) * 2020-09-04 2022-06-28 腾讯科技(深圳)有限公司 Virtual item control method, device, storage medium and equipment in virtual environment
CN112044084A (en) * 2020-09-04 2020-12-08 腾讯科技(深圳)有限公司 Virtual item control method, device, storage medium and equipment in virtual environment
CN112169338A (en) * 2020-10-15 2021-01-05 网易(杭州)网络有限公司 Control method and device for sphere motion, storage medium and computer equipment
CN112169338B (en) * 2020-10-15 2024-06-11 网易(杭州)网络有限公司 Sphere motion control method and device, storage medium and computer equipment
CN112286362A (en) * 2020-11-16 2021-01-29 Oppo广东移动通信有限公司 Method, system and storage medium for displaying virtual prop in real environment picture
WO2022143142A1 (en) * 2020-12-30 2022-07-07 腾讯科技(深圳)有限公司 Control method and apparatus for human-computer interaction interface, device, and medium
JP2023066519A (en) * 2021-10-29 2023-05-16 グリー株式会社 Information processing system, information processing method, and computer program
JP7333564B2 (en) 2021-10-29 2023-08-25 グリー株式会社 Information processing system, information processing method and computer program
US11989811B2 (en) 2021-10-29 2024-05-21 Gree, Inc. Information processing system, information processing method, and computer program

Similar Documents

Publication Publication Date Title
CN110694261B (en) Method, terminal and storage medium for controlling virtual object to attack
CN111589131B (en) Control method, device, equipment and medium of virtual role
CN110413171B (en) Method, device, equipment and medium for controlling virtual object to perform shortcut operation
CN110694273A (en) Method, device, terminal and storage medium for controlling virtual object to use prop
CN111265869B (en) Virtual object detection method, device, terminal and storage medium
CN110448891B (en) Method, device and storage medium for controlling virtual object to operate remote virtual prop
CN109529319B (en) Display method and device of interface control and storage medium
CN110755841B (en) Method, device and equipment for switching props in virtual environment and readable storage medium
CN110613938B (en) Method, terminal and storage medium for controlling virtual object to use virtual prop
CN111589128B (en) Operation control display method and device based on virtual scene
CN111414080B (en) Method, device and equipment for displaying position of virtual object and storage medium
CN111035918A (en) Reconnaissance interface display method and device based on virtual environment and readable storage medium
CN110721469B (en) Method, terminal and medium for shielding virtual object in virtual environment
CN111420402B (en) Virtual environment picture display method, device, terminal and storage medium
CN110585695B (en) Method, apparatus, device and medium for using near-war property in virtual environment
CN111481934B (en) Virtual environment picture display method, device, equipment and storage medium
CN111589146A (en) Prop operation method, device, equipment and storage medium based on virtual environment
CN111389005B (en) Virtual object control method, device, equipment and storage medium
CN111589136B (en) Virtual object control method and device, computer equipment and storage medium
CN111672106B (en) Virtual scene display method and device, computer equipment and storage medium
CN110448908B (en) Method, device and equipment for applying sighting telescope in virtual environment and storage medium
CN111589127A (en) Control method, device and equipment of virtual role and storage medium
CN113577765B (en) User interface display method, device, equipment and storage medium
CN113041620B (en) Method, device, equipment and storage medium for displaying position mark
CN113289331A (en) Display method and device of virtual prop, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40020241

Country of ref document: HK